Rate limiting

IMPORTANT: The content on this page uses the following versions of Compute SDKs: Rust SDK: 0.9.11 (current is 0.10.0, see changes), Go SDK: 1.3.0 (current is 1.3.1, see changes)

Use rate counters and penalty boxes to stop high-volume automated attacks against your website.

Instructions

IMPORTANT: This tutorial assumes that you already have the Fastly CLI installed and are logged in. If you are new to the platform, read our Getting Started guide.

Initialize a project

If you haven't already created a Compute project, run fastly compute init in a new directory in your terminal and follow the prompts to provision a new service using the empty starter kit in your language of choice:

NOTE: Edge Rate Limiting is currently only available in the Rust and Go SDKs. For Fastly VCL, see the Rate Limiting concept page.

$ mkdir edge-rate-limiting && cd edge-rate-limiting
$ fastly compute init
Creating a new Compute project.
Press ^C at any time to quit.
Name: [edge-rate-limiting]
Description: A Compute service to protect backends from clients sending too many requests.
Author (email): [developer@example.com]
Language:
(Find out more about language support at https://www.fastly.com/documentation/guides/compute)
[1] Rust
[2] JavaScript
[3] Go
[4] Other ('bring your own' Wasm binary)
Choose option: [1] 1
Starter kit:
[1] Default starter for Rust
A basic starter kit that demonstrates routing, simple synthetic responses and overriding caching rules.
https://github.com/fastly/compute-starter-kit-rust-default
[2] Empty starter for Rust
An empty starter kit project template.
https://github.com/fastly/compute-starter-kit-rust-empty
Choose option or paste git URL: [1] 2

You now have an empty Compute project that returns an empty HTTP response to all requests. However, the goal of this tutorial is to protect a backend, so let's start by forwarding all requests to it.

Proxy requests to backend

For the purposes of this tutorial we will use http-me.glitch.me, a simple HTTP server that returns predictable responses. You can use any other HTTP server for this purpose. In production this would be the origin server that you are protecting.

First, update the code in the generated source file (src/main.rs) to send the request to the backend and return the response to the client. The code samples in this tutorial are written in Rust. Note the name of the backend (protected_backend in this example), which you can change to a name that suits your use case.

use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Forward the client request to the backend and return its response.
    let beresp = req.send("protected_backend")?;
    Ok(beresp)
}

Now you can deploy your service to Fastly using the fastly compute publish command. Follow the prompts to create a service and define your backend, making sure to use the same name that was used in the code above.

$ fastly compute publish
✓ Running [scripts.build]
✓ Creating package archive
SUCCESS: Built package (pkg/edge-rate-limiting.tar.gz)
There is no Fastly service associated with this package. To connect to an existing service add the Service ID to the fastly.toml
file, otherwise follow the prompts to create a service now.
Press ^C at any time to quit.
Create new service: [y/N] y
Service name: [edge-rate-limiting] edge-rate-limiting
✓ Creating service
Domain: [some-random-words.edgecompute.app]
Backend (hostname or IP address, or leave blank to stop adding backends): http-me.glitch.me
Backend port number: [443]
Backend name: [backend_1] protected_backend
Backend (hostname or IP address, or leave blank to stop adding backends):
✓ Creating domain 'some-random-words.edgecompute.app'
✓ Creating backend 'protected_backend' (host: http-me.glitch.me, port: 443)
✓ Uploading package
✓ Activating service (version 1)
✓ Checking service availability (status: 200)
Manage this service at:
https://manage.fastly.com/configure/services/uPBMaxGHqhPPRwAUIQWU43
View this service at:
https://some-random-words.edgecompute.app
SUCCESS: Deployed package (service uPBMaxGHqhPPRwAUIQWU43, version 1)

If you visit your service's URL in a web browser, you should see the response from the backend server. Success! Now it's time to add rate limiting to protect the backend from high-volume automated attacks.
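
You can also check the service from the command line with curl, substituting the domain assigned to your own service:

$ curl -i https://some-random-words.edgecompute.app/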

Add rate limiting

Edge Rate Limiting is based on two primitives: a rate counter, which tracks the number of requests from a client, and a penalty box, which blocks clients that exceed a certain threshold. Add the following code to the top of your request handler function to open a rate counter and a penalty box. In the Rust SDK these types live in the fastly::erl module, so you will also need use fastly::erl::{ERL, Penaltybox, RateCounter, RateWindow}; and use std::time::Duration; among your imports.

// Open the rate counter and penalty box.
let rc = RateCounter::open("rc");
let pb = Penaltybox::open("pb");

Now that you have a rate counter and penalty box, you can combine these into an edge rate limiter, which provides a check_rate method to keep track of requests and determine if the client should be allowed to proceed.

// Open the Edge Rate Limiter using the rate counter and penalty box.
let limiter = ERL::open(rc, pb);

To keep track of a client's requests, you need a way to identify the requester. There are many ways to do this, but for the purposes of this tutorial we will use the client's IP address. This is not a perfect solution, as many clients may share the same IP address, but it is a good starting point. See alternative methods of identifying clients on the Rate Limiting concepts page.

// Rate limit based upon the client's IP address.
let entry = req.get_client_ip_addr().unwrap().to_string();
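
If your clients send a stable identifier, you could rate limit on that instead of the IP address. For example, here is a sketch that prefers a hypothetical x-api-key header and falls back to the client IP when the header is absent:

// Sketch: use a hypothetical "x-api-key" header as the rate limiting key
// when present, falling back to the client's IP address otherwise.
let entry = req
    .get_header_str("x-api-key")
    .map(str::to_string)
    .unwrap_or_else(|| req.get_client_ip_addr().unwrap().to_string());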

Now that you have an edge rate limiter, as well as a way to identify the client, you can use its check_rate method to determine whether the client should be allowed to proceed:

// Check if the request should be blocked and update the rate counter.
let result = limiter.check_rate(
    &entry,                   // The client to rate limit.
    1,                        // The number of requests this execution counts as.
    RateWindow::SixtySecs,    // The time window to count requests within.
    100,                      // The maximum average number of requests per second calculated over the rate window.
    Duration::from_secs(300), // The duration to block the client if the rate limit is exceeded.
);
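
With these values, a client averaging more than 100 requests per second over the trailing 60-second window (that is, more than roughly 6,000 requests within the window) is placed in the penalty box and blocked for 300 seconds.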

Finally, you can act upon the result by responding with a suitable error message. (StatusCode below is imported from fastly::http.)

let is_blocked: bool = match result {
    Ok(is_blocked) => is_blocked,
    Err(err) => {
        // Failed to check the rate. This is unlikely, but it's up to you
        // whether you'd like to fail open or closed in this case.
        eprintln!("Failed to check the rate: {:?}", err);
        false
    }
};
if is_blocked {
    return Ok(Response::from_status(StatusCode::TOO_MANY_REQUESTS)
        .with_body_text_plain("You have sent too many requests recently. Try again later."));
}
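
At this point it is worth seeing how the pieces fit together. The following is a sketch of the complete src/main.rs assembled from the snippets above, assuming the import paths noted earlier (fastly::erl and fastly::http::StatusCode):

use std::time::Duration;

use fastly::erl::{Penaltybox, RateCounter, RateWindow, ERL};
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Open the rate counter and penalty box, and combine them into a rate limiter.
    let rc = RateCounter::open("rc");
    let pb = Penaltybox::open("pb");
    let limiter = ERL::open(rc, pb);

    // Rate limit based upon the client's IP address.
    let entry = req.get_client_ip_addr().unwrap().to_string();

    // Check if the request should be blocked and update the rate counter.
    let result = limiter.check_rate(&entry, 1, RateWindow::SixtySecs, 100, Duration::from_secs(300));

    let is_blocked: bool = match result {
        Ok(is_blocked) => is_blocked,
        Err(err) => {
            // Fail open if the rate check itself fails.
            eprintln!("Failed to check the rate: {:?}", err);
            false
        }
    };
    if is_blocked {
        return Ok(Response::from_status(StatusCode::TOO_MANY_REQUESTS)
            .with_body_text_plain("You have sent too many requests recently. Try again later."));
    }

    // Forward the request to the protected backend.
    let beresp = req.send("protected_backend")?;
    Ok(beresp)
}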

Configure limits dynamically

Sometimes you may want to change the rate limiting parameters without redeploying your service. You can use the Fastly Config Store to store the limits and update them at runtime. The code below assumes a config store named rate_limit_config, with entries window, max_requests, and block_duration_secs, has been created and linked to your service (for example via the Fastly CLI or the web interface):

// Open the config store and get the rate limit configuration.
let config = fastly::ConfigStore::open("rate_limit_config");
// Parse the rate limit configuration.
let rate_limit_window: RateWindow = match config
    .get("window")
    .expect("no rate_limit_config/window configured")
    .as_str()
{
    "1" => RateWindow::OneSec,
    "10" => RateWindow::TenSecs,
    "60" => RateWindow::SixtySecs,
    _ => panic!("rate_limit_config/window is not a valid window"),
};
let rate_limit_max_requests: u32 = config
    .get("max_requests")
    .expect("no rate_limit_config/max_requests configured")
    .parse()
    .expect("rate_limit_config/max_requests is not a number");
let rate_limit_block_duration: Duration = Duration::from_secs(
    config
        .get("block_duration_secs")
        .expect("no rate_limit_config/block_duration_secs configured")
        .parse()
        .expect("rate_limit_config/block_duration_secs is not a number"),
);
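
Note that this code panics (and so fails the request) if an entry is missing or malformed. If you would rather fail open, one possible approach, sketched here rather than taken from the tutorial itself, is to fall back to hard-coded defaults:

// Sketch: fall back to default limits instead of panicking when the
// config store is missing an entry or holds an invalid value.
let rate_limit_window = match config.get("window").as_deref() {
    Some("1") => RateWindow::OneSec,
    Some("10") => RateWindow::TenSecs,
    _ => RateWindow::SixtySecs, // default: 60-second window
};
let rate_limit_max_requests: u32 = config
    .get("max_requests")
    .and_then(|v| v.parse().ok())
    .unwrap_or(100); // default: 100 requests per second
let rate_limit_block_duration = Duration::from_secs(
    config
        .get("block_duration_secs")
        .and_then(|v| v.parse().ok())
        .unwrap_or(300), // default: block for 300 seconds
);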

With these variables now available, you can update the check_rate method call to refer to the new rate limit configuration:

// Check if the request should be blocked and update the rate counter.
let result = limiter.check_rate(
    &entry,                    // The client to rate limit.
    1,                         // The number of requests this execution counts as.
    rate_limit_window,         // The time window to count requests within.
    rate_limit_max_requests,   // The maximum average number of requests per second calculated over the rate window.
    rate_limit_block_duration, // The duration to block the client if the rate limit is exceeded.
);

Deploy and test

Now that you have added rate limiting to your service and populated your config store with the relevant values, you can deploy the updated code using the fastly compute publish command.

$ fastly compute publish
✓ Verifying fastly.toml
✓ Identifying package name
✓ Identifying toolchain
✓ Running [scripts.build]
✓ Creating package archive
SUCCESS: Built package (pkg/edge-rate-limiting.tar.gz)
✓ Verifying fastly.toml
✓ Uploading package
✓ Activating service (version 2)
Manage this service at:
https://manage.fastly.com/configure/services/uPBMaxGHqhPPRwAUIQWU43
View this service at:
https://some-random-words.edgecompute.app
SUCCESS: Deployed package (service uPBMaxGHqhPPRwAUIQWU43, version 2)
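
To confirm that the rate limiter works, send a burst of requests and watch the status codes switch from 200 to 429 once the limit is exceeded. One simple way to do this from a shell is shown below (substitute your own domain); for a quick test, consider setting max_requests to a low value in your config store first, since a sequential curl loop may not reach 100 requests per second:

$ for i in $(seq 1 200); do curl -s -o /dev/null -w "%{http_code}\n" https://some-random-words.edgecompute.app/; done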