Rust on Compute@Edge

IMPORTANT: The content on this page is written for version 0.7.0 of the fastly crate. If you have previously used this example, your project may be using an older SDK version. View the changelog to learn how to migrate your program.

WARNING: This information is part of a limited availability release. Portions of this API may be subject to changes and improvements over time. Fields marked deprecated may be removed in the future and their use is discouraged. For more information, see our product and feature lifecycle descriptions.

Compute@Edge supports application code written in Rust, a fast and memory-efficient language for building performant applications.

Rust is the most mature Compute@Edge SDK and provides access to the widest range of features. However, if you are new to strongly typed languages, you may find the learning curve steep. If you are already familiar with JavaScript or TypeScript, consider trying out our beta support for AssemblyScript.

Project layout

If you don't yet have a working toolchain and Compute@Edge service set up, start by getting set up.

At the end of the initialization process, the current working directory will contain a file tree resembling the following:

├── .cargo
├── .gitignore
├── Cargo.lock
├── Cargo.toml
├── README.md
├── fastly.toml
├── rust-toolchain
└── src
    └── main.rs

The most important file to work on is src/main.rs, which contains the logic you'll run on incoming requests. If you initialized your project from the default starter template, the contents of this file should match the one in the template's repo. The other files include:

  • Cargo metadata: Cargo.toml and Cargo.lock describe the dependencies of your package, managed using Cargo, Rust's package manager.
  • Fastly metadata: The fastly.toml file contains metadata required by Fastly to deploy your package to a Fastly service. It is generated by the init command and, for the moment, should not be edited manually.
  • Rust toolchain: The rust-toolchain file pins the toolchain to a specific version. This is currently required, but may eventually no longer be needed.

Main interface

The most common way to start a Compute@Edge program is to define a main() function with the #[fastly::main] attribute and the following type signature:

src/main.rs
Rust
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // ...
}

HINT: Use of the #[fastly::main] macro creates a simple interface in which the main function of your program receives a request and returns a response. However, there are use cases where this is not necessarily a good fit, such as sending (or beginning to send) the downstream response to the client before the Compute@Edge program finishes. If you wish, you may define a conventional main function and use fastly::Request::from_client and fastly::Response::send_to_client (or fastly::Response::stream_to_client) instead.
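
For example, here is a minimal sketch of that lower-level interface (the chunked body content is illustrative, and exact streaming APIs may differ slightly between SDK versions):

use fastly::{Error, Request, Response};
use std::io::Write;

fn main() -> Result<(), Error> {
    // Take the client request directly, without #[fastly::main].
    let _req = Request::from_client();

    // Begin sending the response immediately; stream_to_client returns a
    // streaming body handle that implements std::io::Write.
    let mut body = Response::from_status(200).stream_to_client();
    writeln!(body, "first chunk")?;
    writeln!(body, "second chunk, sent while the program keeps running")?;

    // Close the stream to signal that the response is complete.
    body.finish()?;
    Ok(())
}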

The main fastly crate provides the core Request, Response, Body, and Error types referenced here. The program will be invoked for each request that Fastly receives for a domain attached to your service, and it must return a response that can be served to the client.

Communicating with backend servers and the Fastly cache

A fastly::Request can be forwarded to any backend defined on your service. Backends can be created via the Fastly CLI, API, or web interface, and are referenced by name. If you specify a backend hostname as part of completing the fastly compute deploy wizard, it will be named the same as the hostname or IP address, but with . replaced with _ (e.g., 123_456_789_123). It's a good idea to define backend names as constants:

const BACKEND_NAME: &str = "my_backend_name";

And then reference them when you want to forward a request to a backend:

src/main.rs
Rust
req.set_ttl(60);
Ok(req.send(BACKEND_NAME)?)

Requests forwarded to a backend will transit the Fastly cache, and the response may come from cache. Where a request doesn't find a matching result in cache, it will be sent to the origin, and the response will be cached based on the freshness rules determined from its HTTP response headers (unless overridden, as in the example above, by Request::set_ttl).
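
Cache behavior can be overridden more coarsely as well. For instance, a brief sketch (reusing req and BACKEND_NAME from above; set_pass belongs to the same family of cache override methods as set_ttl) that bypasses the cache entirely:

// Bypass the Fastly cache for this request; the response is fetched
// from the backend and is not cached.
req.set_pass(true);
let beresp = req.send(BACKEND_NAME)?;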

In a future release, it will be possible to interact with the cache and the network separately.

Responses returned from the send method are compatible with the return type of main, so a minimal implementation of a Compute@Edge service that acts as a standard HTTP caching proxy between the client and the backend is:

src/main.rs
Rust
use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "my_backend_name";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(req.send(BACKEND_NAME)?)
}

Composing requests and responses

In addition to the request passed into main() and responses returned from send(), requests and responses can also be constructed from scratch. This is useful if you want to make an arbitrary API call that is not derived from the client request, or if you want to respond to the client without making any backend fetch at all.

The Request struct can be used as a builder, to chain methods that customize the request:

let req = Request::post("https://example.com/api/getFlags").with_header("some-header", "someValue");
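
The same builder style extends to request bodies, and a constructed request is sent just like the client request. A short sketch (the backend name and JSON payload here are hypothetical):

let resp = Request::post("https://example.com/api/getFlags")
    .with_header("some-header", "someValue")
    .with_body(r#"{"user": "example"}"#)
    .send("api_backend")?; // hypothetical backend name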

Similarly, Response has several static methods that create a new response:

src/main.rs
Rust
Ok(Response::from_body("Hi from the edge"))
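
These constructors chain with builder methods too, so you can set a status code and headers in the same expression. A sketch (the body text is illustrative):

let resp = Response::from_body("<p>Hi from the edge</p>")
    .with_status(200)
    .with_content_type(fastly::mime::TEXT_HTML_UTF_8);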

Parsing and transforming responses

Requests and responses in Compute@Edge are streams, which allows large payloads to move through your service without buffering or running out of memory. Conversely, running methods such as into_string on a Body will force the stream to be consumed entirely into memory. This can be appropriate where a response is known to be small or needs to be complete to be parsable.

This example will read a backend response into memory, replace every occurrence of "cat" with "dog" in the body, and then create a new body with the transformed string:

let api_req = Request::get("https://host/api/checkAuth");
let mut beresp = api_req.send("example_backend")?;
let beresp_body = beresp.take_body();
// Take care! into_string() will consume the entire body into memory, and replace()
// will further double the memory requirement
let new_body = beresp_body.into_string().replace("cat", "dog");
beresp.set_body(new_body);

However, it is often better to avoid buffering responses in this way. For some use cases, peeking at the beginning of the stream is enough; this example identifies when a response stream begins with the WebAssembly 'magic number':

const MAGIC: &[u8] = b"\0asm";
let prefix = beresp_body.get_prefix_mut(MAGIC.len());
if prefix.as_slice() == MAGIC {
    println!("might be Wasm!");
}

Parsing responses in Rust usually benefits from well-tested dependencies such as serde_json, which works well with Compute@Edge. This example tries to consume the body as a JSON value, but only up to the first 4KiB. Using take() here avoids writing the bytes back to the body unnecessarily:

let prefix = beresp_body.get_prefix_mut(4096).take();
if let Ok(_json) = serde_json::from_slice::<serde_json::Value>(&prefix) {
    println!("valid json!");
}

Other crates known to be useful for transforming or composing responses include lol_html and horrorshow.
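
As an illustration, here is a minimal lol_html sketch (assuming the crate has been added to your Cargo.toml; the HTML snippet and selector are hypothetical) that upgrades link targets in a buffered body:

use lol_html::{element, rewrite_str, RewriteStrSettings};

let output = rewrite_str(
    r#"<div><a href="http://example.com/">example</a></div>"#,
    RewriteStrSettings {
        element_content_handlers: vec![
            // Rewrite each anchor's href to use HTTPS.
            element!("a[href]", |el| {
                let href = el.get_attribute("href").unwrap().replace("http:", "https:");
                el.set_attribute("href", &href)?;
                Ok(())
            }),
        ],
        ..RewriteStrSettings::default()
    },
)?;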

Using edge dictionaries

Fastly allows you to configure edge dictionaries on your Compute@Edge services. These can be accessed using the Dictionary interface in the fastly crate.

src/main.rs
Rust
let config = Dictionary::open("example_dictionary");
match config.get("response_text") {
    Some(text) => Ok(Response::from_body(text)),
    _ => Ok(Response::from_body("No response text set")),
}

Logging

The log-fastly crate provides a standardized interface for sending logs to Fastly real-time logging, which can be attached to many third party logging providers. Before adding logging code to your Compute@Edge program, set up your log endpoint using the CLI, API, or web interface. Log endpoints are referenced in your code by name:

log_fastly::init_simple("my_endpoint_name", log::LevelFilter::Warn);
log::warn!("This will be written to my_endpoint...");
log::info!("...but this won't");

If your code panics, output will be emitted to stderr. You can override this behavior by specifying an endpoint to use for Rust panics with fastly::log::set_panic_endpoint:

fastly::log::set_panic_endpoint("my_error_endpoint").unwrap();
panic!("oh no!");
// => logs "panicked at 'oh no!', your/file.rs:line:col" to "my_error_endpoint"

Using dependencies

Compute@Edge compiles your code to WebAssembly and uses the WebAssembly System Interface (WASI). Because of this, it supports WASI-compatible Rust crates. To get an idea of whether a crate will work with WASI, build it using cargo build --target=wasm32-wasi. If the build fails, the crate is not currently compatible. Even if it succeeds, note that some crates use conditional compilation to exclude functionality on Wasm, or include stub implementations that fail at runtime.

Access to the client request, creating requests to backends, the Fastly cache, and other Fastly features are exposed via Fastly's own public crates:

  • fastly: the main SDK crate, providing the Request, Response, Body, and Dictionary interfaces used throughout this page.
  • fastly-sys: the low-level ABI bindings used by the fastly crate; you will not normally depend on this directly.
  • log-fastly: a log-compatible interface to Fastly real-time logging (see Logging above).

Testing and debugging Rust

You may choose to write unit tests for small, independent pieces of your Rust code intended for Compute@Edge. However, Compute@Edge apps depend heavily on and interact with Fastly features and your own systems. This can make an integration testing strategy that focuses on a smaller number of high-impact tests more valuable.
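
For example, pure logic such as the string transformation shown earlier can be unit tested with standard Rust tooling, with no Fastly environment involved (the function here is a hypothetical refactoring of that example):

// Extracting the transformation into a plain function makes it testable.
fn rewrite_pets(input: &str) -> String {
    input.replace("cat", "dog")
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn replaces_cats_with_dogs() {
        assert_eq!(rewrite_pets("my cat"), "my dog");
    }
}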

To learn more about testing Compute@Edge applications, see Testing & debugging.