Rust on the Compute platform

The guidance on this page was tested with an older version (0.9.1) of the Rust SDK. It may still work with the latest version (0.9.9), but the change log may help if you encounter any issues.

The Compute platform supports application code written in Rust, a fast and memory-efficient language for building performant applications.

Project layout

If you don't yet have a working toolchain and Compute service set up, start by getting set up.

At the end of the initialization process, the current working directory will contain a file tree resembling the following:

├── .cargo
├── .gitignore
├── Cargo.lock
├── Cargo.toml
├── fastly.toml
├── rust-toolchain.toml
└── src

The most important file to work on is src/main.rs, which contains the logic you'll run on incoming requests. If you initialized your project from the default starter template, the contents of this file should match the one in the template's repo. The other files include:

  • Cargo metadata: Cargo.toml and Cargo.lock describe the dependencies of your package, managed using Cargo, Rust's package manager.
  • Fastly metadata: The fastly.toml file contains metadata required by Fastly to deploy your package to a Fastly service. It is generated by the fastly compute init command. Learn more about fastly.toml.
  • Rust toolchain: The rust-toolchain.toml file specifies that the compiler should produce a WebAssembly binary and which version of Rust to use.
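For reference, the generated fastly.toml looks roughly like the following sketch. The values here are illustrative; your file will contain your own project name, author, and service details:

```toml
# fastly.toml — metadata the Fastly CLI uses to build and deploy the package
authors = ["you@example.com"]
description = "An example Compute service"
language = "rust"
manifest_version = 2
name = "my-compute-project"
service_id = ""
```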

Main interface

The most common way to start a Compute program is to define a main() function with the #[fastly::main] attribute and the following type signature:

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // ...
}

HINT: Use of the #[fastly::main] macro creates a simple interface in which the main function of your program receives a request and returns a response. However, there are use cases where this is not necessarily a good fit, such as doing more work after starting to send the response to the client. If you wish, you may define a conventional main function and use fastly::Request::from_client and fastly::Response::send_to_client (or fastly::Response::stream_to_client) instead.

The main fastly crate provides the core Request, Response, Body, and Error types referenced here. The program will be invoked for each request that Fastly receives for a domain attached to your service, and it must return a response that can be served to the client.

Communicating with backend servers and the Fastly cache

A fastly::Request can be forwarded to any backend defined on your service. If you specify a backend hostname as part of completing the fastly compute deploy wizard, it will be named the same as the hostname or IP address, but with . replaced with _ (e.g., 151_101_129_57). It's a good idea to define backend names as constants:

const BACKEND_NAME: &str = "my_backend_name";

And then reference them when you want to forward a request to a backend:

req.set_ttl(60);
Ok(req.send(BACKEND_NAME)?)

Requests forwarded to a backend will typically transit the Fastly cache, and the response may come from cache. For more precise or explicit control over the Fastly edge cache see Caching content with Fastly.

The Rust SDK supports dynamic backends created at runtime using the BackendBuilder.
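As a sketch, a dynamic backend can be created with Backend::builder and passed directly to send. The backend name and origin host below are hypothetical, and dynamic backends must be enabled on your service:

```rust
use fastly::backend::Backend;

// "origin.example.com" is an illustrative origin host.
let backend = Backend::builder("dynamic_origin", "origin.example.com")
    .override_host("origin.example.com")
    .finish()?;

// Forward the request to the backend created at runtime.
let beresp = req.send(backend)?;
```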

Responses returned from the send method are compatible with the return type of main, so a minimal implementation of a Compute service that acts as a standard HTTP caching proxy between the client and the backend is:

use fastly::{Error, Request, Response};

const BACKEND_NAME: &str = "my_backend_name";

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    Ok(req.send(BACKEND_NAME)?)
}

Composing requests and responses

In addition to the request passed into main() and responses returned from send(), requests and responses can also be constructed. This is useful if you want to make an arbitrary API call that is not derived from the client request, or if you want to make a response to the client without making any backend fetch at all.

The Request struct can be used as a builder, to chain methods that customize the request:

let req = Request::post("").with_header("some-header", "someValue");

Similarly, Response has several static methods that create a new response:

Ok(Response::from_body("Hi from the edge"))

Parsing and transforming responses

Requests and responses in Compute services are streams, which allows large payloads to move through your service without buffering or running out of memory. Conversely, running methods such as into_string on a Body will force the stream to be consumed entirely into memory. This can be appropriate where a response is known to be small or needs to be complete to be parsable.

This example will read a backend response into memory, replace every occurrence of "cat" with "dog" in the body, and then create a new body with the transformed string:

let api_req = Request::get("https://host/api/checkAuth");
let mut beresp = api_req.send("example_backend")?;
let beresp_body = beresp.take_body();
// Take care! into_string() will consume the entire body into memory, and replace()
// will further double the memory requirement
let new_body = beresp_body.into_string().replace("cat", "dog");

However, it is often better to avoid buffering responses in this way. Peeking at the beginning of the stream can be useful for some use cases; this example identifies when a response stream begins with the WebAssembly 'magic number':

const MAGIC: &[u8] = b"\0asm";
let prefix = beresp_body.get_prefix_mut(MAGIC.len());
if prefix.as_slice() == MAGIC {
    println!("might be Wasm!");
}

Parsing responses in Rust usually benefits from well-tested dependencies such as serde_json, which works well on the Compute platform. This example tries to consume the body as a JSON value, but only up to the first 4KiB. Using take() here avoids writing the bytes back to the body unnecessarily:

let prefix = beresp_body.get_prefix_mut(4096).take();
if let Ok(_json) = serde_json::from_slice::<serde_json::Value>(&prefix) {
    println!("valid json!");
}

Other crates known to be useful for transforming or composing responses include lolhtml and horrorshow.


Fastly can compress and decompress content automatically, and it is often easier to use these features than to try to perform compression or decompression within your Rust code. Learn more about compression with Fastly.

Using edge data

Fastly allows you to attach various forms of data stores to your services, both for dynamic configuration and for storing data at the edge. The Rust SDK exposes the kv_store, config_store, and secret_store modules to allow access to these APIs.

All edge data resources are account-level, service-linked resources, allowing a single store to be accessed from multiple Fastly services.
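For example, a config store linked to the service can be read at request time. The store and key names in this sketch are hypothetical:

```rust
use fastly::ConfigStore;

// "settings" is a hypothetical config store linked to this service.
let store = ConfigStore::open("settings");
if let Some(greeting) = store.get("greeting") {
    println!("greeting from config store: {}", greeting);
}
```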


Logging

The log-fastly crate provides a standardized interface for sending logs to Fastly real-time logging, which can be attached to many third-party logging providers. Log endpoints are referenced in your code by name:

log_fastly::init_simple("my_endpoint_name", log::LevelFilter::Warn);
log::warn!("This will be written to my_endpoint...");
log::info!("...but this won't");

If your code panics, output will be emitted to stderr. You can override this behavior by specifying an endpoint to use for Rust panics with fastly::log::set_panic_endpoint:

fastly::log::set_panic_endpoint("my_error_endpoint")?;
panic!("oh no!");
// => logs "panicked at 'oh no', your/" to "my_error_endpoint"

Using dependencies

The Compute build process compiles your code to WebAssembly and uses the WebAssembly System Interface (WASI). Because of this, it supports WASI-compatible Rust crates. To get an idea of whether a crate will work with WASI, build it using cargo build --target=wasm32-wasi. If it fails, it is not currently compatible. If it succeeds, still note that some crates may use conditional compilation to exclude functionality on Wasm, or include stub implementations that fail at runtime.

Our Fiddle tool allows the use of a subset of crates that we have tested and confirmed will work on the Compute platform.

This is a tiny fraction of the crates which will work on the Compute platform, but these are the most commonly useful crates when building applications.

Access to the client request, creating requests to backends, the Fastly cache, and other Fastly features are exposed via Fastly's own public crates.

Testing and debugging

Logging is the main mechanism to debug Compute programs. Log output from live services can be monitored via live log tailing. The local test server and Fastly Fiddle display all log output automatically. See Testing & debugging for more information about choosing an environment in which to test your program.

Most common logging requirements involve HTTP requests and responses. It's important to do this in a way that doesn't affect the main program logic, since consuming a request or response body can only be done once. The following example demonstrates a println! statement for request headers, response headers, request body and response body:
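A sketch of such an approach follows. The backend name and the prefix length are illustrative, and the exact signature of try_get_body_prefix_str may vary between SDK versions:

```rust
// Log the request line and a prefix of the request body without consuming it.
println!("request: {} {}", req.get_method(), req.get_url());
println!("request body prefix: {:?}", req.try_get_body_prefix_str(1024));

let mut beresp = req.send("example_backend")?;

// Log the response status and a prefix of the response body.
println!("response status: {}", beresp.get_status());
println!("response body prefix: {:?}", beresp.try_get_body_prefix_str(1024));

Ok(beresp)
```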

Since the bodies of HTTP requests and responses in Compute services are streams, we are using the try_get_body_prefix_str method to 'spy' on the response without consuming it.

Unit testing

Due to Fastly's custom WASI hostcalls, some setup is required to run Rust unit tests in the way you might be used to. By default, the environment created by cargo test doesn't expose the Fastly hostcalls, and instead panics when there is an attempt to read a value from a type that is provided by the fastly crate.

In order to provide these types, you can use a special run mode on our local development server to run each individual test, and use cargo-nextest to handle the surrounding orchestration. To do that, you will need to install the local testing server independently of the Fastly CLI and modify your Cargo.toml. Full instructions can be found in the testing server's GitHub repo. This will enable you to use cargo nextest run to run unit tests that use the fastly crate.
