Testing and debugging on Compute@Edge

WARNING: This information is part of a limited availability release. Portions of this API may be subject to changes and improvements over time. Fields marked deprecated may be removed in the future and their use is discouraged. For more information, see our product and feature lifecycle descriptions.

When building for Compute@Edge, you have several options to test and debug your application:

  • Deploy to a live service: If you need the full functionality of Compute@Edge, you can deploy to a Fastly-hosted service and monitor logs in your console.
  • Run a local test server: If you need to develop rapidly or work offline, you can run your application with the local testing server.
  • Use Fastly Fiddle: If you are prototyping your application or experimenting with Compute@Edge, you can use Fiddle to create ephemeral Fastly services, and write test assertions against their instrumentation data.

HINT: Regardless of which method you use, you'll probably want to create log data from the program to provide visibility into what's going on. For information about how to log and what to log, see the usage guide to your chosen language, e.g. using Rust.

Live log monitoring in your console

During development it can be helpful to deploy your application to Compute@Edge using fastly compute deploy and interact with it using a *.edgecompute.app domain (or other testing domain that you attach to the service). To make debugging easier, the Fastly CLI also provides a fastly log-tail command that allows you to watch your service's log output from your local console.

Any output sent to stdout and stderr will be forwarded to your console, along with runtime errors encountered by the application. To see log tailing in action, add some println! statements to the default starter kit for Rust:

main.rs
Rust
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Log the request path to stdout.
    println!("Request received for path {}", req.get_path());
    // Return a minimal response so the handler compiles and completes.
    Ok(Response::from_body("OK"))
}

Build and deploy the application with fastly compute publish, then run fastly log-tail to see log output from the live service streaming into your console:

$ fastly log-tail
INFO: Managed logging enabled on service PS1Z4isxPaoZGVKVdv0eY
stdout | d81ad0e4 | Request received for path /
stdout | f00dfcda | Request received for path /favicon.ico

Combining with log endpoints

Data sent to named log endpoints is not included in log tailing output. In production, the best way to get logs out of your application is with one of our many logging integrations, which support batching and high volumes as part of the Real Time Logging feature.

If you want to retain the ability to debug your service using log tailing once it is serving production traffic, avoid logging on every request: you may generate more output than can practically be streamed to your local machine (see constraints and limitations). Consider switching the log destination based on a simple request flag, such as a cookie:

main.rs
Rust
log_fastly::init_simple("my_endpoint_name", log::LevelFilter::Warn);

if let Some(cookie_val) = req.get_header("Cookie") {
    if cookie_val.to_str().unwrap_or("").contains("key=some-secret") {
        println!("This will go to stdout and be available for log tailing");
    }
}

log::warn!("This will be written to the log endpoint...");
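
Another way to keep output below the tailing ingestion limit is to log only a sample of requests. Here is a minimal sketch of such a sampler, using only the standard library; the `should_log` helper and the 1-in-100 rate are illustrative assumptions, not part of the Fastly SDK:

```rust
// Log roughly 1 in every `sample_every` requests to keep stdout volume low.
// `request_counter` could come from any per-instance counter you maintain.
fn should_log(request_counter: u64, sample_every: u64) -> bool {
    sample_every > 0 && request_counter % sample_every == 0
}

fn main() {
    // Of 1000 simulated requests, only 10 would emit a tailing log line.
    let logged = (0..1000u64).filter(|n| should_log(*n, 100)).count();
    println!("{}", logged); // prints 10
}
```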

Constraints and limitations

The following limits apply to the use of log tailing:

  • Maximum STDIO ingestion rate for log tailing: 20KB/s (per service)
  • High watermark: when the amount of buffered data exceeds 10MB, older data is deleted (per service)
  • Low watermark: once the high watermark is reached, data is deleted until no more than 8MB is buffered (per service)

Other limitations apply to logging in general.

Running a local testing server

With the fastly compute serve command, you can run a local development server that behaves like the Fastly platform. Much like a Fastly service, the development server can be configured with backends. See the local testing section of the fastly.toml reference for all of the available configuration parameters.
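
For example, a backend for the local server can be declared in the `[local_server]` section of `fastly.toml`. This is a sketch only; the backend name and URL below are placeholders, so check the `fastly.toml` reference for the full set of options:

```toml
# fastly.toml (excerpt)
[local_server]

  [local_server.backends]

    [local_server.backends.example_backend]
      url = "https://example.org/"
```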

Starting the server

Once you have defined your resources in the fastly.toml file, run the fastly compute serve command to start the testing server:

$ fastly compute serve
✓ Initializing...
✓ Verifying package manifest...
✓ Verifying local rust toolchain...
✓ Building package using rust toolchain...
✓ Creating package archive...
SUCCESS: Built rust package carpet-room (pkg/carpet-room.tar.gz)
✓ Initializing...
✓ Checking latest Viceroy release...
✓ Checking installed Viceroy version...
✓ Running local server...
Jul 16 12:51:52.346 INFO checking if backend 'example_backend' is up
Jul 16 12:51:52.546 INFO backend 'example_backend' is up
Jul 16 12:51:52.546 INFO Listening on http://127.0.0.1:7676

Open a web browser and go to http://127.0.0.1:7676 to see your Compute@Edge application served by your own machine. Check the console to see both stdio and log endpoint output from your application.

Detecting whether code is running under a local environment

In the local server, the value of FASTLY_HOSTNAME is always "localhost", which can be used to determine that your code is executing locally rather than on the live Compute@Edge platform.

Rust
let local = std::env::var("FASTLY_HOSTNAME").unwrap() == "localhost";
if local {
    println!("I'm testing locally");
}

Constraints and limitations

The local server is developed and maintained in parallel with the Compute@Edge platform. While it is not intended to be a perfect replica of the Fastly platform, features are regularly added so that as much of a Compute@Edge program's functionality as possible can be tested locally. The Fastly CLI will keep your local server instance up to date automatically as new features become available.

The following limitations apply to the current version of the local server:

  • There is no cache, so backend responses that would ordinarily be cacheable will not be stored.
  • Data written to named log endpoints will not be routed to those endpoints but instead will be emitted to stdout, along with any output generated by code that prints to stdio directly.
  • Requests made to the local server do not transit Fastly's routing infrastructure and as a result differ in a few ways:
    • Requests that would normally be rejected (for example, for exceeding our platform limit on URL length) may instead reach your code.
    • Outbound response filters triggered by headers, such as X-Compress-Hint, are not available.
  • Requests made from the local server to backends also differ, because they do not transit Fastly's routing infrastructure.
  • The Geolocation interface is not supported.
  • TLS information about the client connection is not available.
  • Most Compute@Edge environment variables are not available. The following are defined:
    • FASTLY_HOSTNAME: Always set to "localhost"
    • FASTLY_TRACE_ID: An ID starting from 0 and incrementing with each incoming request, providing each instance with its own unique ID
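
As a sketch, both variables can be read with `std::env`, with fallbacks for when the code runs outside either environment; the fallback values below are assumptions chosen for illustration:

```rust
use std::env;

fn main() {
    // On the local server, FASTLY_HOSTNAME is "localhost" and FASTLY_TRACE_ID
    // starts at 0; fall back to those values when the variables are unset.
    let hostname = env::var("FASTLY_HOSTNAME").unwrap_or_else(|_| "localhost".to_string());
    let trace_id = env::var("FASTLY_TRACE_ID").unwrap_or_else(|_| "0".to_string());
    println!("host={hostname} trace={trace_id}");
}
```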