JavaScript on Compute@Edge

IMPORTANT: The content on this page is written for version 0.2.1 of the @fastly/js-compute package. If you have previously used this example, your project may be using an older SDK version. View the changelog to learn how to migrate your program.

WARNING: This information is part of a beta release, which may be subject to breaking changes and improvements over time. For more information, see our product and feature lifecycle descriptions.

Compute@Edge supports application code written in JavaScript and bundled into a Wasm binary. The JavaScript SDK is a great way to get started on Compute@Edge if you are used to writing traditional JavaScript or Node.js applications.

Project layout

If you don't yet have Node.js and a Compute@Edge service set up, start by getting set up.

At the end of the initialization process, the current working directory will contain a file tree resembling the following:

├── README.md
├── fastly.toml
├── node_modules
├── package-lock.json
├── package.json
├── webpack.config.js
└── src
    └── index.js

The most important file to work on is src/index.js, which contains the logic you'll run on incoming requests. If you initialized your project from the default starter template, the contents of this file should match the one in the template's repo. The other files include:

  • npm metadata: package.json and package-lock.json describe the dependencies of your package, managed using npm, Node's package manager.
  • Fastly metadata: The fastly.toml file contains metadata required by Fastly to deploy your package to a Fastly service. It is generated by the init command and, for the moment, should not be edited manually.
  • Project dependencies: The node_modules directory contains the dependencies of your package.
  • Module bundling: The webpack.config.js contains webpack configuration options.

Main interface

Writing JavaScript code for Compute@Edge amounts to writing event handler functions that run when a fetch event fires. This may look familiar if you've used the Service Worker API.

A fetch event will be dispatched for each request that Fastly receives for a domain attached to your service, and any associated EventListeners must call event.respondWith with a valid response to send to the client. The downstream Request object can be accessed through the event.request property.

src/index.js
function handler(event) {
  // Get the request from the client.
  const req = event.request;

  return fetch(req, {
    backend: "example_backend"
  });
}

addEventListener("fetch", event => {
  // Send the backend response back to the client.
  return event.respondWith(handler(event));
});

The @fastly/js-compute module provides the core event target interface and the Request and Response classes referenced in this guide.

Communicating with backend servers and the Fastly cache

A Request can be forwarded to any backend defined on your service. Backends can be created via the Fastly CLI, API, or web interface, and are referenced by name. If you specify a backend hostname as part of completing the fastly compute deploy wizard, it will be named the same as the hostname or IP address, but with . replaced with _ (e.g., 123_456_789_123). It's a good idea to define backend names as constants:

const backendName = "my_backend_name";

And then reference them when you want to forward a request to a backend:

src/index.js
// Define a named backend.
const backendName = "my_backend_name";

function handler(event) {
  // Get the request from the client.
  const req = event.request;

  // Create a cache override.
  let cacheOverride = new CacheOverride("override", { ttl: 60 });

  return fetch(req, {
    backend: backendName,
    cacheOverride
  });
}

Requests forwarded to a backend will transit the Fastly cache, and the response may come from cache. Where a request doesn't find a matching result in cache, it will be sent to the origin, and the response will be cached based on the freshness rules determined from its HTTP response headers (unless overridden, as in the example above, by CacheOverride).
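
CacheOverride can also be used to bypass the cache entirely. The following is a minimal sketch, assuming a backend named example_backend has been defined on the service; "pass" mode tells Fastly not to cache the backend response at all:

// Bypass the Fastly cache for this request ("pass" mode).
let passOverride = new CacheOverride("pass");

return fetch(event.request, {
  backend: "example_backend",
  cacheOverride: passOverride
});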

In a future release, it will be possible to interact with the cache and the network separately.

Composing requests and responses

In addition to the request referenced by event.request and responses returned from fetch, requests and responses can also be constructed. This is useful if you want to make an arbitrary API call that is not derived from the client request, or if you want to make a response to the client without making any backend fetch at all.

To compose a request from scratch, instantiate a new Request:

// Create some headers for our upstream request to our origin.
let upstreamHeaders = new Headers({ "some-header": "someValue" });
// Create our upstream request to our origin using our upstream headers.
let upstreamRequest = new Request("https://example.com/", {
  method: "POST",
  headers: upstreamHeaders,
});
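
If needed, the composed request can then be forwarded to a named backend in the usual way. This is a minimal sketch, assuming a backend named example_backend has been defined on the service:

// Forward the composed request to a named backend.
let upstreamResponse = await fetch(upstreamRequest, {
  backend: "example_backend"
});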

Similarly, responses can be created by instantiating a Response:

src/index.js
function handler(event) {
  // Set some basic headers.
  const headers = new Headers();
  headers.set('Content-Type', 'text/plain');

  // Build a response.
  return new Response("Hi from the edge", {
    status: 200,
    headers,
    url: event.request.url
  });
}

Parsing and transforming responses

Requests and responses in Compute@Edge are streams, which allows large payloads to move through your service without buffering or running out of memory. Conversely, running methods such as text on a Response will force the stream to be consumed entirely into memory. This can be appropriate where a response is known to be small or needs to be complete to be parsable.

This example will read a backend response into memory, replace every occurrence of "cat" with "dog" in the body, and then create a new body with the transformed string:

let backendResponse = await fetch("https://host/api/checkAuth", {
  method: "POST",
  backend: "example_backend",
});
// Take care! .text() will consume the entire body into memory!
let bodyStr = await backendResponse.text();
// Use a global regex so every occurrence is replaced, not just the first.
let newBody = bodyStr.replace(/cat/g, "dog");
return new Response(newBody, {
  status: backendResponse.status,
  headers: backendResponse.headers
});

Parsing JSON responses in JavaScript is built in but also requires consuming the entire response into memory:

let backendResponse = await fetch("https://host/api/checkAuth", {
  method: "POST",
  backend: "example_backend",
});
// Take care! .json() will consume the entire body into memory!
let jsonData = await backendResponse.json();

It is often better to avoid buffering responses in this way, especially if the response is large, being delivered slowly in multiple chunks, or capable of being rendered progressively by the client. The Fastly JavaScript SDK implements WHATWG streams. In this example, the backend response is capitalized as it's received:

// Take a readable stream and return another readable stream that copies
// the content from the first but uppercases it
function capitalizeFilter(sourceStream) {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  const reader = sourceStream.getReader();
  return new ReadableStream({
    pull(controller) {
      return reader.read().then(({ value: chunk, done: readerDone }) => {
        const chunkStr = decoder.decode(chunk);
        const transformedChunk = chunkStr.toUpperCase();
        controller.enqueue(encoder.encode(transformedChunk));
        if (readerDone) {
          controller.close();
        }
      });
    }
  });
}

async function handler(event) {
  // Send the client request to the backend.
  const clientReq = event.request;
  const backendResponse = await fetch(clientReq, { backend: "example_backend" });
  // Pass the backend response through a filter, which uppercases all the text.
  const filteredStream = capitalizeFilter(backendResponse.body);
  // Copy the backend response headers and prevent downstream caching of the transformed body.
  const headers = new Headers(backendResponse.headers);
  headers.set("cache-control", "private, no-store");
  // Construct a response using the filtered stream and deliver it to the client.
  return new Response(filteredStream, { headers });
}

Using edge dictionaries

Fastly allows you to configure edge dictionaries on your Compute@Edge services. These can be accessed using the Dictionary class in the SDK.

src/index.js
const exampleDictionary = new Dictionary("example_dictionary");
const someValue = exampleDictionary.get("key_name");
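
As a minimal sketch (the dictionary and key names are illustrative), the looked-up value can then be used when building a response:

function handler(event) {
  const exampleDictionary = new Dictionary("example_dictionary");
  const someValue = exampleDictionary.get("key_name");

  // Return the dictionary value to the client as plain text.
  return new Response(someValue, {
    status: 200,
    headers: new Headers({ "Content-Type": "text/plain" })
  });
}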

Logging

console provides a standardized interface for emitting log messages to stdout or stderr. To send logs to Fastly real-time logging, which can be attached to many third party logging providers, use the fastly.getLogger method. Before adding logging code to your Compute@Edge program, set up your log endpoint using the CLI, API, or web interface. Log endpoints are referenced in your code by name:

const logger = fastly.getLogger("JavaScriptLog");
logger.log("Hello!");
// logs "Hello!" to the "JavaScriptLog" log endpoint

If your code errors, output will be emitted to stderr:

throw new Error('Oh no!');
// This logs "abort: Oh no! in your/file.js(line:col)" to stderr.

Debug logging in the JavaScript runtime

Because JavaScript is an interpreted language, our tooling bundles a JavaScript runtime with your Compute@Edge package. This runtime has the ability to provide additional debugging output during your application's execution. To enable this, call the fastly.enableDebugLogging method when your app initializes:

fastly.enableDebugLogging(true);

The output is unstable and subject to change, but this is the kind of data you can expect to see emitted to STDOUT:

Running JS handleRequest function for C@E service version 1
Request handler took 8.548000ms
Running promise reactions
Running promise reactions took 0.028000ms
Done, waited for 214.297000ms
Running promise reactions took 0.750000ms

Logs written to STDOUT/STDERR can be monitored in real time using log tailing or the local testing server.

Using dependencies

Compute@Edge compiles your code to WebAssembly and uses the WebAssembly System Interface (WASI). Because of this, it supports WASI-compatible npm modules, which in practice means most modules that do not have native platform bindings. Our Fiddle tool allows the use of a subset of modules that we have tested and confirmed will work with Compute@Edge.

That subset is a tiny fraction of the modules that will work on Compute@Edge, but it covers the modules most commonly useful when building applications.

Access to the client request, creating requests to backends, the Fastly cache, and other Fastly features are exposed via Fastly's own public modules:

  • @fastly/js-compute - Core event interface and classes, provides access to client request, backend fetches and caching.

Developer experience

For the best experience of developing on Compute@Edge in JavaScript, include the following comment at the top of any file that uses the fastly. interface:

/// <reference types="@fastly/js-compute" />

This will allow your IDE to import the type definitions for the Fastly JavaScript SDK.

Module bundling

JavaScript application code must be packaged as a single web worker script before it can be compiled to WebAssembly. Usually, this means you must use a module bundler to transform your code before you can deploy it to Compute@Edge. The JavaScript starter kit for Compute@Edge contains a Webpack configuration that sets reasonable defaults and is suitable for most Fastly code examples.

You can adapt this configuration to suit your needs.

For example, you may choose to add rules that determine how the different types of modules will be treated:

module.exports = {
  // ...
  module: {
    rules: [
      // This allows for inlining of svg images, e.g.,
      // import svgStr from './path/to/image.svg'
      {
        test: /\.(svg)$/,
        type: "asset/source",
      },
    ],
  },
};

Shimming and redirecting module requests are useful techniques when your code relies on Node.js builtins, proposals, or newer standards.

const webpack = require("webpack");

module.exports = {
  // ...
  // Shimming globals
  plugins: [
    new webpack.ProvidePlugin({
      // Polyfill the URL standard
      URL: "core-js/web/url",
      // Polyfill Node.js' buffer (requires https://www.npmjs.com/package/buffer)
      Buffer: ["buffer", "Buffer"],
    }),
  ],
  // Redirecting module requests
  resolve: {
    fallback: {
      crypto: require.resolve("crypto-browserify"),
      stream: require.resolve("stream-browserify"),
    },
  },
};

WARNING: Adding custom rules to the Webpack configuration may cause the resulting bundle - and therefore the compiled Wasm package - to become significantly larger. Compute@Edge packages are subject to platform and account-level limits on the maximum package size.

Testing and debugging

Logging is the main mechanism to debug Compute@Edge programs. Log output from live services can be monitored via live log tailing. The local test server and Fastly Fiddle display all log output automatically. See Testing & debugging for more information about choosing an environment in which to test your program.

Most common logging requirements involve HTTP requests and responses. It's important to do this in a way that doesn't affect the main program logic, since consuming a response body can only be done once. The following example demonstrates a console.log statement for request headers, response headers, request body and response body:
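
A minimal sketch of this pattern follows, assuming a POST request from the client and a backend named example_backend; the specific headers chosen for logging are illustrative:

async function handler(event) {
  const req = event.request;

  // Consume the request body so it can be logged, then rebuild the request.
  const reqBody = await req.text();
  console.log("req content-type:", req.headers.get("content-type"));
  console.log("req body:", reqBody);
  const backendReq = new Request(req.url, {
    method: req.method,
    headers: req.headers,
    body: reqBody
  });

  const backendResp = await fetch(backendReq, { backend: "example_backend" });

  // Consume the response body so it can be logged, then rebuild the response.
  const respBody = await backendResp.text();
  console.log("resp content-type:", backendResp.headers.get("content-type"));
  console.log("resp body:", respBody);
  return new Response(respBody, {
    status: backendResp.status,
    headers: backendResp.headers
  });
}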

Since the bodies of HTTP requests and responses in Compute@Edge are streams, we consume each stream to its end and then log the resulting data. In JavaScript, once the .body property of a request or response has been read, it cannot be used by fetch or respondWith, so we use the extracted body data to construct a new Request or Response after logging the body.

WARNING: Logging body streams in this way will likely slow down your program, and may trigger a memory limit if the payload is large.

Unit testing

You may choose to write unit tests for small, independent pieces of your JavaScript code intended for Compute@Edge. However, Compute@Edge apps depend heavily on and interact with Fastly features and your own systems. This can make an integration testing strategy that focuses on a smaller number of high-impact tests more valuable.
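
For example, a pure helper extracted from your handler can be unit tested in Node.js without the Compute@Edge runtime at all. This is a minimal sketch using Node's built-in assert module; the helper name and file layout are illustrative:

// test/transform.test.js
const assert = require("assert");

// A pure function extracted from the handler so it can be tested in isolation.
function replaceCats(body) {
  return body.replace(/cat/g, "dog");
}

assert.strictEqual(replaceCats("cat and cat"), "dog and dog");
console.log("replaceCats: ok");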