JavaScript on Compute@Edge

The guidance on this page was tested with an older version (1.6.0) of the JavaScript SDK. It may still work with the latest version (2.0.1), but the change log may help if you encounter any issues.

Compute@Edge supports application code written in JavaScript bundled into a WebAssembly (Wasm) binary. JavaScript is a great language to get started with on Compute@Edge if you are used to writing browser-based JavaScript or Node.js applications.

HINT: If you are using a JavaScript web framework such as Gatsby, Next.js, or Remix, check out using frameworks on Compute@Edge.

Project layout

If you don't yet have Node.js and a Compute@Edge service set up, start by getting set up.

At the end of the initialization process, the current working directory will contain a file tree resembling the following:

├── fastly.toml
├── node_modules
├── package-lock.json
├── package.json
├── webpack.config.js
└── src
    └── index.js

The most important file to work on is src/index.js, which contains the logic you'll run on incoming requests. If you initialized your project from the default starter kit, the contents of this file should match the one in the starter kit's repo. The other files include:

  • npm metadata: package.json and package-lock.json describe the dependencies of your package, managed using npm, Node's package manager.
  • Fastly metadata: The fastly.toml file contains metadata required by Fastly to deploy your package to a Fastly service. It is generated by the fastly compute init command. Learn more about fastly.toml.
  • Project dependencies: The node_modules directory contains the dependencies listed in package.json.
  • Module bundling: The webpack.config.js contains webpack configuration options.

IMPORTANT: It's not essential to have a webpack configuration, but if you do, it must include an externals configuration to load Fastly's namespaced imports. Learn more about module bundling.

Main interface

Incoming requests trigger an event handler function with a fetch event. This may look familiar if you've used the Service Worker API.

A FetchEvent will be dispatched for each request that Fastly receives that's routed to your service, and any associated EventListeners must synchronously call event.respondWith with a valid response to send to the client. The FetchEvent's .request property is a standard web Request, while .client exposes data about the requesting client.

Although event.respondWith must be called synchronously, the argument provided to it may be a Promise, so it is often convenient to define an async function to handle the request and return a promised Response:

addEventListener("fetch", event => event.respondWith(handleRequest(event)));

async function handleRequest(event) {
  // Get the request from the client.
  const req = event.request;

  // Forward the request to a backend.
  return fetch(req, {
    backend: "example_backend"
  });
}

The FetchEvent is provided by @fastly/js-compute, Fastly's JavaScript SDK, which must be included in your project's dependencies.
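
For example, the .client property can be used to surface information about the requesting client, such as its IP address. This is a minimal sketch; the "x-client-ip" header name is an illustrative choice, not a Fastly convention:

```javascript
/// <reference types="@fastly/js-compute" />
async function handleRequest(event) {
  // .client.address holds the IP address of the requesting client.
  const clientIP = event.client.address;
  return new Response("OK", {
    status: 200,
    headers: { "x-client-ip": clientIP },
  });
}
```

On Compute@Edge this handler would be registered with addEventListener("fetch", ...) as shown above.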

Communicating with backend servers and the Fastly cache

You can make HTTP requests from your Fastly service by passing a Request to the fetch() function. Our implementation of fetch offers a few extra properties compared to the web standard version, including .backend, which allows you to specify the name of a backend defined statically on your service, or to pass an instance of a dynamic backend. If you specify a backend hostname as part of completing the fastly compute deploy wizard, it will be named the same as the hostname or IP address, but with . replaced with _ (e.g., 151_101_129_57). It's a good idea to define backend names as constants:

const backendName = "my_backend_name";

And then reference them when you want to forward a request to a backend:

import { CacheOverride } from "fastly:cache-override";

const backendName = "my_backend_name";

function handler(event) {
  // Create a cache override.
  let cacheOverride = new CacheOverride("override", { ttl: 60 });

  return fetch(event.request, {
    backend: backendName,
    cacheOverride
  });
}
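
As noted above, the deploy wizard derives a default backend name from the hostname or IP address by replacing . with _. That mapping can be sketched as a one-line transformation (the function name is illustrative):

```javascript
// Derive the default backend name the fastly compute deploy wizard
// would generate for a given hostname or IP address: "." becomes "_".
function defaultBackendName(host) {
  return host.replaceAll(".", "_");
}

console.log(defaultBackendName("151.101.129.57")); // "151_101_129_57"
```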

If using dynamic backends, optionally call allowDynamicBackends() to automatically create backends on demand from the properties of the Request:

/// <reference types="@fastly/js-compute" />
import { allowDynamicBackends } from "fastly:experimental";

allowDynamicBackends();

async function app() {
  // For any request, return the fastly homepage -- without defining a backend!
  return fetch('');
}

addEventListener("fetch", event => event.respondWith(app(event)));

HINT: Many JavaScript libraries expect to use the standard fetch API to make HTTP requests. If your application imports a dependency that makes HTTP calls using fetch, those requests will fail unless dynamic backends are enabled on your account.

Requests forwarded to a backend will transit the Fastly cache, and the response may come from cache. Where a request doesn't find a matching result in cache, it will be sent to the origin, and the response will be cached based on the freshness rules determined from its HTTP response headers (unless overridden, as in the example above, by CacheOverride).

In a future release, it will be possible to interact with the cache and the network separately.

Composing requests and responses

In addition to the Request referenced by event.request and the Response objects returned from fetch(), requests and responses can also be constructed from scratch. This is useful if you want to make an arbitrary API call that is not derived from the client request, or if you want to respond to the client without making any backend fetch at all.

To compose a request from scratch, instantiate a new Request:

// Create some headers for the request to origin
let upstreamHeaders = new Headers({ "some-header": "someValue" });

// Create a POST request to our origin using the custom headers
let upstreamRequest = new Request("", {
  method: "POST",
  headers: upstreamHeaders,
});

Similarly, responses can be created by instantiating a Response:

const headers = new Headers();
headers.set('Content-Type', 'text/plain');

return new Response("Hi from the edge", {
  status: 200,
  headers,
  url: event.request.url
});

Parsing and transforming responses

Requests and responses in Compute@Edge are streams, which allows large payloads to move through your service without being buffered or exhausting memory. By contrast, calling methods such as text() on a Response forces the stream to be consumed entirely into memory. This can be appropriate where a response is known to be small, or needs to be complete before it can be parsed.

This example will read a backend response into memory, replace every occurrence of "cat" with "dog" in the body, and then create a new body with the transformed string:

let backendResponse = await fetch("https://host/api/checkAuth", {
  method: "POST",
  backend: "example_backend",
});

// Take care! .text() will consume the entire body into memory!
let bodyStr = await backendResponse.text();
let newBody = bodyStr.replaceAll("cat", "dog");

return new Response(newBody, {
  status: backendResponse.status,
  headers: backendResponse.headers,
});

Parsing JSON responses is available natively in the Fetch API via the json() method of a Response but also requires consuming the entire response into memory:

let backendResponse = await fetch("https://host/api/checkAuth", {
  method: "POST",
  backend: "example_backend",
});

// Take care! .json() will consume the entire body into memory!
let jsonData = await backendResponse.json();

It is often better to avoid buffering responses in this way, especially if the response is large, is delivered slowly in multiple chunks, or can be rendered progressively by the client. The Fastly JavaScript SDK implements WHATWG streams. In this example, the backend response is capitalized as it's received:

// Take a readable stream and return another readable stream that copies
// the content from the first but uppercases it
function capitalizeFilter(sourceStream) {
  const decoder = new TextDecoder();
  const encoder = new TextEncoder();
  const reader = sourceStream.getReader();
  return new ReadableStream({
    pull(controller) {
      return reader.read().then(({ value: chunk, done: readerDone }) => {
        if (chunk) {
          const chunkStr = decoder.decode(chunk);
          const transformedChunk = chunkStr.toUpperCase();
          controller.enqueue(encoder.encode(transformedChunk));
        }
        if (readerDone) {
          controller.close();
        }
      });
    }
  });
}

async function handler(event) {
  // Send the client request to the backend.
  const clientReq = event.request;
  const backendResponse = await fetch(clientReq, { backend: "example_backend" });

  // Pass the backend response through a filter, which uppercases all the text.
  const filteredStream = capitalizeFilter(backendResponse.body);

  // Construct a response using the filtered stream and deliver it to the client.
  return new Response(filteredStream, {
    headers: {
      "cache-control": "private, no-store"
    }
  });
}


Fastly can compress and decompress content automatically, and it is often easier to use these features than to try to perform compression or decompression within your JavaScript code. Learn more about compression with Fastly.

Using edge data

Fastly allows you to attach various forms of data store to your services, both for dynamic configuration and for storing data at the edge. The JavaScript SDK exposes the fastly:config-store and fastly:kv-store modules to allow access to these APIs.

All edge data resources are account-level, service-linked resources, allowing a single store to be accessed from multiple Fastly services.


Logging

console provides a standardized interface for emitting log messages to STDOUT or STDERR.
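
For example (the messages are illustrative):

```javascript
// console methods write to the two standard output streams:
console.log("request received");    // emitted on STDOUT
console.error("backend timed out"); // emitted on STDERR
```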

To send logs to Fastly real-time logging, which can be attached to many third party logging providers, use the Logger class. Log endpoints are referenced in your code by name:

/// <reference types="@fastly/js-compute" />
import { Logger } from "fastly:logger";

function handler(event) {
  // Logs "Hello!" to the "JavaScriptLog" log endpoint.
  const logger = new Logger("JavaScriptLog");
  logger.log("Hello!");

  return new Response("OK", { status: 200 });
}

addEventListener("fetch", event => event.respondWith(handler(event)));

If your code errors, output will be emitted to stderr:

// This logs "abort: Oh no! in src/index.js(line:col)" to stderr.
throw new Error('Oh no!');

Using dependencies

Compute@Edge compiles your code to WebAssembly and uses the WebAssembly System Interface (WASI). Because of this, it supports WASI-compatible npm modules, which in practice is most modules that do not have native platform bindings. Access to the client request, requests to backends, the Fastly cache, and other Fastly features is exposed via Fastly's own public module @fastly/js-compute, which must be a dependency of your project.

Our Fiddle tool allows the use of an approved subset of modules for experimentation, which we have tested and confirmed to work with Compute@Edge.

This is a tiny fraction of the modules which will work on Compute@Edge, but these are the most commonly useful modules when building applications.

Developer experience

For the best experience when developing for Compute@Edge in JavaScript, include the following comment at the top of any file that uses the Fastly SDK's interfaces:

/// <reference types="@fastly/js-compute" />

This will allow your IDE to import the type definitions for the Fastly JavaScript SDK. If you use eslint with a custom eslintrc file, you may also need to add some extensions to recognize the Fastly types:

"parser": "@typescript-eslint/parser",
"plugins": ["@typescript-eslint"],
"extends": [

Module bundling

Compute@Edge applications written in JavaScript can be compiled by the Fastly CLI without any bundling, but you can choose to use a module bundler if you want to replace global modules or provide polyfills. The default JavaScript starter kit for Compute@Edge contains a Webpack configuration which sets reasonable defaults and is suitable for most use cases.

You can adapt this configuration to suit your needs. For example, you may choose to add rules that determine how the different types of modules will be treated:

module.exports = {
  // ...
  module: {
    rules: [
      // This allows for inlining of svg images, e.g.,
      // import svgStr from './path/to/image.svg'
      {
        test: /\.(svg)$/,
        type: "asset/source",
      },
    ],
  },
  // If your project uses webpack you MUST include this externals rule to ensure
  // that "fastly:*" namespaced module imports work as intended.
  externals: [
    ({ request }, callback) => {
      if (/^fastly:.*$/.test(request)) {
        return callback(null, "commonjs " + request);
      }
      callback();
    },
  ],
};

Shimming and redirecting module requests are useful techniques when your code relies on Node.js builtins, proposals, or newer standards.

const webpack = require("webpack");

module.exports = {
  // ...
  // Shimming globals
  plugins: [
    new webpack.ProvidePlugin({
      // Polyfill Node.js' Buffer (requires the buffer npm package)
      Buffer: ["buffer", "Buffer"],
    }),
  ],
  // Redirecting module requests
  resolve: {
    fallback: {
      crypto: require.resolve("crypto-browserify"),
      stream: require.resolve("stream-browserify"),
    },
  },
  // If your project uses webpack you MUST include this externals rule to ensure
  // that "fastly:*" namespaced module imports work as intended.
  externals: [
    ({ request }, callback) => {
      if (/^fastly:.*$/.test(request)) {
        return callback(null, "commonjs " + request);
      }
      callback();
    },
  ],
};
WARNING: Adding custom rules to the webpack configuration may cause the resulting bundle, and therefore the compiled Wasm package, to become significantly larger. Compute@Edge packages are subject to platform and account-level limits on the maximum package size.

Testing and debugging

Logging is the main mechanism to debug Compute@Edge programs. Log output from live services can be monitored via live log tailing. The local test server and Fastly Fiddle display log output automatically. See Testing & debugging for more information about choosing an environment in which to test your program.

Most common logging requirements involve HTTP requests and responses. It's important to do this in a way that doesn't affect the main program logic, since consuming a response body can only be done once. The following example demonstrates a console.log statement for request headers, response headers, request body and response body:
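
A minimal sketch of such logging, assuming a backend named example_backend (the headersToObject helper is an illustrative addition, not part of the SDK):

```javascript
// Convert a Headers object to a plain object for logging.
function headersToObject(headers) {
  // Headers is iterable as [name, value] pairs.
  return Object.fromEntries(headers);
}

async function handleRequest(event) {
  const req = event.request;
  console.log("req headers:", JSON.stringify(headersToObject(req.headers)));

  // .text() consumes the body stream, so construct a fresh Request
  // from the extracted data before forwarding it to the backend.
  const reqBody = await req.text();
  console.log("req body:", reqBody);
  const backendReq = new Request(req.url, {
    method: req.method,
    headers: req.headers,
    body: reqBody || null,
  });

  const backendResp = await fetch(backendReq, { backend: "example_backend" });
  console.log("resp headers:", JSON.stringify(headersToObject(backendResp.headers)));

  // Likewise, construct a new Response from the logged body before
  // returning it to the client.
  const respBody = await backendResp.text();
  console.log("resp body:", respBody);
  return new Response(respBody, {
    status: backendResp.status,
    headers: backendResp.headers,
  });
}
```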

Since the bodies of HTTP requests and responses in Compute@Edge are streams, we consume the stream to its end and then log the resulting data. In JavaScript, once the .body property of a request or response has been read, it cannot be used by fetch or respondWith, so we use the extracted body data to construct a new Request or Response after logging the body.

WARNING: Logging body streams in this way will likely slow down your program, and may trigger a memory limit if the payload is large.

Unit testing

You may choose to write unit tests for small, independent pieces of your JavaScript code intended for Compute@Edge. However, Compute@Edge apps depend heavily on and interact with Fastly features and your own systems. This can make an integration testing strategy that focuses on a smaller number of high-impact tests more valuable.
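
For instance, pure helpers can be pulled out of the fetch handler and exercised directly in any JavaScript test runner. The helper below is illustrative:

```javascript
// A pure function with no Compute@Edge dependencies: collapse duplicate
// slashes in a request path and strip any trailing slash.
function normalizePath(path) {
  return path.replace(/\/+/g, "/").replace(/(.)\/$/, "$1");
}

console.log(normalizePath("//docs//guides/")); // "/docs/guides"
```

Functions like this can be tested with node:test or any test framework, without building or deploying a Wasm package.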