WebSockets passthrough

The guidance on this page was tested with an older version (0.8.9) of the Rust SDK. It may still work with the latest version (0.9.1), but the change log may help if you encounter any issues.

WebSockets are two-way communication channels between a client device (such as a web browser) and a server, allowing the server to send messages to the client at any time without the client having to make a request.

Unlike the HTTP requests which make up the bulk of web traffic handled by Fastly, WebSockets are long-lived connections and do not have a request-response cycle, but instead can carry data in either direction at any time. As a result, WebSockets don't fit Fastly's normal processing model for edge traffic, but since WebSocket connections begin life as HTTP requests, you can pass the WebSocket connection directly to the origin server.
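For reference, a WebSocket connection begins as an ordinary HTTP GET request that asks to switch protocols (headers per RFC 6455; the host, path, and key shown here are illustrative):

```http
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
```

If the server accepts, it replies with `HTTP/1.1 101 Switching Protocols`, after which the same TCP connection carries WebSocket frames in both directions.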

HINT: WebSocket passthrough creates one WebSocket connection to origin for every connection from a client device to Fastly. To have Fastly broker messages for you, with support for channels and one-to-many publishing, consider using Fanout instead.

Enabling WebSocket passthrough

To use WebSocket passthrough, you need a Fastly VCL service or a Rust-based Compute@Edge service.

IMPORTANT: WebSocket passthrough is a premium feature available on paid accounts and must be explicitly enabled in the web interface before the instructions below will work.

Compute@Edge

You can create a Rust-based service, automatically populated with the code needed to perform WebSocket passthrough, using fastly compute init:

$ fastly compute init --from=https://github.com/fastly/compute-starter-kit-rust-websockets
$ fastly compute publish

Compute@Edge programs are typically invoked for each client request and end when you deliver a response. To handle the WebSocket connection, the request must be handed off from Compute@Edge so that Fastly can continue to hold the connection and relay traffic in both directions.

This is performed by the handoff_websocket method on the Request struct. If you expect your service to handle more than just WebSocket traffic, it's a good idea to only do this when the request has an Upgrade: websocket header:

use fastly::experimental::RequestUpgradeWebsocket;
use fastly::{Error, Request};

fn main() -> Result<(), Error> {
    let req = Request::from_client();
    if let Some("websocket") = req.get_header_str("Upgrade") {
        return Ok(req.handoff_websocket("ws_backend_name")?);
    }
    Ok(req.send("non_ws_backend_name")?.send_to_client())
}
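The example above matches the Upgrade header value exactly, but per RFC 6455 the "websocket" token is case-insensitive, so some clients may send "WebSocket" or "WEBSOCKET". One way to tolerate this is a small predicate like the following (the helper name is hypothetical, not part of the SDK):

```rust
/// Hypothetical helper: returns true if an Upgrade header value asks for a
/// WebSocket connection, comparing case-insensitively per RFC 6455.
fn is_websocket_upgrade(upgrade_header: Option<&str>) -> bool {
    upgrade_header
        .map(|value| value.eq_ignore_ascii_case("websocket"))
        .unwrap_or(false)
}
```

You could then call `is_websocket_upgrade(req.get_header_str("Upgrade"))` in place of the `if let Some("websocket")` pattern match.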

VCL

In a VCL service, your VCL is invoked for each inbound client request and the VCL workflow is designed to manage a conventional request/response cycle. To handle the WebSocket connection, you must hand off the request from VCL so that Fastly can continue to hold the connection and relay traffic in both directions. To do this, return(upgrade) from vcl_recv:

sub vcl_recv {
  if (req.http.Upgrade) {
    return (upgrade);
  }
}

Tips

The following tips and best practices may help you get the most out of WebSockets passthrough:

  • Unlike most Rust-based Compute@Edge programs, you cannot use the #[fastly::main] macro in a program that does handoff_websocket. This is because handoff_websocket will immediately start a response to the client, making it impossible to return a Response from the main() function without causing an error.
  • It is not currently possible to modify the Request (or use a constructed Request) for the handoff_websocket invocation. If you try this, the WebSocket handoff will be based on the original request as presented by the client.
  • WebSocket connections, once handed off, are not subject to the between_bytes_timeout, and will only drop when either the client or server disconnects.
  • If either the client or server disconnects, Fastly will relay that disconnect to the other party.
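Since the handoff is always based on the original client request, any backend routing decision must key off properties of that request as presented, such as its path. A minimal sketch (the paths and backend names here are hypothetical, matching the placeholder names used earlier):

```rust
// Hypothetical routing helper: pick a backend name from the original request
// path. The handed-off request itself cannot be modified, so routing must be
// decided from the request as the client sent it.
fn backend_for_path(path: &str) -> &'static str {
    if path.starts_with("/ws/") {
        "ws_backend_name"
    } else {
        "non_ws_backend_name"
    }
}
```

In the main request handler, you might call `req.handoff_websocket(backend_for_path(&req.get_path()))` when the Upgrade header is present.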