Using VCL

Fastly VCL is a domain-specific language derived from Varnish, the proxy cache that is part of Fastly's platform architecture. It's intentionally limited in range, which allows us to run it extremely fast, make it available to all requests that pass through Fastly, and maintain the security of the Fastly network. With VCL, you can do anything from adding a cookie or setting a Cache-Control header to implementing a complete paywall solution.
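As a taste of what this looks like, the following sketch adds a response header and a cookie in the vcl_deliver subroutine. The header name X-Served-By and the cache_status cookie are invented for this example; fastly_info.state and server.identity are standard Fastly VCL variables.

```vcl
sub vcl_deliver {
  #FASTLY deliver

  # Record which cache server handled the response
  # (X-Served-By is an example header name, not a required convention)
  set resp.http.X-Served-By = server.identity;

  # Set a cookie recording whether this request was a cache hit
  set resp.http.Set-Cookie = "cache_status=" + if(fastly_info.state ~ "^HIT", "hit", "miss") + "; path=/";

  return(deliver);
}
```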

VCL services on Fastly do not provide a single entry point for your application code. Instead, we expose a number of hooks, in the form of built-in subroutines, and these are called at significant moments in the lifecycle of each HTTP request that passes through your service. As a result, code that you upload to a Fastly VCL service is known as a configuration, not an application.

The VCL request lifecycle

The following subroutines are triggered by Fastly if defined in your configuration:

Name        | Trigger point                                           | Default return state | Alternative return states
vcl_recv    | Client request received                                 | lookup [5]           | pass, error, restart
vcl_hash    | A cache key will be calculated                          | hash [4]             | (none)
vcl_hit     | An object has been found in cache                       | deliver              | pass, error, restart
vcl_miss    | Nothing was found in the cache, preparing backend fetch | fetch                | deliver_stale, pass, error
vcl_pass    | Cache passed, preparing backend fetch                   | pass [3]             | error
vcl_fetch   | Origin response headers received                        | deliver [2]          | deliver_stale, pass, error, restart
vcl_error   | Error triggered (explicitly or by Fastly)               | deliver              | restart
vcl_deliver | Preparing to deliver response to client                 | deliver              | restart
vcl_log     | Finished sending response to client                     | deliver [1]          | (none)

Bracketed numbers refer to the numbered notes later in this section.

The return state of a subroutine determines the next action taken by the Fastly cache server. This is better illustrated in a flow diagram:

[Flow diagram: the states RECV, HASH, HIT, PASS, MISS, FETCH, ERROR, DELIVER, and LOG, with a legend distinguishing the fetch server, default paths, alternative paths, errors, and restarts.]
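As an illustration of one alternative path, returning pass from vcl_recv routes the request through HASH to PASS and then FETCH, bypassing the cache lookup entirely. This is a minimal sketch; the /account/ path prefix is an invented example.

```vcl
sub vcl_recv {
  #FASTLY recv

  # Never serve account pages from cache
  # (example path; adjust the condition for your application)
  if (req.url.path ~ "^/account/") {
    return(pass);
  }

  return(lookup);
}
```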

Structure of a VCL configuration

Everything that your service does is powered by VCL, including any high-level features that you enable in the management UI or via the API, so your own VCL must be combined with the code generated by those features. To make this possible, we require that you include 'macros' in your VCL source, one in each subroutine, such as #FASTLY recv. These may look like comments, but your code will not compile without them.
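For example, even a short custom subroutine needs its macro. The sketch below would fail to compile without the #FASTLY log line; the "my_logs" logging endpoint name is invented for this example.

```vcl
sub vcl_log {
  #FASTLY log

  # "my_logs" is an example logging endpoint name; create your own
  # endpoint in the Fastly UI or API before referencing it here
  log {"syslog "} req.service_id {" my_logs :: "} req.http.host req.url {" "} resp.status;
}
```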

If you don't upload your own code, the VCL generated by Fastly also includes a number of small tweaks that help our default behaviors match expectations and avoid problems. These are not included when you use your own VCL, so if you want them, you need to add them yourself. We recommend starting from the following 'boilerplate' when writing your own VCL code.

HINT: If you use the Fastly Image Optimizer, use the image optimization VCL boilerplate instead of this one.

sub vcl_recv {
  #FASTLY recv

  # Normally, you should consider requests other than GET and HEAD to be uncacheable
  # (to this we add the special FASTLYPURGE method)
  if (req.method != "HEAD" && req.method != "GET" && req.method != "FASTLYPURGE") {
    return(pass);
  }

  return(lookup);
}

sub vcl_hash {
  #FASTLY hash
  set req.hash += req.http.host;
  set req.hash += req.url;
  return(hash);
}

sub vcl_hit {
  #FASTLY hit
  return(deliver);
}

sub vcl_miss {
  #FASTLY miss
  return(fetch);
}

sub vcl_pass {
  #FASTLY pass
  return(pass);
}

sub vcl_fetch {
  #FASTLY fetch

  # In the event of a server-failure response from origin, retry once more
  if ((beresp.status == 500 || beresp.status == 503) && req.restarts < 1 && (req.method == "GET" || req.method == "HEAD")) {
    restart;
  }

  # Log the number of restarts for debugging purposes
  if (req.restarts > 0) {
    set beresp.http.Fastly-Restarts = req.restarts;
  }

  # If the response is setting a cookie, make sure it is not cached
  if (beresp.http.Set-Cookie) {
    return(pass);
  }

  # By default we set a TTL based on the `Cache-Control` header but we don't parse additional directives
  # like `private` and `no-store`. Private in particular should be respected at the edge:
  if (beresp.http.Cache-Control ~ "(private|no-store)") {
    return(pass);
  }

  # If no TTL has been provided in the response headers, set a default
  if (!beresp.http.Expires && !beresp.http.Surrogate-Control ~ "max-age" && !beresp.http.Cache-Control ~ "(s-maxage|max-age)") {
    set beresp.ttl = 3600s;
  }

  return(deliver);
}

sub vcl_error {
  #FASTLY error
  return(deliver);
}

sub vcl_deliver {
  #FASTLY deliver
  return(deliver);
}

sub vcl_log {
  #FASTLY log
}

WARNING: Personal data should not be incorporated into VCL. Our Compliance and Law FAQ describes in detail how Fastly handles personal data privacy.

  1. The return state from vcl_log simply terminates request processing.
  2. Returning with return(deliver) from vcl_fetch cannot override an earlier pass, but return(pass) here will prevent the response from being cached.
  3. The return(pass) exit from vcl_pass triggers a backend fetch, just like return(fetch) in vcl_miss, but the pass return state is a reminder that the object is flagged for pass and therefore cannot be cached when processed in vcl_fetch.
  4. The only possible return state from vcl_hash is hash but it will trigger different behavior depending on the earlier return state of vcl_recv. The default return(lookup) in vcl_recv will prompt Fastly to perform a cache lookup and run vcl_hit or vcl_miss after hash. If vcl_recv returns error, then vcl_error is executed after hash. If vcl_recv returns return(pass), then vcl_pass is executed after hash. The hash process is required in all these cases to create a cache object to enable hit-for-pass.
  5. All return states from vcl_recv (except restart) pass through vcl_hash first. lookup and pass both move control to vcl_hash but flag the request differently, which will determine the exit state from vcl_hash.
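The error path described in note 4 is commonly used to generate synthetic responses at the edge. The sketch below raises a custom error in vcl_recv, which passes through vcl_hash to vcl_error; the 600 status code is an arbitrary choice for a custom status, and the robots.txt body is an invented example.

```vcl
sub vcl_recv {
  #FASTLY recv

  # Raise a custom error; control passes through vcl_hash to vcl_error
  if (req.url.path == "/robots.txt") {
    error 600;
  }

  return(lookup);
}

sub vcl_error {
  #FASTLY error

  # Convert our custom error into a real synthetic response
  if (obj.status == 600) {
    set obj.status = 200;
    set obj.http.Content-Type = "text/plain";
    synthetic {"User-agent: *
Disallow: /private/"};
    return(deliver);
  }
}
```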
