Using VCL

Fastly VCL is a domain-specific language derived from the Varnish proxy cache, which is part of Fastly's platform architecture. It's intentionally limited in scope, which allows us to run it extremely fast, make it available to all requests that pass through Fastly, and maintain the security of the Fastly network. With VCL, you can do anything from adding a cookie or setting a Cache-Control header to implementing a complete paywall solution.
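
For example, the first two of those tasks take a single line of VCL each in the vcl_deliver subroutine (described below). This is a minimal sketch; the header values shown are arbitrary illustrations:

# In vcl_deliver: adjust the response just before it is returned to the client
set resp.http.Cache-Control = "max-age=60";      # illustrative cache lifetime
add resp.http.Set-Cookie = "seen=true; path=/";  # hypothetical cookie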

VCL services on Fastly do not provide a single entry point for your application code. Instead, we expose a number of hooks, in the form of built-in subroutines, and these are called at significant moments in the lifecycle of each HTTP request that passes through your service. As a result, code that you upload to a Fastly VCL service is known as a configuration, not an application.

The VCL request lifecycle

The following subroutines are triggered by Fastly if they are defined in your configuration. Bracketed numbers refer to the numbered notes near the end of this article:

Name | Trigger point | Default return state | Alternative return states
vcl_recv | Client request received | lookup [1] | pass, error, restart
vcl_hash | A cache key will be calculated | hash [2] | (none)
vcl_hit | An object has been found in cache | deliver | pass, error, restart
vcl_miss | Nothing was found in the cache, preparing backend fetch | fetch | deliver_stale, pass, error
vcl_pass | Cache passed, preparing backend fetch | pass [3] | error
vcl_fetch | Origin response headers received | deliver [4] | deliver_stale, pass, error, restart
vcl_error | Error triggered (explicitly or by Fastly) | deliver | restart
vcl_deliver | Preparing to deliver response to client | deliver | restart
vcl_log | Finished sending response to client | deliver [5] | (none)

The return state of a subroutine determines the next action taken by the Fastly cache server. This is better illustrated in a flow diagram:

[Flow diagram: the RECV, HASH, HIT, PASS, MISS, FETCH, ERROR, DELIVER, and LOG states, with transitions marked as default path, alternative path, error, restart, and fetch server.]
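
To make the alternative return states concrete, the following fragments (shown without their surrounding subroutine declarations) send requests for an assumed /admin path to vcl_error, where a synthetic response is built, and bypass the cache for requests that carry an Authorization header. The path, status code, and response text are purely illustrative:

# In vcl_recv: divert from the default lookup path
if (req.url.path ~ "^/admin") {
  error 403 "Forbidden";   # hands control to vcl_error (via vcl_hash)
}
if (req.http.Authorization) {
  return(pass);            # fetch from origin without caching the response
}

# In vcl_error: build a synthetic response for the blocked request
if (obj.status == 403) {
  set obj.http.Content-Type = "text/plain";
  synthetic "This path is not available.";
  return(deliver);
}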

Adding VCL to your service configuration

Everything that your service does is powered by VCL, including any high-level features that you enable in the web interface or via the API, so your own VCL must be combined with the code generated by those features. To support this, we include 'macros' in the VCL program, one in each subroutine, such as #FASTLY recv, which mark where Fastly's generated code is inserted.
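
For example, a minimal custom vcl_recv keeps the macro at the top and places your own logic after it; the return state shown is simply the default:

sub vcl_recv {
  #FASTLY recv
  # Fastly inserts its generated code at the macro above; your own logic follows
  return(lookup);
}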

The VCL that Fastly generates when you don't upload your own code also includes a number of small tweaks that help our default behaviors match expectations or avoid problems. When you use your own VCL these are not included, so if you want them you need to add them yourself.

This gives you a number of options for getting VCL into your configuration. Let's look at them in increasing order of control:

  1. Use VCL generative objects: Using the web interface or API, add high-level objects like headers, responses, and conditions. VCL will be generated for you.
  2. Use VCL snippets: By adding your custom VCL code as snippets, you can insert code into VCL subroutines without having to manage the Fastly-generated code around it. Your code snippets will be added at the end of the subroutine you select (see the example after this list).
  3. Use Custom VCL: Custom VCL allows you to upload a full VCL source file, which entirely replaces the one that would otherwise be generated by Fastly. So that features you select in the web interface can still work, we require that custom VCL files include Fastly's code macros, one in each subroutine.
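
As an illustration of option 2, a snippet attached to vcl_recv contains only the statements to be inserted, not the surrounding subroutine declaration or #FASTLY macro. The URL path and header name below are hypothetical:

# A vcl_recv snippet: Fastly adds this at the end of the generated vcl_recv
if (req.url.path ~ "^/api/") {
  set req.http.X-Request-Type = "api";  # hypothetical header used for later routing or logging decisions
}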

If you choose option 3, we recommend that you start from the following 'boilerplate' when writing your own VCL code.

HINT: If you use the Fastly Image Optimizer, use the image optimization VCL boilerplate instead of this one.

sub vcl_recv {
  #FASTLY recv

  # Normally, you should consider requests other than GET and HEAD to be uncacheable
  # (to this we add the special FASTLYPURGE method)
  if (req.method != "HEAD" && req.method != "GET" && req.method != "FASTLYPURGE") {
    return(pass);
  }

  return(lookup);
}

sub vcl_hash {
  #FASTLY hash

  # Build the cache key from the Host header and the URL
  set req.hash += req.http.host;
  set req.hash += req.url;

  return(hash);
}

sub vcl_hit {
  #FASTLY hit
  return(deliver);
}

sub vcl_miss {
  #FASTLY miss
  return(fetch);
}

sub vcl_pass {
  #FASTLY pass
  return(pass);
}

sub vcl_fetch {
  #FASTLY fetch

  # In the event of a server-failure response from origin, retry once more
  if ((beresp.status == 500 || beresp.status == 503) && req.restarts < 1 && (req.method == "GET" || req.method == "HEAD")) {
    restart;
  }

  # Log the number of restarts for debugging purposes
  if (req.restarts > 0) {
    set beresp.http.Fastly-Restarts = req.restarts;
  }

  # If the response is setting a cookie, make sure it is not cached
  if (beresp.http.Set-Cookie) {
    return(pass);
  }

  # By default we set a TTL based on the `Cache-Control` header but we don't parse additional directives
  # like `private` and `no-store`. Private in particular should be respected at the edge:
  if (beresp.http.Cache-Control ~ "(private|no-store)") {
    return(pass);
  }

  # If no TTL has been provided in the response headers, set a default
  if (beresp.http.Expires || beresp.http.Surrogate-Control ~ "max-age" || beresp.http.Cache-Control ~ "(s-maxage|max-age)") {
    # Keep the TTL provided by the origin
  } else {
    # Apply a default TTL when the origin did not specify one
    set beresp.ttl = 3600s;
  }

  return(deliver);
}

sub vcl_error {
  #FASTLY error
  return(deliver);
}

sub vcl_deliver {
  #FASTLY deliver
  return(deliver);
}

sub vcl_log {
  #FASTLY log
}

WARNING: Personal data should not be incorporated into VCL. Our Compliance and Law FAQ describes in detail how Fastly handles personal data privacy.

Constraints and limitations

VCL services are subject to the following restrictions or limits:

Item | Limit | Implications of exceeding the limit
URL size | 8KB | VCL processing is skipped and a "Too long request string" error is emitted.
Cookie header size | 32KB | The Cookie header is unset, Fastly sets req.http.Fastly-Cookie-Overflow = "1", and your VCL then runs as normal.
Request header size | 69KB | Depending on the circumstances, Fastly may close the client connection abruptly, or the client may receive a 502 Gateway Error response with "I/O error" in the body, or a 503 Service Unavailable response with "Header overflow" in the body.
Response header size | 69KB | A 503 error is triggered with an obj.response value of "backend read error". This error can be intercepted in vcl_error. See Common 503 errors for more info.
Request header count | 96 | VCL processing is skipped, or aborted if in progress, and a response with "Header overflow" in the body is emitted. Fastly adds a number of headers to the request, so the practical limit is lower and not a predictable constant; assuming a practical limit of 85 is safe.
Response header count | 96 | VCL processing is skipped, or aborted if in progress, and a response with "Header overflow" in the body is emitted. Fastly adds a number of headers to the response, so the practical limit is lower and not a predictable constant; assuming a practical limit of 85 is safe.
req.body size | 8KB | Larger request bodies leave req.body empty; the request body is available in req.body only for payloads smaller than 8KB.
Surrogate key size | 1KB | Requests to the purge API that cite longer keys will fail, so in practice there is no point tagging content with keys exceeding this limit.
Surrogate key header size | 16KB | Only keys that fall entirely within the first 16KB of the surrogate key header value are applied to the cache object.
VCL file size | 1MB | Attempts to upload a larger VCL file via the API will fail.
VCL total size | 3MB | Attempts to upload VCL via the API will fail if the payload would push your total service VCL over this limit.
restart limit | 3 restarts | The 4th invocation of the restart statement triggers a 503 error, which can be intercepted in vcl_error.
Edge dictionary item count | 1000 | Attempts to create dictionary items beyond the limit will fail. Contact support@fastly.com to discuss raising this limit.
Edge dictionary item key length | 256 characters | Attempts to create dictionary items with longer keys will fail.
Edge dictionary item value length | 8000 characters | Attempts to create dictionary items with longer values will fail.
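
Some of these limits surface in ways that can be handled in VCL. For example, the cookie overflow flag described above can be checked in vcl_recv, and the 503 produced by an oversized origin response can be intercepted in vcl_error; the handling shown here (a pass and a plain-text synthetic response) is just one possible choice:

# In vcl_recv: the Cookie header exceeded 32KB and has been removed by Fastly
if (req.http.Fastly-Cookie-Overflow == "1") {
  return(pass);  # illustrative: treat cookie-dependent requests as uncacheable
}

# In vcl_error: intercept the 503 raised when origin response headers exceed 69KB
if (obj.status == 503 && obj.response == "backend read error") {
  set obj.http.Content-Type = "text/plain";
  synthetic "The origin response could not be processed.";
  return(deliver);
}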

The following notes refer to the bracketed numbers in the request lifecycle table earlier in this article:

  1. All return states from vcl_recv (except restart) pass through vcl_hash first. lookup and pass both move control to vcl_hash but flag the request differently, which determines the exit state from vcl_hash.
  2. The only possible return state from vcl_hash is hash, but it triggers different behavior depending on the earlier return state of vcl_recv. The default return(lookup) in vcl_recv prompts Fastly to perform a cache lookup and run vcl_hit or vcl_miss after hash. If vcl_recv returns error, then vcl_error is executed after hash. If vcl_recv returns return(pass), then vcl_pass is executed after hash. The hash process is required in all these cases to create a cache object, which enables hit-for-pass.
  3. The return(pass) exit from vcl_pass triggers a backend fetch, similarly to return(fetch) in vcl_miss, but the altered return state is a reminder that the object is flagged for pass, so it cannot be cached when processed in vcl_fetch.
  4. Returning return(deliver) from vcl_fetch cannot override an earlier pass, but return(pass) here will prevent the response from being cached.
  5. The return state from vcl_log simply terminates request processing.
