Robots.txt at the edge
Difficulty level: Beginner
Objective
A robots.txt file tells search engines like Google or Bing how they may crawl your website. For example, a response containing the following body tells crawlers not to crawl your site at all:
User-agent: *
Disallow: /
This is a great example of something you can generate at the edge, especially since you might want to allow indexing of production domains but not preview or staging ones.
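One way to express that distinction is to gate the rule on the Host header in vcl_recv. This is a sketch only: the "staging|preview" hostname pattern is an assumed naming convention (adjust it to your own domains), and the 601 status is picked up by the recipe shown after the task below.

sub vcl_recv {
  # Hypothetical host pattern: serve the blanket Disallow only on
  # non-production domains.
  if (req.url.path == "/robots.txt" && req.http.host ~ "^(staging|preview)\.") {
    error 601 "robots";  # handled by the vcl_error logic shown below
  }
}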
Write VCL code to respond to a request for /robots.txt with a pre-canned response that contains the body shown above.
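A minimal sketch of one way to do this follows. It assumes Fastly VCL: vcl_recv intercepts the request and raises a private status code (601 here is an arbitrary choice), and vcl_error turns that into a synthetic 200 response without ever contacting the origin.

sub vcl_recv {
  if (req.url.path == "/robots.txt") {
    # Hand off to vcl_error with a private status code so we can
    # generate the response at the edge.
    error 601 "robots";
  }
}

sub vcl_error {
  if (obj.status == 601 && obj.response == "robots") {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "text/plain";
    # The {"..."} long-string form allows a literal newline between
    # the two robots.txt directives.
    synthetic {"User-agent: *
Disallow: /
"};
    return(deliver);
  }
}

Using error to jump into vcl_error is a common Fastly pattern for synthetic responses; the request never reaches a backend, so the file is served entirely from the edge.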