Robots.txt at the edge

Difficulty level: Beginner


A robots.txt file specifies how search engines like Google or Bing should crawl your website. For example, a response with the following body tells crawlers not to crawl any part of your site:

User-agent: *
Disallow: /

This is a great example of something you can generate at the edge, especially since you might want to allow indexing of production domains but not preview or staging ones.
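One way to approach the environment split, sketched below on the assumption that environments are distinguished by hostname (the hostname used here is hypothetical, and status code 600 is an arbitrary custom code, not a Fastly requirement), is to intercept the request in vcl_recv only on non-production hosts:

```vcl
sub vcl_recv {
  # Serve a synthetic robots.txt only on non-production hosts.
  # "www.example.com" stands in for a hypothetical production hostname.
  if (req.url.path == "/robots.txt" && req.http.host != "www.example.com") {
    # Hand off to vcl_error, where a synthetic response can be generated.
    error 600;
  }
}
```

Production requests for /robots.txt then fall through to the origin unchanged, while preview and staging hosts get a blocking response generated at the edge.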

Write VCL code to respond to a request for /robots.txt with a pre-canned response that contains the body shown above.
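A minimal sketch of one possible solution follows; the custom status code 600 is an arbitrary choice used only to route the request into vcl_error:

```vcl
sub vcl_recv {
  if (req.url.path == "/robots.txt") {
    # Trigger a synthetic response via vcl_error.
    error 600;
  }
}

sub vcl_error {
  if (obj.status == 600) {
    set obj.status = 200;
    set obj.response = "OK";
    set obj.http.Content-Type = "text/plain";
    # The synthetic statement sets the response body directly at the edge,
    # so the request never reaches your origin.
    synthetic {"User-agent: *
Disallow: /
"};
    return(deliver);
  }
}
```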


For a guide to how challenges work, see getting started.
