# robots.txt
#
# Web Robots (also known as Web Wanderers, Crawlers, or Spiders) are programs
# that traverse the Web automatically. Search engines such as Google use them
# to index web content, spammers use them to scan for email addresses, and
# they have many other uses.
#
# Web site owners use the /robots.txt file to give instructions about their
# site to web robots; this is called the Robots Exclusion Protocol.
#
# Don't try to use /robots.txt to hide information: the file is publicly
# readable, and compliance with it is entirely voluntary.
#
# "Disallow: /" tells robots that they should not visit any pages on the
# site. A commented example of a more targeted rule appears at the end of
# this file.
#
User-agent: *
Disallow: /auth/
Disallow: /callimachus/
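#
# Illustration only (commented out so it has no effect on crawlers): a
# hypothetical rule set that would block a single crawler from a single
# directory. The user-agent name "ExampleBot" and the /private/ path are
# made up for this example.
#
#   User-agent: ExampleBot
#   Disallow: /private/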