Control how search engine bots crawl your website. Build a valid robots.txt file in seconds — configure crawl rules, block paths, add sitemaps, and download instantly.
A robots.txt file is a plain-text file placed at the root of your website that tells search engine crawlers which paths they may and may not crawl. It is advisory: reputable bots honor it, but it is not an access-control mechanism.
Every website should have a robots.txt file. Without one, search engines may crawl everything they can reach, including admin panels, login pages, staging environments, and duplicate content, which can harm your SEO and waste your crawl budget.
The file lives at yourdomain.com/robots.txt and is typically the first file a crawler requests before it starts crawling your site. Keep in mind that Disallow stops crawling, not indexing: a blocked page can still appear in search results if other sites link to it.
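For example, a minimal robots.txt might look like this (the domain and paths are illustrative, not defaults produced by the tool):

```
User-agent: *
Disallow: /admin/
Disallow: /login/
Allow: /admin/public/
Sitemap: https://example.com/sitemap.xml
```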
Key directives:
• User-agent — specifies which bot the rule applies to (* = all bots)
• Disallow — blocks a path from being crawled
• Allow — explicitly permits a path, overriding a broader Disallow (Google applies the most specific matching rule)
• Sitemap — gives crawlers the absolute URL of your XML sitemap
• Crawl-delay — asks bots to wait between requests to reduce server load (honored by Bing and some others; ignored by Googlebot)
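To see how a well-behaved crawler interprets these directives, here is a short sketch using Python's standard-library parser. The domain and paths are hypothetical examples, not part of any generated file:

```python
# Sketch: how a compliant bot evaluates robots.txt rules, using
# Python's built-in urllib.robotparser. Domain/paths are hypothetical.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /admin/public/
Disallow: /admin/
Sitemap: https://example.com/sitemap.xml
""".splitlines()

rp = RobotFileParser()
rp.modified()    # mark the ruleset as freshly loaded
rp.parse(rules)  # feed the robots.txt lines directly

print(rp.can_fetch("*", "https://example.com/admin/"))          # False (blocked)
print(rp.can_fetch("*", "https://example.com/admin/public/x"))  # True  (allowed)
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True  (no rule matches)
```

One design note: Python's parser applies the first matching rule, while Googlebot uses the most specific (longest) match, so listing Allow before its broader Disallow keeps both interpretations consistent.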
🤖 Generated by TrafficTool.in — Free SEO & Web Utilities Toolkit
No signup · No limits · No tracking · Built for SEOs and developers everywhere