Robots.txt Generator
/robots-gen
Generate and validate robots.txt files for your site. Handles multi-bot rules, crawl-delay directives, and sitemap declarations.
Get this skill
Install all free SEO skills with one command:
$ curl -fsSL https://raw.githubusercontent.com/ccforseo/seo-skills/main/install.sh | bash

Or install this skill manually:
$ git clone https://github.com/ccforseo/seo-skills.git
$ cp -r seo-skills/robots-txt-generator ~/.claude/skills/robots-txt-generator

What it does
The Robots.txt Generator skill creates properly formatted robots.txt files tailored to your site structure and crawl requirements. It generates rules for Googlebot, Bingbot, and other crawlers, sets appropriate disallow paths for admin areas and duplicate content, and includes sitemap references. The skill also validates existing robots.txt files for syntax errors.
Key features
- Generates rules for multiple user agents (Googlebot, Bingbot, etc.)
- Sets disallow paths for admin, staging, and duplicate content
- Includes sitemap and sitemap index references
- Validates existing robots.txt files for syntax errors
- Supports crawl-delay directives per bot
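The features above combine into a single file. A minimal sketch of the kind of output the skill produces (the domain and paths here are illustrative, not defaults):

```
User-agent: *
Disallow: /admin/
Disallow: /search/

User-agent: Bingbot
Crawl-delay: 5

Sitemap: https://example.com/sitemap.xml
```

Note that Googlebot ignores Crawl-delay, so the directive is only useful under user-agent groups for bots that honor it, such as Bingbot.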
How to use Robots.txt Generator
Run the command
Type /robots-gen and provide your site URL. Optionally specify paths to block, sitemaps to include, or an existing robots.txt to validate.
Customize the rules
The skill generates a default configuration. Adjust user-agent rules, add disallow paths, or modify crawl-delay values as needed.
Deploy the file
Copy the output to your site's root directory as robots.txt. The skill confirms the file is accessible at your-domain.com/robots.txt.
When to use it
Setting up robots.txt for a new website launch
Blocking crawlers from staging or internal search pages
Adding AI bot restrictions (GPTBot, ClaudeBot, etc.)
Validating existing robots.txt after a site migration
Frequently asked questions
Should I block AI crawlers in robots.txt?
It depends on your strategy. Blocking GPTBot, ClaudeBot, or other AI crawlers prevents your content from being used to train future models. If AI visibility is important to you, keep them allowed. The skill lets you set rules per bot.
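If you do decide to block AI crawlers, the per-bot rules are simple: a group for each named bot that disallows everything, leaving all other crawlers unaffected. For example:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```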
Can a robots.txt file hurt my SEO?
Yes, if configured incorrectly. Accidentally blocking Googlebot from important pages removes them from search results. The skill validates your rules and warns about overly broad disallow patterns.
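You can also sanity-check rules yourself before deploying, using Python's standard-library `urllib.robotparser`. This sketch (with hypothetical paths and domain) verifies that an intended block does not spill over onto important pages, and shows how an overly broad pattern blocks everything:

```python
from urllib.robotparser import RobotFileParser

# Intended rules: block only the admin area.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
])

# The admin area is blocked for all bots...
print(rp.can_fetch("Googlebot", "https://example.com/admin/login"))      # False
# ...but ordinary pages remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/products/widget"))  # True

# An overly broad pattern ("Disallow: /") blocks everything,
# including the homepage. Use a fresh parser per candidate file.
broad = RobotFileParser()
broad.parse(["User-agent: *", "Disallow: /"])
print(broad.can_fetch("Googlebot", "https://example.com/"))  # False
```

This only checks crawl permissions against the standard matching rules; it does not replicate every search engine's extensions (such as wildcards in `Allow` handling), so treat it as a first-pass check rather than a full validator.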
What is the difference between robots.txt and meta robots?
Robots.txt controls crawling (whether bots visit a page). Meta robots control indexing (whether a crawled page appears in search results). Use robots.txt to manage crawl budget and meta robots for page-level index control.
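The two mechanisms live in different places: a crawl rule in robots.txt versus an indexing directive in the page's HTML head. Illustrative fragments (the path is hypothetical):

```
# robots.txt — stop bots from crawling a section
User-agent: *
Disallow: /internal-search/
```

```html
<!-- in the page's <head> — let bots crawl, but keep the page out of results -->
<meta name="robots" content="noindex, follow">
```

One caveat worth remembering: a `noindex` tag only works if the page can be crawled, so don't combine it with a robots.txt block on the same URL.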
How do I reference my sitemap in robots.txt?
Add a Sitemap: directive with your full sitemap URL. The skill includes this automatically. You can reference multiple sitemaps or a sitemap index file pointing to all sub-sitemaps.
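`Sitemap:` lines are independent of user-agent groups and can appear anywhere in the file. Listing several sitemaps or a single index file both work (URLs here are illustrative):

```
Sitemap: https://example.com/sitemap-pages.xml
Sitemap: https://example.com/sitemap-posts.xml

# or a single index that points to all sub-sitemaps:
Sitemap: https://example.com/sitemap_index.xml
```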
Related skills
Sitemap Auditor
/sitemap-audit
Validate your XML sitemap for errors, missing pages, and inconsistencies. Checks lastmod dates, priorities, URL counts, and cross-references against actual crawlable pages.
Technical SEO
Site Audit
/seo-audit
Run a full technical SEO audit from your terminal. Checks crawlability, indexation issues, meta tags, page speed signals, and site structure in one pass.
Technical SEO
Hreflang Validator
/hreflang-check
Validate hreflang implementation across your multilingual site. Detects missing return tags, incorrect language codes, and x-default issues.
Technical SEO
Broken Link Checker
/broken-links
Find all broken internal and external links on your site. Reports 404s, 5xx errors, timeout links, and redirect chains with their source pages.
Get all 50 skills
The SEO Command Center includes every skill in this library plus Python data fetchers, CLAUDE.md configs, and report templates. One-time purchase.