Technical SEO

Robots.txt Generator

/robots-gen

Generate and validate robots.txt files for your site. Handles multi-bot rules, crawl-delay directives, and sitemap declarations.

Get this skill

Install all free SEO skills with one command:

curl -fsSL https://raw.githubusercontent.com/ccforseo/seo-skills/main/install.sh | bash

Or install this skill manually:

$ git clone https://github.com/ccforseo/seo-skills.git
$ cp -r seo-skills/robots-txt-generator ~/.claude/skills/robots-txt-generator
View on GitHub · Free · MIT License · 8 skills included

What it does

The Robots.txt Generator skill creates properly formatted robots.txt files tailored to your site structure and crawl requirements. It generates rules for Googlebot, Bingbot, and other crawlers, sets appropriate disallow paths for admin areas and duplicate content, and includes sitemap references. The skill also validates existing robots.txt files for syntax errors.

Key features

  • Generates rules for multiple user agents (Googlebot, Bingbot, etc.)
  • Sets disallow paths for admin, staging, and duplicate content
  • Includes sitemap and sitemap index references
  • Validates existing robots.txt files for syntax errors
  • Supports crawl-delay directives per bot
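Output from the generator might look like the following sketch; the paths, crawl delays, and sitemap URL are illustrative placeholders, not the skill's fixed defaults:

```text
# Per-bot rules (illustrative)
User-agent: Googlebot
Crawl-delay: 5
Disallow: /admin/
Disallow: /search/

User-agent: Bingbot
Crawl-delay: 10
Disallow: /admin/

# Fallback for all other crawlers
User-agent: *
Disallow: /admin/
Disallow: /staging/

Sitemap: https://example.com/sitemap_index.xml
```

Note that Google ignores the Crawl-delay directive, so it mainly affects other crawlers such as Bingbot.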

How to use Robots.txt Generator

  1. Run the command

    Type /robots-gen and provide your site URL. Optionally specify paths to block, sitemaps to include, or an existing robots.txt to validate.

  2. Customize the rules

    The skill generates a default configuration. Adjust user-agent rules, add disallow paths, or modify crawl-delay values as needed.

  3. Deploy the file

    Copy the output to your site's root directory as robots.txt. The skill confirms the file is accessible at your-domain.com/robots.txt.
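Once deployed, you can also sanity-check the rules yourself with Python's standard library; the rules and URLs below are illustrative placeholders, not output from the skill:

```python
from urllib.robotparser import RobotFileParser

# Parse robots.txt rules locally and spot-check that important
# URLs are still crawlable while blocked paths stay blocked.
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /staging/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Public pages remain crawlable...
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))    # True
# ...while the blocked admin area is not.
print(rp.can_fetch("Googlebot", "https://example.com/admin/login"))  # False
```

For a live check, `rp.set_url("https://your-domain.com/robots.txt")` followed by `rp.read()` fetches and parses the deployed file instead.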

When to use it

  • Setting up robots.txt for a new website launch
  • Blocking crawlers from staging or internal search pages
  • Adding AI bot restrictions (GPTBot, ClaudeBot, etc.)
  • Validating existing robots.txt after a site migration

Frequently asked questions

Should I block AI crawlers in robots.txt?

It depends on your strategy. Blocking GPTBot, ClaudeBot, or other AI crawlers prevents your content from being used to train future models. If AI visibility is important to you, keep them allowed. The skill lets you set rules per bot.
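One common pattern is to block model-training crawlers by user agent while leaving search crawlers untouched; the rules below are an illustrative sketch (an empty Disallow means "allow everything" for that bot):

```text
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Search crawlers remain unrestricted
User-agent: Googlebot
Disallow:
```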

Can a robots.txt file hurt my SEO?

Yes, if configured incorrectly. Accidentally blocking Googlebot from important pages removes them from search results. The skill validates your rules and warns about overly broad disallow patterns.
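To see why broad patterns are dangerous: Disallow rules are prefix matches, so a pattern that is too short blocks more than intended. A minimal sketch with the standard library (paths are illustrative):

```python
from urllib.robotparser import RobotFileParser

# "Disallow: /p" was meant to block /private/, but as a prefix
# match it also hides /pricing, /products, and anything else
# whose path starts with "/p".
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /p"])

print(rp.can_fetch("Googlebot", "https://example.com/private/"))  # False (intended)
print(rp.can_fetch("Googlebot", "https://example.com/pricing"))   # False (accidental)
print(rp.can_fetch("Googlebot", "https://example.com/about"))     # True
```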

What is the difference between robots.txt and meta robots?

Robots.txt controls crawling (whether bots fetch a page at all). The meta robots tag controls indexing (whether a crawled page appears in search results). Use robots.txt to manage crawl budget and meta robots for page-level index control.
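The distinction is easy to see side by side; the path and directive values below are illustrative:

```text
# robots.txt (served at the site root) — controls crawling:
# bots never fetch the matching pages at all
User-agent: *
Disallow: /internal-search/
```

```html
<!-- meta robots (in a page's <head>) — controls indexing:
     the page is crawled but kept out of search results -->
<meta name="robots" content="noindex, follow">
```

The two don't combine: if robots.txt blocks a page, crawlers never see its meta robots tag, so a noindex on a blocked page has no effect.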

How do I reference my sitemap in robots.txt?

Add a Sitemap: directive with your full sitemap URL. The skill includes this automatically. You can reference multiple sitemaps or a sitemap index file pointing to all sub-sitemaps.
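For example, a robots.txt file can list several sitemaps directly, or a single index file; the URLs below are placeholders:

```text
Sitemap: https://example.com/sitemap-posts.xml
Sitemap: https://example.com/sitemap-pages.xml

# or one index file that points at all sub-sitemaps:
Sitemap: https://example.com/sitemap_index.xml
```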

Get all 50 skills

The SEO Command Center includes every skill in this library plus Python data fetchers, CLAUDE.md configs, and report templates. One-time purchase.

Browse all skills