
How to Prevent AI Slop with Claude Code

52% of new online articles are AI-generated. Most read the same. Here's how to use CLAUDE.md rules, banned language lists, and editorial constraints to produce content that sounds human.

Last updated: 2026-03-06 · 13 min read

Key Takeaways

  • "Slop" was Merriam-Webster's 2025 Word of the Year, defined as "digital content of low quality produced in quantity by means of artificial intelligence" (Merriam-Webster, December 2025)
  • 52% of new online articles are now AI-generated as of mid-2025, up from a minority months earlier (Futurism, 2025)
  • Google doesn't penalize AI content for being AI-generated. It penalizes low-quality content, and AI makes producing low-quality content at scale dangerously easy
  • The fix isn't avoiding AI tools. It's constraining them. A CLAUDE.md file with banned phrases, citation rules, and structural limits produces output that reads like a human wrote it
  • 77% of consumers want to know if content was AI-generated, and 60% doubt the authenticity of online content (Capgemini, 2025)
  • Every principle in this article is active in the ccforseo.com content workflow right now

The Problem: AI Made Publishing Easy and Reading Painful

AI slop is low-quality, mass-produced content generated by AI tools with minimal human oversight. Merriam-Webster named "slop" its 2025 Word of the Year, reflecting how quickly the internet noticed the flood (CNBC, December 2025). The Economist and Macquarie Dictionary made the same choice independently.

The numbers paint the picture. Over half of new articles published online are now AI-generated (Futurism, 2025). On YouTube, an estimated 21-33% of recommended content qualifies as AI slop, and 278 channels producing nothing but synthetic content generate an estimated $117 million in annual ad revenue (Search Engine Journal, 2025). TikTok disclosed that AI-generated clips on its platform have surpassed 1.3 billion.

The problem isn't that AI writes content. The problem is that unconstrained AI writes the same content. The same cadence, the same vocabulary, the same rhetorical structures repeated across millions of pages. Readers recognize the pattern. Search engines recognize the pattern. And both are devaluing it.

Why Readers Detect AI Content (Even Without Tools)

Humans are pattern-recognition machines. After reading a few hundred AI-generated articles, most people develop an instinct for the "AI voice" even if they can't articulate what triggers it.

The tells aren't subtle. They're structural:

  • The false tricolon. "No fluff. No theory. Just results." This three-part construction appears in roughly every other AI-generated marketing page. It's Claude's and ChatGPT's favorite move.
  • Em-dash overuse. AI models reach for em-dashes the way nervous speakers reach for "um." One per article is fine. Five per paragraph signals a machine.
  • Corporate filler. "Leverage," "robust," "scalable," "streamline," "delve." These words carry zero information. They're padding that AI inserts when it has nothing specific to say.
  • Hedging phrases. "It's worth noting that..." and "You may want to consider..." are AI's way of avoiding commitment. Human experts state things directly.
  • The dramatic introduction. "Enter: Claude Code." "And here's the kicker." "X changed everything." These theatrical transitions are AI's substitute for genuine narrative flow.
  • Question hooks as section openers. "The best part?" "Want access?" "Ready to transform your workflow?" Every AI model defaults to these when transitioning between sections.

The deeper problem: these patterns are self-reinforcing. AI models trained on internet text absorb content that earlier AI models produced. Each generation amplifies the same stylistic tics. The output converges toward a mean that sounds like nobody and everybody at once.

Google's Position: Quality, Not Origin

Google does not penalize content for being AI-generated. That distinction matters. What Google targets is low-quality content at scale, and AI makes scaling low quality trivially easy.

The penalty framework works through two mechanisms:

Scaled content abuse. Publishing large volumes of low-value pages triggers algorithmic demotion regardless of whether a human or AI wrote them. AI accelerates the speed at which sites can hit this threshold (Google Search Central, 2026).

E-E-A-T signals. Google's quality raters evaluate Experience, Expertise, Authoritativeness, and Trustworthiness. The first E (Experience) is the hardest for AI to fake. It requires evidence that the author has done the thing they're writing about, not copied information from other sources (Google Search Central, 2026).

Sites that avoid quality penalties share a pattern: they edit AI drafts, add data Google can't scrape from other sites, match search intent, and update content over time. The tool doesn't matter. The editorial process does.

The CLAUDE.md Approach: Constraining AI at the Source

Claude Code reads a CLAUDE.md file at the root of every project. This file contains persistent instructions that shape every output the tool produces. Think of it as an editorial style guide that the AI follows automatically, on every task, without reminders.
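For orientation, a minimal project layout looks like this (directory names are illustrative):

my-content-site/
  CLAUDE.md          (persistent editorial rules, read on every task)
  .claude/
    skills/          (optional custom skills, covered in Principle 6)
  content/
    drafts/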

This is where slop prevention starts. Not in post-production editing, but in pre-production constraints. Here are the principles that work.

Principle 1: Ban Specific Phrases and Patterns

The single most effective anti-slop measure is a banned language list. Not vague guidance like "write naturally." Specific phrases and structural patterns that the AI cannot use.

## Banned Language (example from a production CLAUDE.md)

- Em-dashes
- Filler: just, very, actually, basically
- Corporate: leverage, robust, scalable, streamline, delve
- Hedging: "It's worth noting", "You may want to consider"
- AI cliche structures:
  - "No X. No Y. Just Z."
  - "It's not just about X. It's about Y."
  - "game-changer" / "supercharge"
  - "Enter: [thing]"
  - "And here's the kicker"
  - "X changed everything"
  - Arrow formatting for lists
  - Short question hooks: "The best part?" / "Want access?"

When you ban these patterns, the AI has to find more natural ways to express ideas. The output starts reading differently from the millions of articles that share the same rhetorical DNA.

The key insight: be specific, not general. "Write in a human tone" is useless. "Never use the phrase 'It's worth noting'" is enforceable. Claude Code follows explicit rules. Vague guidance produces vague results.
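Rules in CLAUDE.md shape generation, but a lint pass catches anything that slips through. Here is a minimal sketch in Python; the phrase list is an illustrative subset, so extend it to match your own banned list:

import re
import sys

# Illustrative subset of a banned-language list. Extend to match your CLAUDE.md.
BANNED = [
    r"\u2014",                  # em-dash
    r"\bleverage\b",
    r"\bdelve\b",
    r"it'?s worth noting",
    r"game-?changer",
    r"and here'?s the kicker",
]

def lint(path):
    violations = 0
    with open(path, encoding="utf-8") as f:
        for line_no, line in enumerate(f, 1):
            for pattern in BANNED:
                if re.search(pattern, line, re.IGNORECASE):
                    print(f"{path}:{line_no}: banned pattern {pattern!r}")
                    violations += 1
    return violations

if __name__ == "__main__":
    sys.exit(1 if lint(sys.argv[1]) else 0)

Run it as a pre-publish gate: a nonzero exit code blocks the draft until the flagged lines are rewritten.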

Principle 2: Enforce Source Verification

AI fabricates statistics. Not maliciously, but because language models generate plausible-sounding claims that may not correspond to reality. A CLAUDE.md rule can prevent this from reaching publication.

## Source Verification Protocol

- Every external claim requires a source with URL and date
- Citation format: "81% of SEOs prioritize AI ([Source](URL), Month Year)"
- If a claim cannot be verified:
  1. Remove the claim, OR
  2. Reframe: "Many SEOs report...", OR
  3. Mark [NEEDS VERIFICATION] for manual research
- Never guess. Never fabricate statistics.

This rule transforms the AI's behavior. Instead of generating "studies show that 73% of marketers prefer..." with no source, Claude Code either finds the real study or flags the gap. The output becomes defensible because every number has a paper trail.
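The same gate can flag statistics that arrive without a citation. A rough heuristic, assuming the citation format above (a markdown link), catches the obvious cases:

import re

STAT = re.compile(r"\b\d{1,3}(?:\.\d+)?%|\bstudies show\b", re.IGNORECASE)
LINK = re.compile(r"\[[^\]]+\]\([^)]+\)")  # markdown link, e.g. [Source](URL)

def uncited_stats(text):
    """Return (line_no, line) pairs that contain a statistic but no citation link."""
    return [
        (no, line)
        for no, line in enumerate(text.splitlines(), 1)
        if STAT.search(line) and not LINK.search(line)
    ]

It won't catch every fabricated number, but it forces a human look at any percentage that lacks a paper trail.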

Principle 3: Structure for Readability, Not AI Convenience

AI defaults to long paragraphs, passive voice, and meandering sentence structures. Explicit constraints fix this.

## Content Structure Rules

- 2-3 sentences per paragraph maximum
- No sentences over 30 words
- Active voice by default
- One idea per paragraph
- Lead each paragraph with its key point
- Bold key terms on first mention in each section
- Target Flesch-Kincaid grade 8-10

These aren't style preferences. They're readability constraints backed by decades of plain language research. Short paragraphs scan better on mobile. Active voice communicates faster. One idea per paragraph prevents the "wall of text" that AI loves to produce.

The Flesch-Kincaid target (grade 8-10) is particularly useful. It pushes the AI toward concrete language and away from the abstract, qualification-heavy prose that characterizes slop.
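The grade target is checkable in the same pipeline. The third-party textstat package implements the formula; a sketch, assuming the package is installed:

import textstat  # pip install textstat

def readability_ok(text, lo=8.0, hi=10.0):
    grade = textstat.flesch_kincaid_grade(text)
    status = "OK" if lo <= grade <= hi else "outside target"
    print(f"Flesch-Kincaid grade: {grade:.1f} ({status})")
    return lo <= grade <= hi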

Principle 4: Define What Stays Human

Some content categories should never be delegated to AI, regardless of how good the constraints are. Making this boundary explicit prevents scope creep.

Categories that stay human:

  • Proprietary data. Screenshots, analytics, internal metrics. Only you have access to this.
  • Contrarian opinions. Evaluative judgment ("Are these results good?") requires context that AI can't replicate.
  • Personal experience. What surprised you. What failed. What you'd do differently. This is the "Experience" in E-E-A-T.
  • Strategic framing. Deciding what to write about and why it matters now. AI can research a topic. It can't tell you which topic your audience needs this week.

The discipline here is knowing where to draw the line. Claude Code handles structure, research scaffolding, source verification, formatting, and schema markup. Humans handle voice, data, judgment, and the "does this pass the smell test" filter.
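The boundary belongs in CLAUDE.md too, so the model flags these gaps instead of filling them. The wording below is illustrative:

## Human-Only Content

- Never invent screenshots, analytics, or internal metrics. Mark the gap: [HUMAN: add data]
- Never state evaluative judgments as the author's opinion. Mark: [HUMAN: add judgment]
- Never write first-person experience. Mark: [HUMAN: add experience]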

Principle 5: Lead With Data, Not Opinions

AI produces confident-sounding opinions on everything. Readers have learned to distrust that confidence. The antidote is a rule that forces every section to open with a verifiable claim.

Compare these two openings:

AI default: "Content quality is becoming increasingly important in the modern digital landscape."

Constrained output: "52% of new online articles are AI-generated as of mid-2025, up from a minority months earlier."

The first is filler. The second is a fact with a source. AI engines can cite the second version. Readers trust the second version. The first version is what millions of AI-generated articles already say, and it communicates nothing.

A simple CLAUDE.md rule enforces this:

## Writing Style
- Lead with data, not opinions. Every claim needs a source.
- No filler intros ("In today's digital landscape..."). Start with the point.

Principle 6: Use Skills for Repeatable Quality

Claude Code supports custom skills: reusable instruction sets for specific tasks. A content production skill can encode all of the above principles into a single command.

Instead of remembering to apply every rule manually, a skill chains them together. Run the skill, get a draft that already follows banned language rules, includes citation blocks, uses proper paragraph structure, and flags unverified claims.

The skill doesn't replace the human editorial pass. It makes the starting draft 80% closer to publishable, so the human can focus on the 20% that requires judgment.

This is the approach we use at ccforseo.com. Every blog post starts from a skill that enforces these constraints automatically. The CLAUDE.md provides the base rules. The skill provides the workflow. The human provides the experience and editorial decisions.
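As a sketch of what such a skill can look like: Claude Code reads skills from SKILL.md files under .claude/skills/, each with a short frontmatter and the workflow steps. The name and steps here are illustrative, not the exact ccforseo.com skill:

.claude/skills/blog-draft/SKILL.md

---
name: blog-draft
description: Draft a blog post that follows the CLAUDE.md editorial rules
---

1. Collect sources first; every statistic needs a URL and date.
2. Draft with the banned-language and structure rules applied.
3. Mark anything unverifiable [NEEDS VERIFICATION].
4. Report the Flesch-Kincaid grade and list [HUMAN] gaps for the editor.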

A Checklist for Your Own CLAUDE.md

If you're using Claude Code for content production, start with these rules:

  • Banned phrases list with at least 15-20 specific AI patterns
  • Citation protocol requiring URL + date for every external claim
  • Paragraph limits (2-3 sentences max, one idea each)
  • Sentence limits (30 words max, active voice default)
  • No filler intros rule (start with the point, not a warm-up)
  • Readability target (Flesch-Kincaid grade 8-10)
  • Human-only categories clearly defined (data, opinions, experience)
  • Verification fallback (remove, reframe, or flag unverified claims)

These rules take 30 minutes to write and save hours of editing on every piece of content. More importantly, they prevent the reputational cost of publishing content that reads like every other AI-generated page on the internet.
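Condensed into a starter file, that checklist looks something like this (adapt the specifics to the tells you see in your own drafts):

## Banned Language
- Em-dashes. Filler: just, very, actually, basically.
- Corporate: leverage, robust, scalable, streamline, delve.
- Cliches: "No X. No Y. Just Z.", "Enter: [thing]", "game-changer", "And here's the kicker".

## Citations
- Every external claim: "[Source](URL), Month Year".
- Unverifiable claims: remove, reframe, or mark [NEEDS VERIFICATION].

## Structure
- 2-3 sentences per paragraph. 30 words per sentence max. Active voice.
- No filler intros. Lead with the point. Flesch-Kincaid grade 8-10.

## Human-Only
- Proprietary data, opinions, first-person experience: mark [HUMAN] and leave the gap.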

The Uncomfortable Truth About AI Content

The problem with AI slop isn't technical. It's editorial. The tools are capable of producing high-quality output when properly constrained. The slop comes from skipping the constraint step because it's faster to hit publish than to build a system that enforces quality.

Every site publishing AI-generated content faces a choice: invest time upfront in editorial rules that prevent slop, or invest time downstream in reputation repair when readers and search engines notice.

The CLAUDE.md approach picks the first option. It embeds quality control into the tool itself, so the default output is closer to human editorial standards than to the median AI-generated article.

That's not a guarantee against slop. The human still needs to read every piece, add experience-based content, and make judgment calls. But it shifts the starting point from "generic AI draft that needs heavy editing" to "structured, source-verified draft that needs human polish."

The gap between those two starting points is the difference between content that gets cited and content that gets scrolled past.


FAQ

What is AI slop?

AI slop is low-quality digital content produced in quantity by artificial intelligence with minimal human oversight. Merriam-Webster named "slop" its 2025 Word of the Year, defining it as content that floods platforms with generic, repetitive material that adds little value for readers (Merriam-Webster, December 2025).

Does Google penalize AI-generated content?

Google does not penalize content specifically for being AI-generated. Google's algorithms target low-quality content and scaled content abuse regardless of how the content was produced. The distinction matters: a well-edited, source-verified AI-assisted article performs the same as a human-written one. The penalty risk comes from publishing low-quality AI output at volume without editorial oversight (Google Search Central, 2026).

What is a CLAUDE.md file?

A CLAUDE.md file is a configuration file placed at the root of a Claude Code project. It contains persistent instructions that Claude Code follows on every task within that project. For content production, it functions as an automated editorial style guide, enforcing rules about language, structure, citations, and formatting without requiring manual reminders on each task.

How do you prevent AI from using cliche phrases?

The most effective method is a banned language list in your CLAUDE.md file. Instead of vague instructions like "write naturally," list specific phrases and structural patterns the AI cannot use. Claude Code follows explicit prohibitions reliably. Ban the false tricolon ("No X. No Y. Just Z."), dramatic introductions ("Enter: [thing]"), em-dash overuse, and corporate filler words. The AI adapts by finding more natural alternatives.

Can AI content rank well in Google Search?

Yes. Google evaluates content quality through E-E-A-T signals (Experience, Expertise, Authoritativeness, Trustworthiness), not by detecting whether AI was involved. AI-assisted content that includes first-hand experience, verified sources, proper E-E-A-T signals, and genuine expertise can rank as well as fully human-written content. The December 2025 core update expanded E-E-A-T evaluation beyond YMYL topics to e-commerce, reviews, SaaS, and how-to content.

How much of the internet is AI-generated content?

As of mid-2025, approximately 52% of new online articles are AI-generated, with earlier predictions from Europol and others estimating up to 90% of online content could be synthetically generated by 2026 (Futurism, 2025). On YouTube, an estimated 21-33% of recommended feed content is AI-generated slop (Search Engine Journal, 2025). TikTok reports over 1.3 billion AI-generated video clips on its platform.

Vytas Dargis

Founder, CC for SEO

Martech PM & SEO automation builder. Bridges marketing, product, and engineering teams. Builds CC for SEO to help SEO professionals automate workflows with Claude Code.
