TL;DR
Add explicit robots.txt Allow rules for ClaudeBot, Claude-SearchBot, and Claude-User to unlock Claude.ai citations and shopping cards. ClaudeBot also respects the non-standard Crawl-delay directive, so you can throttle bursts without blocking visibility. Follow the quick-start snippet below, test with cURL, and enrich pages with Product/FAQ schema.
Why ClaudeBot Matters to Revenue
Anthropic's conversational assistant Claude leapt from 18% to 29% enterprise market share in 2025. Its crawler, ClaudeBot, now sits among the five busiest AI spiders, logging million-hit days on knowledge bases and e-commerce catalogs. If your products or guides aren't fetchable, a rival's content will fill the answer pane.
2025 Market & Traffic Stats
| Metric | Value | Why It Matters |
|---|---|---|
| AI & search crawler traffic share (May 2024 → May 2025) | +18% YoY | Non-human visits now rival human traffic |
| ClaudeBot support for Crawl-delay | Yes | Rare among AI bots; lets you pace requests |
| Sites explicitly allowing ClaudeBot | 43% of top-10K domains | Early adopters own Claude answer boxes |
Meet Anthropic's Crawlers
ClaudeBot vs Claude-SearchBot vs Claude-User
| Purpose | User-agent | Behaviour | Robots.txt Support |
|---|---|---|---|
| Model training | ClaudeBot | Wide, steady crawl | Allow/Disallow, Crawl-delay |
| Search index | Claude-SearchBot | Spike bursts tied to queries | Allow/Disallow |
| On-demand fetch | Claude-User | Low-volume, real-time pulls | Allow/Disallow |
Key point: ClaudeBot is the only major AI crawler that honours the Crawl-delay directive.
How to Spot Them in Logs
grep -E "ClaudeBot|Claude-SearchBot|Claude-User" access.logLook for bursts exceeding 10 req/s as a signal you may need throttling.
Robots.txt Configuration
Quick-Start Allow Block
```
# Anthropic allow-list (training + search)
User-agent: ClaudeBot
Allow: /

User-agent: Claude-SearchBot
Allow: /

User-agent: Claude-User
Allow: /
```

Place this block above any wildcard `User-agent: *` rules so it isn't overridden.
Crawl-delay & Burst Throttling
ClaudeBot honours the non-standard Crawl-delay directive. Start conservative:
```
User-agent: ClaudeBot
Crawl-delay: 2
```

If spikes persist, return HTTP 429 with a `Retry-After` header or apply CDN rules (see the burst test below).
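To verify the gating once it's configured, a quick burst test from the command line (hypothetical domain; tune the request count to your threshold):

```bash
# Fire 20 rapid requests as ClaudeBot and print only the status codes.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" -A "ClaudeBot" "https://yoursite.com/"
done
# Expect 200s until the limit trips, then 429s. Confirm the 429 responses
# carry a Retry-After header with: curl -sI -A "ClaudeBot" https://yoursite.com/
```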
Troubleshooting Flowchart
- Add rules to robots.txt
- Test: `curl -A "ClaudeBot" https://yoursite.com/robots.txt` and check 200 vs 403 (see the script after this list)
- Observe logs for Claude-SearchBot within 24 hours
- Burst >10 req/s? Add Crawl-delay or 429 gating
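A short sketch that runs the same check for all three Anthropic agents (substitute your own domain):

```bash
# Fetch robots.txt as each Anthropic user-agent and report the HTTP status.
for ua in ClaudeBot Claude-SearchBot Claude-User; do
  code=$(curl -s -o /dev/null -w "%{http_code}" -A "$ua" "https://yoursite.com/robots.txt")
  echo "$ua -> $code"   # 200 = reachable; 403 usually means a WAF/CDN rule blocks the UA
done
```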
Schema & Content Optimisation
Product / Article JSON-LD Essentials
Embed Product, Offer, and AggregateRating within 32 KB to avoid truncation. Claude uses this markup to build price-and-rating cards in chat snippets:
```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Eco-Blend Running Socks",
  "sku": "ECO-SOCK-001",
  "offers": {
    "@type": "Offer",
    "price": "18.00",
    "priceCurrency": "USD"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "127"
  }
}
```

For blog and documentation pages, add Article plus an FAQPage block. Claude-SearchBot pulls FAQ answers verbatim: prime real estate for your messaging.
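A minimal FAQPage sketch to pair with the markup above; the question and answer text are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Do Eco-Blend Running Socks run true to size?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Sizing follows standard US sock sizes; see the size chart on the product page."
      }
    }
  ]
}
```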
Cross-Industry Mini Cases
| Sector | Quick Win | Result |
|---|---|---|
| D2C Retail | Serve variant URLs with stock schema | 8% uplift in Claude-sourced sessions (GWA client, Q2 2025) |
| B2B SaaS | Allow public API docs, throttle bursts | 38% less bot bandwidth, citations maintained |
| Healthcare | Peer-review citations + HIPAA notice | Complies with June 2025 core update trust focus |
Risk & Compliance
Bandwidth — Use Crawl-delay plus 429 gating to prevent hammering.
IP Ranges — Not published; robots.txt is the main control mechanism.
Licensing — State AI-training permissions clearly in your Terms of Service.
CAPTCHA / Auth — Claude bots do not bypass CAPTCHAs or login walls.
Implementation Checklist
- Back up your current robots.txt
- Insert allow block for all three Anthropic agents
- Run `curl -A "ClaudeBot"` tests
- Monitor logs for new hits
- Create GA4 filter for `utm_source=claude.ai`
- Audit schema coverage across key pages
- Review server load after 14 days
- Document steps in SOP
- Schedule quarterly crawl audit
- Book an expert SEO Audit to benchmark AI readiness
FAQs
What is ClaudeBot?
ClaudeBot is Anthropic's primary web crawler, gathering public pages for model training and Claude.ai answer generation.
Is ClaudeBot safe for SEO?
Yes. It respects robots.txt, including Crawl-delay, and allowing it has no effect on your Google rankings.
How do I block ClaudeBot?
Add `User-agent: ClaudeBot` plus `Disallow: /`, as shown below. Leave Claude-SearchBot allowed if you still want visibility in Claude answers.
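A minimal sketch of that split policy:

```
User-agent: ClaudeBot
Disallow: /

User-agent: Claude-SearchBot
Allow: /
```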
Does ClaudeBot ignore crawl-delay?
No. Anthropic confirms compliance with this directive, making it unique among major AI crawlers.
Where can I see ClaudeBot traffic?
Filter server logs or create a GA4 custom dimension for utm_source=claude.ai.
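For a quick log-side view, a sketch assuming combined log format (where `$7` is the request path):

```bash
# List the most-requested pages among visits referred from Claude.ai.
# Matches the referrer field or utm_source=claude.ai query strings.
grep -i 'claude\.ai' access.log | awk '{print $7}' | sort | uniq -c | sort -rn | head
```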
Next Steps
Ready to translate AI crawler traffic into revenue? Start with a comprehensive SEO Audit—our team benchmarks crawl health, schema depth, and Claude eligibility in two weeks. Need content that earns citations? Explore our data-driven SEO programs that turn insights into demand.