Robots.txt Tester

Inspect how driving-license-germany.beto.world controls crawler access: blocked paths, sitemap references, and AI crawler rules.

Robots.txt Status

Status: Present
Score: 92/100 · Strong
Domain: driving-license-germany.beto.world
Last analyzed: May 9, 2026


Robots.txt Content Preview

# Block data scrapers
User-agent: CCBot
Disallow: /
User-agent: Bytespider
Disallow: /

# Allow AI Search & Citation bots
User-agent: GPTBot
Allow: /
User-agent: ChatGPT-User
Allow: /
User-agent: OAI-SearchBot
Allow: /
User-agent: PerplexityBot
Allow: /
User-agent: ClaudeBot
Allow: /
User-agent: Claude-SearchBot
Allow: /
User-agent: Claude-User
Allow: /
User-agent: anthropic-ai
Allow: /
User-agent: Google-Extended
Allow: /
User-agent: Applebot
Allow: /
User-agent: Applebot-Extended
Allow: /
User-agent: Bingbot
Allow: /
User-agent: Meta-ExternalAgent
Allow: /

# Allow standard search engine crawlers
User-agent: *
Allow: /

Sitemap: https://driving-license-germany.beto.world/sitemap.xml
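These rules can be checked programmatically with Python's standard `urllib.robotparser`; as a minimal sketch, the snippet below parses a trimmed copy of the groups shown above and tests one blocked and one allowed user-agent (the URL is the analyzed domain's root):

```python
from urllib.robotparser import RobotFileParser

# Trimmed copy of the robots.txt groups shown in the preview above.
ROBOTS_TXT = """\
User-agent: CCBot
Disallow: /

User-agent: GPTBot
Allow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# CCBot falls under Disallow: /, so every path is blocked for it.
print(parser.can_fetch("CCBot", "https://driving-license-germany.beto.world/"))   # False
# GPTBot falls under Allow: /, so the whole site is open to it.
print(parser.can_fetch("GPTBot", "https://driving-license-germany.beto.world/"))  # True
```

Note that `parse()` matches a request against the most specific `User-agent` group, falling back to the `*` group only when no named group applies.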

User-agent Rules

User-agent(s)       Allowed paths               Disallowed paths
ccbot               No explicit Allow rules.    /
bytespider          No explicit Allow rules.    /
gptbot              /                           No explicit Disallow rules.
chatgpt-user        /                           No explicit Disallow rules.
oai-searchbot       /                           No explicit Disallow rules.
perplexitybot       /                           No explicit Disallow rules.
claudebot           /                           No explicit Disallow rules.
claude-searchbot    /                           No explicit Disallow rules.
claude-user         /                           No explicit Disallow rules.
anthropic-ai        /                           No explicit Disallow rules.
google-extended     /                           No explicit Disallow rules.
applebot            /                           No explicit Disallow rules.
applebot-extended   /                           No explicit Disallow rules.
bingbot             /                           No explicit Disallow rules.
meta-externalagent  /                           No explicit Disallow rules.
*                   /                           No explicit Disallow rules.

Blocked and Allowed Paths

Blocked paths
  • / (for CCBot and Bytespider)
Allowed paths
  • / (for all other user-agents)
Crawl-delay: No Crawl-delay directive detected.
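The crawl-delay finding can be reproduced with the same standard-library parser: `RobotFileParser.crawl_delay()` returns `None` when a group carries no `Crawl-delay` directive, as in this file. A minimal sketch:

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
# The wildcard group from this site's robots.txt has no Crawl-delay line.
parser.parse(["User-agent: *", "Allow: /"])

# With no Crawl-delay directive, crawl_delay() reports None.
print(parser.crawl_delay("*"))  # None
```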

Sitemaps Detected

  • https://driving-license-germany.beto.world/sitemap.xml

AI Crawler Policy

No explicit blocks were detected for common AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended). Only the data-scraping bots CCBot and Bytespider are fully disallowed.

Issues Found

  • Two user-agents (CCBot and Bytespider) have Disallow: /, which blocks the entire site for those crawlers.

Recommendations

  • Avoid blocking the entire site (Disallow: /); restrict only sensitive or low-value paths instead.
  • Document your AI crawler policy explicitly in robots.txt so future bots know how to treat your content.
  • Ensure important pages, CSS and JavaScript assets are crawlable so search engines can fully render your site.
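
Following the first recommendation, a site-wide block can be narrowed to specific directories. As an illustrative sketch (the paths /admin/ and /tmp/ are placeholders, not paths known to exist on this site):

```
# Restrict a scraper to everything except sensitive or low-value paths
User-agent: CCBot
Disallow: /admin/
Disallow: /tmp/
Allow: /
```

Whether a full block or a path-scoped one is appropriate depends on intent; the current file's comment ("Block data scrapers") suggests the site-wide block for CCBot and Bytespider is deliberate.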
