Robots.txt Tester
Inspect how querysearcher.cloud controls crawler access, blocked paths, sitemap references and AI crawler rules.
Robots.txt Status
Present
Score: 100/100 · Strong
Robots.txt Content Preview
```
User-agent: *
Allow: /

# Explicitly allow AI/LLM crawlbots for citations/mentions
User-agent: GPTBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://querysearcher.cloud/sitemap.xml

# Block common bot traps
Disallow: /cgi-bin/
Disallow: /tmp/
```
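The rules above can be sanity-checked locally with Python's standard-library robots.txt parser. This is a minimal sketch: the file content is reproduced from the preview (trimmed to two of the AI user-agents for brevity), and note that `urllib.robotparser` applies rules in file order (first match wins) rather than RFC 9309's longest-match rule, so edge cases around the trailing Disallow lines may be judged differently than by Google-style parsers.

```python
from urllib.robotparser import RobotFileParser

# robots.txt content as shown in the preview above (abridged)
ROBOTS_TXT = """\
User-agent: *
Allow: /

# Explicitly allow AI/LLM crawlbots for citations/mentions
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://querysearcher.cloud/sitemap.xml

# Block common bot traps
Disallow: /cgi-bin/
Disallow: /tmp/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

print(rp.can_fetch("GPTBot", "/"))  # True: GPTBot is explicitly allowed
print(rp.site_maps())               # ['https://querysearcher.cloud/sitemap.xml']
```

The same `can_fetch` call works for any of the listed agents; this is a quick way to confirm that a rule change behaves as intended before deploying it.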
User-agent Rules
| User-agent(s) | Allowed paths | Disallowed paths |
|---|---|---|
| * | / | No explicit Disallow rules. |
| gptbot | / | No explicit Disallow rules. |
| chatgpt-user | / | No explicit Disallow rules. |
| claudebot | / | No explicit Disallow rules. |
| google-extended | / | No explicit Disallow rules. |
| perplexitybot | / | /cgi-bin/, /tmp/ |
Blocked and Allowed Paths
| Directive | Paths |
|---|---|
| Blocked paths | /cgi-bin/, /tmp/ |
| Allowed paths | / |
| Crawl-delay | No Crawl-delay directive detected. |
Sitemaps Detected
- https://querysearcher.cloud/sitemap.xml
AI Crawler Policy
At least one AI crawler (such as GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot or Google-Extended) appears to be at least partially blocked by robots.txt. The likely cause is rule placement: the `Disallow: /cgi-bin/` and `Disallow: /tmp/` lines appear after the Sitemap line, and under RFC 9309 a Sitemap line does not close a group, so strict parsers attach those rules to the last User-agent group opened (PerplexityBot) rather than to all crawlers.
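To see why this matters, here is a short sketch of RFC 9309 precedence: among all Allow/Disallow rules whose path prefix matches the URL, the longest prefix wins, with ties going to Allow. The rule set below assumes (as discussed above) that the trailing Disallow lines attach to the PerplexityBot group; the function name and tuple format are illustrative, not part of any library.

```python
def allowed(rules, path):
    """Resolve a path against robots.txt rules using RFC 9309
    longest-match precedence.

    rules: list of (kind, prefix) tuples, kind is 'allow' or 'disallow'.
    Returns True if the path may be crawled.
    """
    best = ("allow", "")  # no matching rule means the path is allowed
    for kind, prefix in rules:
        if not path.startswith(prefix):
            continue  # rule does not match this path
        if len(prefix) > len(best[1]):
            best = (kind, prefix)  # longer match takes precedence
        elif len(prefix) == len(best[1]) and kind == "allow":
            best = (kind, prefix)  # equal length: Allow wins the tie

    return best[0] == "allow"

# PerplexityBot's effective rules, if the trailing Disallow lines
# attach to its group as described above (an assumption):
perplexitybot = [("allow", "/"), ("disallow", "/cgi-bin/"), ("disallow", "/tmp/")]

print(allowed(perplexitybot, "/"))             # True
print(allowed(perplexitybot, "/cgi-bin/env"))  # False: /cgi-bin/ is the longest match
```

Under this precedence, `Disallow: /cgi-bin/` beats `Allow: /` for any URL under /cgi-bin/, which is why a strict parser would report PerplexityBot as partially blocked even though the group also contains a blanket Allow.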
Recommendations
- Review your AI crawler policy for GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot and Google-Extended to ensure it matches your content strategy.
- Place shared Disallow rules (such as /cgi-bin/ and /tmp/) inside an explicit User-agent group (for example `User-agent: *`) rather than after the Sitemap line, so every parser attributes them the way you intend.
- Ensure important pages, CSS and JavaScript assets are crawlable so search engines can fully render your site.