Robots.txt Tester
Inspect how youtu.be controls crawler access: blocked paths, sitemap references, and AI crawler rules.
Robots.txt Status
Present
Score: 92/100 · Strong
Robots.txt Content Preview
```
# robots.txt file for youtu.be
User-agent: *
Disallow: /api/
Disallow: /comment
Disallow: /feeds/videos.xml
Disallow: /get_video
Disallow: /get_video_info
Disallow: /get_midroll_info
Disallow: /live_chat
Disallow: /login
Disallow: /results
Disallow: /signup
Disallow: /t/terms
Disallow: /timedtext_video
Disallow: /verify_age
Disallow: /watch_ajax
Disallow: /watch_fragments_ajax
Disallow: /watch_popup
Disallow: /watch_queue_ajax
Disallow: /youtubei/
Allow: /apple-app-site-association
Allow: /.well-known/*
```
User-agent Rules
| User-agent(s) | Allowed paths | Disallowed paths |
|---|---|---|
| * | /apple-app-site-association, /.well-known/* | /api/, /comment, /feeds/videos.xml, /get_video, /get_video_info, /get_midroll_info, /live_chat, /login, /results, /signup, /t/terms, /timedtext_video, /verify_age, /watch_ajax, /watch_fragments_ajax, /watch_popup, /watch_queue_ajax, /youtubei/ |
Blocked and Allowed Paths
| Directive | Paths / value |
|---|---|
| Blocked paths | /api/, /comment, /feeds/videos.xml, /get_video, /get_video_info, /get_midroll_info, /live_chat, /login, /results, /signup, /t/terms, /timedtext_video, /verify_age, /watch_ajax, /watch_fragments_ajax, /watch_popup, /watch_queue_ajax, /youtubei/ |
| Allowed paths | /apple-app-site-association, /.well-known/* |
| Crawl-delay | No Crawl-delay directive detected. |
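As a sketch, rules like those above can be checked programmatically with Python's standard-library `urllib.robotparser`; the rule subset and test URLs here are illustrative, not fetched from the live file:

```python
from urllib.robotparser import RobotFileParser

# Abridged subset of the youtu.be rules shown above.
ROBOTS_TXT = """\
User-agent: *
Disallow: /api/
Disallow: /results
Disallow: /login
Allow: /apple-app-site-association
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A URL under a Disallow prefix is blocked for every crawler ("*")...
print(rp.can_fetch("*", "https://youtu.be/api/videos"))   # False
# ...while a URL with no matching rule is crawlable by default.
print(rp.can_fetch("*", "https://youtu.be/dQw4w9WgXcQ"))  # True
```

Note that `urllib.robotparser` implements only the basic exclusion protocol; wildcard patterns such as `/.well-known/*` are not expanded, so results for those rules may differ from how Googlebot interprets them.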
Sitemaps Detected
No Sitemap directives found in robots.txt.
AI Crawler Policy
No explicit blocks were detected for common AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended).
Issues Found
- robots.txt does not reference any sitemap URLs.
Recommendations
- Add a Sitemap directive in robots.txt pointing to your primary XML sitemap.
- Document your AI crawler policy explicitly in robots.txt so AI bots know how they may use your content.
- Ensure important pages, CSS and JavaScript assets are crawlable so search engines can fully render your site.
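The sitemap and AI-crawler recommendations above could be implemented with a fragment like the following; the sitemap URL and the choice of blocked AI crawlers are illustrative, not taken from the live file:

```
# Declare the primary XML sitemap (hypothetical URL).
Sitemap: https://youtu.be/sitemap.xml

# Explicit AI crawler policy: for example, block training crawlers entirely.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Per the Robots Exclusion Protocol, `Sitemap` lines apply file-wide, while each `User-agent` group applies only to the crawlers named in that group.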