Robots.txt Tester
Inspect how 789tabien.com manages crawler access: blocked paths, sitemap references, and AI crawler rules.
Robots.txt Status: Present
Score: 92/100 · Strong
Robots.txt Content Preview
```
# As a condition of accessing this website, you agree to abide by the following
# content signals:
# (a) If a content-signal = yes, you may collect content for the corresponding
# use.
# (b) If a content-signal = no, you may not collect content for the
# corresponding use.
# (c) If the website operator does not include a content signal for a
# corresponding use, the website operator neither grants nor restricts
# permission via content signal with respect to the corresponding use.
# The content signals and their meanings are:
# search: building a search index and providing search results (e.g., returning
# hyperlinks and short excerpts from your website's contents). Search does not
# include providing AI-generated search summaries.
# ai-input: inputting content into one or more AI models (e.g., retrieval
# augmented generation, grounding, or other real-time taking of content for
# generative AI search answers).
# ai-train: training or fine-tuning AI models.
# ANY RESTRICTIONS EXPRESSED VIA CONTENT SIGNALS ARE EXPRESS RESERVATIONS OF
# RIGHTS UNDER ARTICLE 4 OF THE EUROPEAN UNION DIRECTIVE 2019/790 ON COPYRIGHT
# AND RELATED RIGHTS IN THE DIGITAL SINGLE MARKET.
```
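The comment block above only defines the vocabulary; it does not itself grant or restrict anything. Under Cloudflare's Content Signals Policy, the signals are expressed with a `Content-Signal` line inside a user-agent group. A hypothetical example (the values shown are illustrative, not this site's actual policy):

```
User-Agent: *
Content-Signal: search=yes, ai-input=yes, ai-train=no
Allow: /
```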
User-agent Rules
No user-agent groups could be parsed from this robots.txt file: it consists entirely of comment lines, with no User-agent, Allow, or Disallow directives.
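A group is a run of consecutive `User-agent` lines followed by the rules that apply to them. A minimal sketch of how such groups can be extracted (`parse_groups` is an illustrative helper, not part of any standard library):

```python
def parse_groups(robots_txt):
    """Split robots.txt text into user-agent groups.

    Returns a list of dicts: {"agents": [...], "allow": [...], "disallow": [...]}.
    Comments and blank lines are skipped; consecutive User-agent lines
    share one group, per the Robots Exclusion Protocol (RFC 9309).
    """
    groups = []
    current = None
    expecting_agents = False   # True while we are still collecting User-agent lines
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()   # strip inline comments
        if not line or ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if current is None or not expecting_agents:
                current = {"agents": [], "allow": [], "disallow": []}
                groups.append(current)
                expecting_agents = True
            current["agents"].append(value)
        elif field in ("allow", "disallow") and current is not None:
            expecting_agents = False
            current[field].append(value)
    return groups
```

Running it over this site's robots.txt returns an empty list, which is exactly why the table above has no rows.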
Blocked and Allowed Paths
| Directive | Status |
|---|---|
| Blocked paths | No Disallow paths detected. |
| Allowed paths | No explicit Allow paths detected. |
| Crawl-delay | No Crawl-delay directive detected. |
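For reference, this is what those directives look like when present. A hypothetical snippet, not taken from this site:

```
User-agent: *
Disallow: /admin/        # block everything under /admin/
Allow: /admin/public/    # carve out an exception to the block
Crawl-delay: 10          # non-standard; ignored by Googlebot
```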
Sitemaps Detected
No Sitemap directives found in robots.txt.
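When present, a Sitemap directive is a standalone line that can appear anywhere in the file and takes an absolute URL. The URL below is hypothetical:

```
Sitemap: https://789tabien.com/sitemap.xml
```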
AI Crawler Policy
No explicit blocks were detected for common AI crawlers (GPTBot, ChatGPT-User, ClaudeBot, PerplexityBot, Google-Extended).
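No explicit blocks means these crawlers fall back to the rules for `User-agent: *` (here, none, so everything is crawlable). If you did want to opt out of AI crawling, the usual pattern is one group per bot. A hypothetical sketch:

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```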
Issues Found
- robots.txt does not reference any sitemap URLs.
Recommendations
- Add a Sitemap directive in robots.txt pointing to your primary XML sitemap.
- Declare your AI crawler policy explicitly in robots.txt (e.g., per-bot Disallow rules or content signals) so AI crawlers know how they may use your content.
- Consider adding explicit Allow rules for important sections to clarify crawling intent for complex setups.
- Ensure important pages, CSS and JavaScript assets are crawlable so search engines can fully render your site.
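The last recommendation can be checked programmatically with Python's standard-library `urllib.robotparser`. A minimal sketch; the rules below are hypothetical, since this site's robots.txt currently contains no directives at all:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules for illustration only.
robots_txt = """\
User-agent: *
Disallow: /admin/
Allow: /assets/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Verify that CSS/JS assets stay crawlable while /admin/ is blocked.
print(rp.can_fetch("*", "https://789tabien.com/assets/app.css"))  # → True
print(rp.can_fetch("*", "https://789tabien.com/admin/panel"))     # → False
```

Running a check like this against each important page and asset URL confirms that nothing search engines need for rendering is accidentally disallowed.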