AI crawler access audit
Test access for GPTBot, ClaudeBot, PerplexityBot, Google-Extended, and OAI-SearchBot, then review robots.txt, headers, meta robots, sitemap, and llms.txt in one transparent report.
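The per-crawler robots.txt test can be sketched with Python's standard robots parser. This is a minimal illustration, not the tool's implementation; the sample robots.txt content and the `audit_robots` helper are assumptions for the example, while the crawler names come from the report above.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: blocks two AI bots site-wide, leaves the rest
# governed by the wildcard group.
SAMPLE_ROBOTS = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Disallow: /private/
"""

AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot",
               "Google-Extended", "OAI-SearchBot"]

def audit_robots(robots_txt: str, url: str) -> dict[str, bool]:
    """Return {crawler: allowed?} for the exact URL path, not just the host."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {bot: rp.can_fetch(bot, url) for bot in AI_CRAWLERS}

report = audit_robots(SAMPLE_ROBOTS, "https://example.com/blog/post")
# GPTBot and Google-Extended are blocked; the others fall through to the
# wildcard group, which only disallows /private/.
```

Testing at the exact path matters because a wildcard group can allow the homepage while a longer Disallow rule blocks the specific page being audited.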
What it checks
A crawler policy can look open while a page still blocks discovery through headers, meta tags, redirects, thin rendered content, or missing AI-readable discovery files.
Parse robots.txt for major AI search, training, and classic search crawlers at the exact URL path.
Review HTTP status, redirects, meta robots, X-Robots-Tag, canonical, readable text, and JSON-LD.
Check sitemap, llms.txt, and llms-full.txt, then surface copy-ready fixes without promising guaranteed AI visibility.
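The header and meta checks above can be sketched as follows. This is a simplified illustration under stated assumptions: the `headers` dict is assumed to hold already-normalized header names, and `blocking_signals` is a hypothetical helper, not this tool's API. It flags only the `noindex` directive; a real audit would also weigh `nofollow`, `noarchive`, and bot-specific rules.

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collect content of <meta name="robots"> and bot-specific meta tags
    (e.g. name="gptbot"), lowercased for comparison."""
    def __init__(self):
        super().__init__()
        self.directives = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = (a.get("name") or "").lower()
        if name == "robots" or name.endswith("bot"):
            self.directives[name] = (a.get("content") or "").lower()

def blocking_signals(headers: dict, html: str) -> list[str]:
    """Return human-readable evidence for each noindex signal found."""
    signals = []
    xrt = headers.get("X-Robots-Tag", "").lower()
    if "noindex" in xrt:
        signals.append(f"X-Robots-Tag: {xrt}")
    parser = MetaRobotsParser()
    parser.feed(html)
    for name, content in parser.directives.items():
        if "noindex" in content:
            signals.append(f'<meta name="{name}" content="{content}">')
    return signals
```

For example, a page served with `X-Robots-Tag: noindex, nofollow` produces one signal quoting that header verbatim, so every finding points back to visible evidence.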
Boundaries
This tool diagnoses public access signals that site owners control. It does not bypass bot defenses, log into websites, or promise citation in any AI answer surface.
Find accidental blocks that may prevent AI search or retrieval systems from reading public pages.
See which training or retrieval bots you are allowing, then choose a policy intentionally.
Every recommendation ties back to a visible rule, header, tag, status, or missing public file.
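The discovery files the audit looks for live at well-known root paths. A minimal sketch of deriving them from any page URL, assuming the root-level placement used by the robots.txt and llms.txt conventions (note a sitemap may instead be declared via a `Sitemap:` line inside robots.txt):

```python
from urllib.parse import urlsplit, urlunsplit

DISCOVERY_FILES = ["robots.txt", "sitemap.xml", "llms.txt", "llms-full.txt"]

def discovery_urls(page_url: str) -> dict[str, str]:
    """Map each public discovery file to its root-level URL on the page's host,
    dropping the page's own path, query, and fragment."""
    parts = urlsplit(page_url)
    return {
        name: urlunsplit((parts.scheme, parts.netloc, "/" + name, "", ""))
        for name in DISCOVERY_FILES
    }
```

For a page like `https://example.com/blog/post?x=1`, every candidate file resolves to the host root, which is why a deep page can still be audited against site-wide discovery signals.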