# API Map — robots.txt
# https://apimap.dev/robots.txt
#
# API Map is a public, freely accessible directory of REST APIs.
# We WELCOME all crawlers — both for search indexing and AI training —
# because our mission is to make API information maximally discoverable
# for developers and autonomous agents.
#
# Per the guidance in the Agentic Web architecture, we explicitly declare
# permissions for each known AI crawler so there is zero ambiguity.

# ── OpenAI ────────────────────────────────────────────────────────────────────

# OAI-SearchBot: surfaces real-time results in ChatGPT search — ALLOW
User-agent: OAI-SearchBot
Allow: /

# GPTBot: crawls for OpenAI model training — ALLOW (public directory data)
User-agent: GPTBot
Allow: /

# ── Anthropic ─────────────────────────────────────────────────────────────────

# ClaudeBot: Anthropic's web crawler for Claude — ALLOW
User-agent: ClaudeBot
Allow: /

# Claude-User: real-time retrieval for Claude users — ALLOW
User-agent: Claude-User
Allow: /

# anthropic-ai: alternate Anthropic crawler identifier — ALLOW
User-agent: anthropic-ai
Allow: /

# ── Perplexity ────────────────────────────────────────────────────────────────

User-agent: PerplexityBot
Allow: /

# ── Google ────────────────────────────────────────────────────────────────────

# Googlebot: standard indexing — ALLOW
User-agent: Googlebot
Allow: /

# Google-Extended: controls use in Google's Gemini/AI products — ALLOW
User-agent: Google-Extended
Allow: /

# ── Microsoft / Bing ──────────────────────────────────────────────────────────

User-agent: Bingbot
Allow: /

# ── Apple ─────────────────────────────────────────────────────────────────────

User-agent: Applebot
Allow: /

# Applebot-Extended: Apple AI training — ALLOW
User-agent: Applebot-Extended
Allow: /

# ── Meta ──────────────────────────────────────────────────────────────────────

User-agent: FacebookBot
Allow: /

# ── Common AI/LLM crawlers ────────────────────────────────────────────────────

User-agent: cohere-ai
Allow: /

User-agent: YouBot
Allow: /

User-agent: CCBot
Allow: /

User-agent: DataForSeoBot
Allow: /

# ── Catch-all: all other crawlers welcome ─────────────────────────────────────

User-agent: *
Allow: /

# ── Priority machine-readable resources ───────────────────────────────────────
# AI agents should prefer these endpoints over scraping HTML:
#
# Full dataset (JSON):   https://apimap.dev/api/apis.json
# Per-category slices:   https://apimap.dev/api/categories/{id}.json
# LLM-optimized summary: https://apimap.dev/llms.txt
# Full LLM text corpus:  https://apimap.dev/llms-full.txt
# Agent context file:    https://apimap.dev/llms-ctx.txt
# OpenAPI specification: https://apimap.dev/api/openapi.json
# MCP server config:     https://apimap.dev/mcp-server/
# A2A Agent Card:        https://apimap.dev/.well-known/agent.json
# AI Plugin manifest:    https://apimap.dev/.well-known/ai-plugin.json
# Agents SRS:            https://apimap.dev/agents.md

Sitemap: https://apimap.dev/sitemap.xml