# API Map — Full API Reference

> Source: https://apimap.dev | Machine-readable JSON: https://apimap.dev/api/apis.json
> Generated: 2026-03-30T23:25:37.790Z
> License: CC BY 4.0 (free to index and train on)

This file contains the complete API Map dataset in plain text. It is optimized for LLM ingestion — no JavaScript execution required. Use `authExample` as a literal header template. Replace placeholder values (YOUR_API_KEY, sk-..., etc.) with real credentials.

---

## Categories (14)

- ai: AI & LLM (90 APIs)
- payments: Payments (29 APIs)
- maps: Maps & Location (154 APIs)
- weather: Weather (56 APIs)
- social: Social Media (45 APIs)
- communication: Communication (54 APIs)
- search: Search (39 APIs)
- storage: Storage & Database (70 APIs)
- auth: Authentication (45 APIs)
- entertainment: Entertainment (226 APIs)
- finance: Finance (138 APIs)
- developer: Developer Tools (333 APIs)
- ecommerce: E-Commerce (26 APIs)
- security: Security (44 APIs)

---

## AI & LLM (90 APIs)

### OpenAI API

Provider: OpenAI
Base URL: https://api.openai.com/v1
Docs: https://platform.openai.com/docs
Auth type: Bearer Token
Auth example: Authorization: Bearer sk-proj-...
Has free tier: No
Starting at: Pay per use
Free quota: $5 free credit for new accounts
Rate limit: 3,500 RPM · 200,000 TPM (Tier 1)
Tags: gpt-4o, o1, vision, embeddings, images, audio

Description: OpenAI provides state-of-the-art AI models via a unified REST API. GPT-4o handles text, vision, and audio; o1 excels at advanced reasoning; DALL·E 3 generates and edits images; Whisper transcribes speech; and embedding models power semantic search and retrieval.

Auth: Pass your API key as a Bearer token in the Authorization header. Keys are prefixed with sk-.

Pricing: GPT-4o: $2.50/M input, $10/M output. GPT-4.1: $2/M in, $8/M out. GPT-5: $1.25/M in, $10/M out. o3: $2/M in, $8/M out. o4-mini: $1.10/M in, $4.40/M out. GPT-4o mini: $0.15/M in, $0.60/M out. Batch API: 50% off.

Endpoints:
- POST /chat/completions · Generate chat completions (GPT-4o, o1, etc.)
- POST /embeddings · Create vector embeddings from text
- POST /images/generations · Generate images with DALL·E 3
- POST /images/edits · Edit existing images using a mask
- POST /audio/transcriptions · Transcribe audio files with Whisper
- POST /audio/speech · Convert text to speech (TTS)
- GET /models · List all available models

Sample request:

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Say hello!"}
    ],
    "max_tokens": 100
  }'
```

Sample response:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o-2024-08-06",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello! How can I help you today?"},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29}
}
```

---

### Claude API

Provider: Anthropic
Base URL: https://api.anthropic.com/v1
Docs: https://docs.anthropic.com
Auth type: API Key Header
Auth example: x-api-key: sk-ant-api03-... (plus anthropic-version: 2023-06-01)
Has free tier: No
Starting at: Pay per use
Free quota: None
Rate limit: 2,000 RPM · 160,000 TPM (default)
Tags: claude-sonnet-4-6, tool-use, vision, 200k-context, computer-use

Description: Anthropic's Claude API provides access to the Claude 4.x family of models. Claude Sonnet 4.6 is ideal for most tasks; Claude Opus 4.6 handles the most complex reasoning; Claude Haiku 4.5 is fastest and most affordable. All models support extended context windows, vision inputs, tool use, and computer use. Batch API (50% off) and prompt caching (90% off cached inputs) are available.

Auth: Pass your Anthropic API key in the x-api-key header. Also include the anthropic-version header.

Pricing: Claude Haiku 4.5: $1/M input, $5/M output. Claude Sonnet 4.6: $3/M in, $15/M out. Claude Opus 4.6: $5/M in, $25/M out.
Batch API: 50% off. Prompt caching: 90% off cached inputs.

Endpoints:
- POST /messages · Send a message and receive a response
- POST /messages (streaming) · Stream response tokens as SSE
- GET /models · List available Claude models

Sample request:

```bash
curl https://api.anthropic.com/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Sample response:

```json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [{"type": "text", "text": "The capital of France is Paris."}],
  "model": "claude-3-5-sonnet-20241022",
  "stop_reason": "end_turn",
  "usage": {"input_tokens": 14, "output_tokens": 10}
}
```

---

### Gemini API

Provider: Google DeepMind
Base URL: https://generativelanguage.googleapis.com/v1beta
Docs: https://ai.google.dev/gemini-api/docs
Auth type: API Key Header
Auth example: x-goog-api-key: AIzaSy...
Has free tier: Yes
Starting at: Pay per use
Free quota: 15 RPM · 1M TPD (Gemini 2.5 Flash-Lite free tier)
Rate limit: 15 RPM (free) · 2,000 RPM (paid)
Tags: gemini-2.5-flash, multimodal, 2M-context, grounding, code

Description: Google's Gemini API provides access to Gemini 2.5 Pro and Gemini 2.5 Flash — Google's latest multimodal AI models. Supports text, images, audio, video, and code with up to 2M-token context windows. Integrated with Google Search grounding for real-time information access. Note: Gemini 2.0 Flash is deprecated and will be shut down June 1, 2026.

Auth: Pass your Google API key as a query parameter or in the x-goog-api-key header.

Pricing: Gemini 2.5 Flash: $0.15/M input, $0.60/M output. Gemini 2.5 Flash-Lite: $0.10/M input, $0.40/M output. Gemini 2.5 Pro: $1.25/M input, $5/M output (>200K context). Batch: 50% off.
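The per-million-token rates above make per-request costs easy to estimate. A minimal sketch, assuming the prices listed in this entry and ignoring the >200K-context tier for 2.5 Pro; the function name is illustrative:

```python
# Estimate Gemini request cost from the per-million-token rates listed above.
# Rates are (input $/M tokens, output $/M tokens); the >200K-context tier
# for Gemini 2.5 Pro is not modeled here.
RATES = {
    "gemini-2.5-flash": (0.15, 0.60),
    "gemini-2.5-flash-lite": (0.10, 0.40),
    "gemini-2.5-pro": (1.25, 5.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD, rounded to 6 decimal places."""
    rate_in, rate_out = RATES[model]
    cost = input_tokens / 1_000_000 * rate_in + output_tokens / 1_000_000 * rate_out
    return round(cost, 6)

# 10K prompt tokens + 2K response tokens on Gemini 2.5 Flash:
print(estimate_cost("gemini-2.5-flash", 10_000, 2_000))  # 0.0027
```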
Endpoints:
- POST /models/{model}:generateContent · Generate text and multimodal content
- POST /models/{model}:streamGenerateContent · Stream generated content as SSE
- POST /models/{model}:embedContent · Create text embeddings
- GET /models · List all available models

Sample request:

```bash
curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent" \
  -H "x-goog-api-key: $GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "parts": [{"text": "Explain quantum entanglement simply."}]
    }]
  }'
```

Sample response:

```json
{
  "candidates": [{
    "content": {
      "parts": [{"text": "Quantum entanglement is when two particles become linked..."}],
      "role": "model"
    },
    "finishReason": "STOP"
  }],
  "usageMetadata": {"promptTokenCount": 9, "candidatesTokenCount": 120, "totalTokenCount": 129}
}
```

---

### Mistral AI API

Provider: Mistral AI
Base URL: https://api.mistral.ai/v1
Docs: https://docs.mistral.ai
Auth type: Bearer Token
Auth example: Authorization: Bearer ...
Has free tier: No
Starting at: from $0.10/mo
Free quota: Experimental models free during preview
Rate limit: 1 request/s · 500,000 TPM
Tags: mistral-large-2, codestral, openai-compatible, function-calling

Description: Mistral AI offers frontier open and commercial models via a REST API. Mistral Large 2 rivals top closed models; Codestral specializes in code generation; Mistral Embed handles semantic search. The API is OpenAI-compatible — simply change the base URL and model name.

Auth: Standard Bearer token authentication using your Mistral API key.

Pricing: Mistral Medium 3: $0.40/M input, $2.00/M output. Mistral Small 3.1: $0.03/M input, $0.11/M output. Mistral Nemo: $0.02/M. Codestral: $0.20/M input, $0.60/M output. Mistral Large 2: $2.00/M input, $6.00/M output. Free experimental plan available.
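Since the API is OpenAI-compatible, switching providers amounts to changing the base URL, API key, and model name. A sketch of that idea, building (but not sending) a request; the helper name and returned dict shape are illustrative, not part of any SDK:

```python
import json

# Build an OpenAI-style chat request for any OpenAI-compatible provider.
# Only the base URL, API key, and model name differ between providers.
def build_chat_request(base_url: str, api_key: str, model: str, user_message: str) -> dict:
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": user_message}],
        }),
    }

# The same call shape, pointed at Mistral instead of OpenAI:
req = build_chat_request(
    "https://api.mistral.ai/v1", "YOUR_MISTRAL_API_KEY",
    "mistral-large-latest", "Write a haiku about APIs.",
)
print(req["url"])  # https://api.mistral.ai/v1/chat/completions
```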
Endpoints:
- POST /chat/completions · OpenAI-compatible chat completions
- POST /embeddings · Generate text embeddings
- POST /fim/completions · Fill-in-the-middle for code (Codestral)
- GET /models · List available models

Sample request:

```bash
curl https://api.mistral.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MISTRAL_API_KEY" \
  -d '{
    "model": "mistral-large-latest",
    "messages": [{"role": "user", "content": "Write a haiku about APIs."}]
  }'
```

Sample response:

```json
{
  "id": "cmpl-e5cc70bb28c444948073e77776eb30ef",
  "object": "chat.completion",
  "model": "mistral-large-latest",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Keys unlock the gate / Data flows through silent paths / Endpoints never sleep"
    },
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 15, "completion_tokens": 18}
}
```

---

### Stability AI API

Provider: Stability AI
Base URL: https://api.stability.ai/v2beta
Docs: https://platform.stability.ai/docs/api-reference
Auth type: Bearer Token
Auth example: Authorization: Bearer sk-...
Has free tier: Yes
Starting at: from $0.07/mo
Free quota: 25 free credits on signup
Rate limit: 150 requests/10s
Tags: stable-diffusion, SDXL, text-to-image, image-to-image, video

Description: Stability AI's REST API provides access to Stable Diffusion 3.5, SDXL, and Stable Video Diffusion. Generate high-quality images from text prompts, transform existing images, upscale, remove backgrounds, and generate short video clips.

Auth: Bearer token authentication using your Stability AI API key.

Pricing: Credits: $10 = 1,000 credits. SD3.5 Large: 6.5 credits/image. SDXL 1.0: 0.2 credits/image. Video: 20 credits/clip.
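At $10 per 1,000 credits, each credit costs $0.01, so the credit prices above map directly to dollars. A quick sketch of that conversion (the item names are ad-hoc labels, not API identifiers):

```python
# Convert Stability AI credit prices into dollar costs ($10 = 1,000 credits).
USD_PER_CREDIT = 10 / 1_000  # $0.01 per credit

CREDITS = {
    "sd3.5-large-image": 6.5,  # credits per SD3.5 Large image
    "sdxl-1.0-image": 0.2,     # credits per SDXL 1.0 image
    "video-clip": 20,          # credits per video clip
}

def usd_cost(item: str, count: int = 1) -> float:
    """Dollar cost for `count` generations of the given item."""
    return round(CREDITS[item] * count * USD_PER_CREDIT, 4)

print(usd_cost("sd3.5-large-image"))  # 0.065 per SD3.5 Large image
print(usd_cost("video-clip", 5))      # 1.0 for five video clips
```

The 25 free signup credits therefore cover roughly 125 SDXL images or about 3 SD3.5 Large images.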
Endpoints:
- POST /stable-image/generate/sd3 · Text-to-image with Stable Diffusion 3.5
- POST /stable-image/generate/core · Core text-to-image generation
- POST /stable-image/upscale/conservative · Upscale an image up to 4x
- POST /stable-image/edit/remove-background · Remove image background
- POST /image-to-video · Generate video from an image

Sample request:

```bash
curl https://api.stability.ai/v2beta/stable-image/generate/sd3 \
  -H "Authorization: Bearer $STABILITY_API_KEY" \
  -H "Accept: image/*" \
  -F prompt="A photorealistic mountain lake at golden hour" \
  -F aspect_ratio="16:9" \
  -F model="sd3.5-large" \
  --output image.png
```

Sample response:

```
Raw image bytes (PNG/JPEG) in the response body. Headers include:
Content-Type: image/png
Stability-Finish-Reason: SUCCESS
Stability-Seed: 3456789
```

---

### Perplexity AI API

Provider: Perplexity AI
Base URL: https://api.perplexity.ai
Docs: https://docs.perplexity.ai
Auth type: Bearer Token
Auth example: Authorization: Bearer pplx-...
Has free tier: No
Starting at: from $0.20/mo
Free quota: $5 free credits for new accounts
Rate limit: 50 requests/min
Tags: search, online, citations, real-time, research, llm

Description: Perplexity AI's API provides access to its search-augmented language models. Unlike standard LLMs, Perplexity models fetch current web data to ground responses with citations, making them ideal for research, news summarization, and fact-checking workflows. Supports both online models (with live search) and offline models (standard generation). OpenAI-compatible API format.

Auth: Generate an API key in Perplexity Settings → API. Pass it as a Bearer token in the Authorization header.

Pricing: Sonar (online): $1 per 1M input tokens + $5 per 1,000 search queries. Sonar Pro: $3/M input + $5/1k searches. Offline models from $0.20/M tokens.
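Sonar billing adds a per-search charge on top of token charges, so cost estimates need both figures. A rough sketch using the listed rates; output-token pricing is not listed in this entry, so it is omitted here:

```python
# Estimate Perplexity Sonar cost: per-M input tokens plus per-1k search queries.
# Output-token charges are not listed in the pricing line above and are omitted.
SONAR_RATES = {
    "sonar": (1.00, 5.00),      # ($/M input tokens, $/1k searches)
    "sonar-pro": (3.00, 5.00),
}

def sonar_cost(model: str, input_tokens: int, searches: int) -> float:
    per_m_tokens, per_1k_searches = SONAR_RATES[model]
    tokens_usd = input_tokens / 1_000_000 * per_m_tokens
    search_usd = searches / 1_000 * per_1k_searches
    return round(tokens_usd + search_usd, 4)

# 500K input tokens plus 100 searches on Sonar:
print(sonar_cost("sonar", 500_000, 100))  # 1.0 ($0.50 tokens + $0.50 searches)
```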
Endpoints:
- POST /chat/completions · Chat completions with optional web search grounding

Sample request:

```bash
curl "https://api.perplexity.ai/chat/completions" \
  -H "Authorization: Bearer $PERPLEXITY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"sonar","messages":[{"role":"user","content":"What are the latest AI model releases in 2026?"}]}'
```

Sample response:

```json
{
  "id": "gen-abc123",
  "model": "sonar",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "In 2026, notable AI releases include... [1][2]"
    }
  }],
  "citations": ["https://techcrunch.com/...", "https://theverge.com/..."]
}
```

---

### Replicate API

Provider: Replicate
Base URL: https://api.replicate.com/v1
Docs: https://replicate.com/docs
Auth type: Bearer Token
Auth example: Authorization: Bearer r8_...
Has free tier: No
Starting at: Pay per use
Free quota: No free tier (pay-as-you-go)
Rate limit: No hard limit (scales with your account tier)
Tags: stable-diffusion, llama, image-generation, video, audio, open-source, mlops

Description: Replicate makes it easy to run machine learning models with a single API call. Access thousands of open-source models for image generation (SDXL, Flux, ControlNet), video generation (Stable Video Diffusion), audio (Whisper, MusicGen), language (Llama 3, Mistral), and specialized tasks like upscaling, background removal, and object detection. Deploy private models and fine-tunes too.

Auth: Create an API token at replicate.com/account/api-tokens. Pass it as a Bearer token in the Authorization header.

Pricing: Billed by the second per hardware tier: CPU $0.000225/sec, Nvidia T4 $0.0012/sec, A40 $0.0023/sec, A100 $0.0115/sec. Image gen ~$0.003–0.012/image.
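Because billing is per second of hardware time, cost is simply rate times runtime. A small sketch using the listed rates (the 20-second runtime is illustrative; actual runtimes vary by model and input):

```python
# Replicate bills by the second, per hardware tier ($/sec, rates listed above).
RATES_PER_SEC = {
    "cpu": 0.000225,
    "t4": 0.0012,
    "a40": 0.0023,
    "a100": 0.0115,
}

def run_cost(hardware: str, seconds: float) -> float:
    """Dollar cost of running a prediction for `seconds` on the given tier."""
    return round(RATES_PER_SEC[hardware] * seconds, 6)

# A hypothetical 20-second image-generation run on an A100:
print(run_cost("a100", 20))  # 0.23
```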
Endpoints:
- POST /predictions · Run a model and create a prediction
- GET /predictions/{prediction_id} · Get prediction status and output
- GET /models · List available public models
- GET /models/{owner}/{model_name} · Get model details and latest version
- POST /models/{owner}/{model_name}/versions/{id}/predictions · Run a specific model version

Sample request:

```bash
curl "https://api.replicate.com/v1/predictions" \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"version":"db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf","input":{"prompt":"A photo of a cat wearing a beret in Paris"}}'
```

Sample response:

```json
{
  "id": "xyz789abc",
  "status": "starting",
  "model": "stability-ai/sdxl",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/xyz789abc",
    "cancel": "https://api.replicate.com/v1/predictions/xyz789abc/cancel"
  }
}
```

---

### Groq API

Provider: Groq
Base URL: https://api.groq.com/openai/v1
Docs: https://console.groq.com/docs
Auth type: Bearer Token
Auth example: Authorization: Bearer gsk_...
Has free tier: Yes
Starting at: Pay per use
Free quota: Free tier with rate limits; no credit card required
Rate limit: 30 requests/min (free) · Higher on pay-as-you-go
Tags: llama, mixtral, fast-inference, open-source, openai-compatible, lpu

Description: Groq runs open-source LLMs at speeds previously impossible: 800+ tokens per second on its custom Language Processing Unit (LPU) hardware. The API is OpenAI-compatible, so any code targeting OpenAI's chat completions endpoint works with a one-line change. Models include Llama 3.3 70B, Llama 3.1 8B, Mixtral 8x7B, and Gemma 2. Ideal for latency-critical applications like voice assistants, real-time chat, and interactive coding tools.

Auth: Generate an API key at console.groq.com. Pass it as a Bearer token in the Authorization header. The endpoint is OpenAI-compatible.

Pricing: Llama 3.1 8B Instant: $0.05/M input, $0.08/M output.
Llama 3.3 70B Versatile: $0.59/M input, $0.79/M output. Llama 3.1 70B: $0.59/M input, $0.79/M output. Batch API: 50% off.

Endpoints:
- POST /chat/completions · Chat completions (OpenAI-compatible)
- GET /models · List available models and their context lengths

Sample request:

```bash
curl "https://api.groq.com/openai/v1/chat/completions" \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"llama-3.3-70b-versatile","messages":[{"role":"user","content":"Explain quantum computing in one paragraph."}]}'
```

Sample response:

```json
{
  "id": "chatcmpl-abc123",
  "model": "llama-3.3-70b-versatile",
  "choices": [{
    "message": {"role": "assistant", "content": "Quantum computing harnesses..."},
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 18, "completion_tokens": 87, "total_tokens": 105},
  "x_groq": {"id": "req_abc", "usage": {"queue_time": 0.0002}}
}
```

---

### HuggingFace Inference API

Provider: HuggingFace
Base URL: https://api-inference.huggingface.co/models
Docs: https://huggingface.co/docs/api-inference
Auth type: Bearer Token
Auth example: Authorization: Bearer hf_...
Has free tier: Yes
Starting at: Free tier available
Free quota: Rate-limited free inference on public models
Rate limit: Varies by model and plan (free tier is rate-limited)
Tags: open-source, transformers, nlp, computer-vision, audio, diffusion, bert, llama

Description: The HuggingFace Inference API gives serverless access to the entire HuggingFace Hub: 200,000+ models for text generation, text classification, summarization, translation, question answering, image generation, image classification, object detection, speech recognition, and audio synthesis. Models range from tiny to frontier-scale. The Inference Endpoints product offers dedicated deployment for production workloads.

Auth: Create a User Access Token at huggingface.co/settings/tokens (read scope is sufficient). Pass it as a Bearer token in the Authorization header.
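For text generation, the serverless endpoint expects an `inputs` string plus an optional `parameters` object, as in this entry's sample request. A sketch that assembles (but does not send) such a request; the helper name and returned dict shape are illustrative:

```python
import json

# Build a serverless Inference API request for a text-generation model.
# The payload uses the `inputs` + `parameters` shape shown in this entry.
BASE = "https://api-inference.huggingface.co/models"

def build_generation_request(model_id: str, token: str, prompt: str,
                             max_new_tokens: int = 50) -> dict:
    return {
        "url": f"{BASE}/{model_id}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "inputs": prompt,
            "parameters": {"max_new_tokens": max_new_tokens},
        }),
    }

req = build_generation_request(
    "meta-llama/Meta-Llama-3-8B-Instruct", "hf_YOUR_TOKEN",
    "What is the capital of France?",
)
print(req["url"])
```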
Pricing: Free: serverless inference on public models (rate-limited). PRO: $9/mo. Dedicated Inference Endpoints from $0.06/hr. Enterprise: custom.

Endpoints:
- POST /{model_id} · Run any model with task-specific input/output
- POST /openai/v1/chat/completions · OpenAI-compatible chat completions (TGI models)

Sample request:

```bash
# Text generation with Llama 3
curl "https://api-inference.huggingface.co/models/meta-llama/Meta-Llama-3-8B-Instruct" \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "What is the capital of France?", "parameters": {"max_new_tokens": 50}}'
```

Sample response:

```json
[{
  "generated_text": "What is the capital of France? The capital of France is Paris."
}]
```

---

### Cohere API

Provider: Cohere
Base URL: https://api.cohere.com/v2
Docs: https://docs.cohere.com
Auth type: Bearer Token
Auth example: Authorization: Bearer YOUR_COHERE_API_KEY
Has free tier: Yes
Starting at: Pay per use
Free quota: Trial key with limited free usage
Rate limit: 100 API calls/min (trial) · Higher on production
Tags: rag, embeddings, rerank, enterprise, command-r, grounding, citations

Description: Cohere provides production-ready language AI for enterprise teams. Command R+ leads on RAG benchmarks, making it ideal for grounded enterprise search. The Embed v3 model produces high-quality text embeddings for semantic search and retrieval. The Rerank endpoint improves search accuracy by reordering candidate results. All models have built-in tool use (function calling) and citation support for RAG pipelines.

Auth: Create an API key at dashboard.cohere.com. Pass it as a Bearer token in the Authorization header.

Pricing: Command R+: $2.50/M in, $10/M out. Command R: $0.15/M in, $0.60/M out. Command R7B: $0.0375/M in, $0.15/M out. Embed: $0.10/M tokens. Rerank: $2/1k queries.
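The mixed units above (per-M tokens for chat and embed, per-1k queries for rerank) can be folded into one estimator. A sketch assuming the listed rates; function names are illustrative:

```python
# Rough Cohere cost estimator from the rates listed above.
# Chat rates are ($/M input, $/M output); Embed is $/M tokens; Rerank is $/1k queries.
CHAT_RATES = {
    "command-r-plus": (2.50, 10.00),
    "command-r": (0.15, 0.60),
    "command-r7b": (0.0375, 0.15),
}
EMBED_PER_M = 0.10
RERANK_PER_1K = 2.00

def chat_cost(model: str, in_tok: int, out_tok: int) -> float:
    rate_in, rate_out = CHAT_RATES[model]
    return round(in_tok / 1e6 * rate_in + out_tok / 1e6 * rate_out, 6)

def embed_cost(tokens: int) -> float:
    return round(tokens / 1e6 * EMBED_PER_M, 6)

def rerank_cost(queries: int) -> float:
    return round(queries / 1e3 * RERANK_PER_1K, 6)

print(chat_cost("command-r", 1_000_000, 100_000))  # 0.21
print(rerank_cost(500))                            # 1.0
```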
Endpoints:
- POST /chat · Chat with Command R / R+ with optional tool use and RAG
- POST /embed · Generate embeddings for semantic search and classification
- POST /rerank · Rerank search results by relevance to a query
- POST /classify · Classify text into predefined categories

Sample request:

```bash
curl "https://api.cohere.com/v2/chat" \
  -H "Authorization: Bearer $COHERE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"command-r-plus","messages":[{"role":"user","content":"Summarize the benefits of RAG for enterprise AI."}]}'
```

Sample response:

```json
{
  "id": "abc123",
  "message": {
    "role": "assistant",
    "content": [{"type":"text","text":"RAG (Retrieval-Augmented Generation) benefits include..."}]
  },
  "finish_reason": "COMPLETE",
  "usage": {"billed_units": {"input_tokens": 24, "output_tokens": 112}}
}
```

---

### Together AI API

Provider: Together AI
Base URL: https://api.together.xyz/v1
Docs: https://docs.together.ai
Auth type: Bearer Token
Auth example: Authorization: Bearer YOUR_TOGETHER_API_KEY
Has free tier: No
Starting at: from $0.10/mo
Free quota: $1 free credit on signup
Rate limit: 60 requests/min (default) · Scales with plan
Tags: open-source, llama, deepseek, fine-tuning, serverless, openai-compatible, vision

Description: Together AI provides cloud infrastructure for running open-source AI models at scale. The API is fully OpenAI-compatible and supports 200+ models including Llama 3.1 405B, Qwen 2.5, Mistral, DeepSeek, Stable Diffusion, and more. Features include serverless inference (pay per token), dedicated GPU clusters, fine-tuning with LoRA, vision models, and function calling. Popular for research, enterprise AI, and teams migrating from proprietary models.

Auth: Create an API key at api.together.ai. The endpoint is OpenAI-compatible — pass your key as a Bearer token in the Authorization header.

Pricing: Llama 3.1 8B: $0.18/M input · $0.18/M output. Llama 3.1 70B: $0.88/M input · $0.88/M output. Llama 4 Scout: available. DeepSeek R1: $1.25/M input · $1.25/M output.
Image gen: $0.008 per image. No free tier; minimum $5 credit purchase.

Endpoints:
- POST /chat/completions · Chat completions (OpenAI-compatible)
- POST /completions · Text completions
- POST /embeddings · Generate text embeddings
- POST /images/generations · Image generation (Stable Diffusion, FLUX)
- GET /models · List all available models with pricing
- POST /fine-tunes · Start a fine-tuning job with LoRA

Sample request:

```bash
curl "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo","messages":[{"role":"user","content":"Write a Python function to parse JSON safely"}],"max_tokens":200}'
```

Sample response:

```json
{
  "id": "890ab123",
  "model": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Here's a Python function to safely parse JSON:\n\nimport json\n\ndef safe_json_parse(data):\n    try:\n        return json.loads(data)\n    except json.JSONDecodeError:\n        return None"
    },
    "finish_reason": "stop"
  }],
  "usage": {"prompt_tokens": 20, "completion_tokens": 68}
}
```

---

### ElevenLabs API

Provider: ElevenLabs
Base URL: https://api.elevenlabs.io
Docs: https://api.elevenlabs.io/
Auth type: API Key Header
Auth example: xi-api-key: YOUR_API_KEY
Has free tier: Yes
Starting at: Free tier available
Free quota: 10,000 characters / month
Rate limit: 2 concurrent requests (free); up to 10 (paid)
Tags: ai, voice, text-to-speech, speech synthesis, audio

Description: This is the documentation for the ElevenLabs API. You can use this API to access the service programmatically; requests are authenticated with your xi-api-key.
You can view your xi-api-key under the 'Profile' tab on https://beta.elevenlabs.io. The API is experimental, so all endpoints are subject to change.

Auth: API key in the xi-api-key request header

Pricing: Free: 10k chars/mo. Starter $5/mo (30k chars). Creator $22/mo (100k chars). Pro $99/mo (500k chars). Scale $330/mo (2M chars).

Endpoints:
- GET /v1/history · Get Generated Items
- POST /v1/history/delete · Delete History Items
- POST /v1/history/download · Download History Items
- DELETE /v1/history/{history_item_id} · Delete History Item
- GET /v1/history/{history_item_id}/audio · Get Audio From History Item
- POST /v1/text-to-speech/{voice_id} · Text To Speech
- POST /v1/text-to-speech/{voice_id}/stream · Text To Speech (streaming)
- GET /v1/user · Get User Info

Sample request:

```bash
curl -X GET 'https://api.elevenlabs.io/v1/history' \
  -H 'xi-api-key: YOUR_API_KEY'
```

Sample response:

```json
{}
```

---

### PowerTools Developer

Provider: Apptigent
Base URL: https://connect.apptigent.com/api/utilities
Docs: https://www.apptigent.com/help/
Auth type: API Key Header
Auth example: X-IBM-Client-Id: YOUR_API_KEY
Has free tier: Yes
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: Not officially published
Tags: ai, apptigent

Description: Apptigent PowerTools Developer Edition is a powerful suite of API endpoints for custom applications running on any stack. Manipulate text, modify collections, format dates and times, convert currency, perform advanced mathematical calculations, shorten URLs, encode strings, convert text to speech, and more.

Auth: API key in the request header (X-IBM-Client-Id)

Pricing: Free with account. Developer productivity toolset API.
Endpoints:
- POST /AddToCollection · Collections - Add to collection
- POST /CSVtoJSON · Data - CSV to JSON
- POST /CalculateAbsolute · Math - Calculate Absolute
- POST /CalculateAddition · Math - Calculate Addition
- POST /CalculateAverage · Math - Calculate Average
- POST /CalculateCosine · Math - Calculate Cosine
- POST /CalculateDivision · Math - Calculate Division
- POST /CalculateLogarithm · Math - Calculate Logarithm

Sample request:

```bash
curl -X POST 'https://connect.apptigent.com/api/utilities/AddToCollection' \
  -H 'X-IBM-Client-Id: YOUR_API_KEY'
```

Sample response:

```json
{}
```

---

### DigitalNZ API

Provider: DigitalNZ
Base URL: https://api.digitalnz.org
Docs: https://api.digitalnz.org
Auth type: API Key (query parameter)
Auth example: GET /endpoint?api_key=YOUR_KEY
Has free tier: Yes
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: 10,000 req/day
Tags: ai, digitalnz

Description: OpenAPI specification of DigitalNZ's Record API. For more information about the API see digitalnz.org/developers. To learn more about the metadata/fields used in the API, see the [Metadata Dictionary](https://docs.google.com/document/pub?id=1Z3I_ckQWjnQQ4SzpORb

Auth: API key as a query parameter (api_key)

Pricing: Free with API key. DigitalNZ New Zealand digital heritage collections.

Endpoints:
- GET /records.{format} · Run queries against the DigitalNZ metadata search service.
- GET /records/{record_id}.{format} · View metadata associated with a single record.
- GET /records/{record_id}/more_like_this.{format} · The "More Like This" call returns records similar to the specified record.
Sample request:

```bash
# The API key is passed as a query parameter; replace {format} with e.g. json
curl 'https://api.digitalnz.org/records.json?api_key=YOUR_KEY'
```

Sample response:

```json
{}
```

---

### Fire Financial Services Business API

Provider: Fire Financial Services
Base URL: https://api.fire.com/business
Docs: https://docs.fire.com
Auth type: Bearer Token
Auth example: Authorization: Bearer YOUR_TOKEN
Has free tier: No
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: Not officially published
Tags: ai, fire

Description: The fire.com API allows you to deeply integrate Business Account features into your application or back-office systems. The API provides read access to your profile, accounts and transactions, event-driven notifications of activity on the account, and payment initiation via batches.

Auth: Bearer token in the Authorization header

Pricing: Paid. Fire.com Irish/UK business banking API; pricing per transaction.

Endpoints:
- GET /v1/accounts · List all fire.com Accounts
- POST /v1/accounts · Add a new account
- GET /v1/accounts/{ican} · Retrieve the details of a fire.com Account
- GET /v1/accounts/{ican}/transactions · List transactions for an account (v1)
- GET /v1/accounts/{ican}/transactions/filter · Filtered list of transactions for an account (v1)
- POST /v1/apps · Create a new API Application
- POST /v1/apps/accesstokens · Authenticate with the API
- GET /v1/aspsps · Get list of ASPSPs / Banks

Sample request:

```bash
curl -X GET 'https://api.fire.com/business/v1/accounts' \
  -H 'Authorization: Bearer YOUR_TOKEN'
```

Sample response:

```json
{}
```

---

### goog.io | Unofficial Google Search API

Provider: goog.io
Base URL: https://api.goog.io
Docs: https://goog.io
Auth type: API Key Header
Auth example: apikey: YOUR_API_KEY
Has free tier: Yes
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: Not officially published
Tags: ai, goog

Description: This is the OpenAPI V3 documentation for https://api.goog.io, an API for performing Google searches. Extremely fast and accurate. Zero proxies. Clean USA IPs. Simple to use, but advanced enough to support special parameters such as language, country, and geographic locality.

Auth: API key in the request header (apikey)

Pricing: Free unofficial Google Search API. Third-party wrapper; no paid tiers.

Endpoints:
- GET / · Status
- GET /v1/crawl/{query} · Crawl
- GET /v1/images/{query} · Images
- GET /v1/news/{query} · News
- GET /v1/search/{query} · Search
- POST /v1/serp/ · SERP

Sample request:

```bash
curl -X GET 'https://api.goog.io/' \
  -H 'apikey: YOUR_API_KEY'
```

Sample response:

```json
{}
```

---

### Image-Charts

Provider: Image-Charts
Base URL: https://image-charts.com
Docs: https://image-charts.com
Auth type: none
Auth example: # No auth required
Has free tier: Yes
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: See documentation
Tags: ai, image-charts

Description: Charts, simple as a URL. A safe and fast replacement for Google Image Charts.

Auth: No authentication required

Pricing: Free — open public API, no authentication required.
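Since charts here are just URLs, a client only needs to assemble a query string. A sketch assuming the Google Image Charts parameter names (cht for chart type, chs for size, chd for data) that Image-Charts positions itself as a replacement for; confirm the parameter names against the docs before relying on them:

```python
from urllib.parse import urlencode

# Build an Image-Charts URL. Parameter names (cht=type, chs=size, chd=data)
# follow the Google Image Charts convention; verify against the docs.
def chart_url(chart_type: str, size: str, data: list) -> str:
    params = {
        "cht": chart_type,                              # e.g. "bvg" for a bar chart
        "chs": size,                                    # e.g. "400x200"
        "chd": "t:" + ",".join(str(v) for v in data),   # text-format data series
    }
    return "https://image-charts.com/chart?" + urlencode(params)

print(chart_url("bvg", "400x200", [10, 20, 30]))
# https://image-charts.com/chart?cht=bvg&chs=400x200&chd=t%3A10%2C20%2C30
```

The resulting URL can be dropped straight into an `<img src>` tag or fetched with curl.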
Endpoints:
- GET /chart · Image-Charts API
- GET /chart.js/2.8.0 · Chart.js as image API

Sample request:

```bash
curl 'https://image-charts.com/chart' --output chart.png
```

Sample response:

```json
{}
```

---

### LanguageTool API

Provider: LanguageTool
Base URL: https://api.languagetoolplus.com/v2
Docs: https://api.languagetoolplus.com/v2
Auth type: none
Auth example: # No auth required
Has free tier: Yes
Starting at: Enterprise / contact sales
Free quota: None
Rate limit: See documentation
Tags: ai, languagetool

Description: Check texts for style and grammar issues with LanguageTool. Please consider the following default limitations: