Tutorial
WorldRouter is a unified API gateway that routes your requests to LLM providers — OpenAI, Anthropic, Google, and more — through a single endpoint at the lowest per-token prices. Compatible with the OpenAI SDK; most integrations only need a new base URL and API key.
Point your OpenAI-compatible client at this base URL to route requests through WorldRouter:
https://control-api.worldrouter.ai/v1

Make your first API call:
```bash
curl https://control-api.worldrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{ "role": "user", "content": "Hello" }]
  }'
```

Use WorldRouter as the OpenAI-compatible endpoint for Codex CLI, the VS Code extension, or the Codex app.
Before you begin
- Install Codex CLI — follow the official installation guide at github.com/openai/codex.
- Create a WorldRouter API key in the dashboard → API Keys page.
- If Codex is already signed in with another provider, log out first before switching the endpoint.
1. Export the WorldRouter key as OPENAI_API_KEY in your shell profile.
2. Verify your key works with a quick cURL request.
3. Create or update ~/.codex/config.toml to point at the WorldRouter proxy.
4. Launch Codex from your project directory.
Environment
Add these exports to your shell profile (~/.zshrc or ~/.bashrc) and restart your shell, or run them directly in your current session.
```bash
export WORLDROUTER_API_KEY="your_api_key"
export OPENAI_API_KEY="$WORLDROUTER_API_KEY"
```

This overrides OPENAI_API_KEY in your shell. If you also use the official OpenAI API directly, consider using a project-level .env file or a tool like direnv to scope the override to specific directories.

Verify with cURL
Run a direct request to verify the key and routed model before launching Codex.
```bash
curl https://control-api.worldrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{ "role": "user", "content": "Hello" }]
  }'
```

Codex config
Add or update this block in ~/.codex/config.toml. This is required for all Codex surfaces (CLI, VS Code extension, and Codex app).
```toml
model_provider = "custom_proxy"
model = "gpt-5.4"
model_reasoning_effort = "high"

[model_providers.custom_proxy]
name = "WorldRouter custom proxy"
base_url = "https://control-api.worldrouter.ai/v1"
env_key = "OPENAI_API_KEY"
```

Launch
Launch the Codex surface that matches how you work.
```bash
# CLI
codex

# VS Code extension
cd your-project
code .

# Codex app
open -a Codex
```

Troubleshooting
- Authentication error (401 Unauthorized)
- Model not found (404)
- Timeout or connection refused
Point Claude Code at WorldRouter through the Anthropic-compatible environment it already reads.
Before you begin
- Install Claude Code — run `npm install -g @anthropic-ai/claude-code` or follow the official docs.
- Create a WorldRouter API key in the dashboard → API Keys page.
1. Set ANTHROPIC_AUTH_TOKEN from the WorldRouter key and use the WorldRouter Anthropic-compatible base URL.
2. Choose the routed model in ANTHROPIC_MODEL, then launch Claude Code from the same shell.
Environment
Add these exports to your shell profile (~/.zshrc or ~/.bashrc) and restart your shell, or run them directly in your current session. ANTHROPIC_API_KEY must be set to empty — Claude Code prefers it over ANTHROPIC_AUTH_TOKEN, so leaving it non-empty would bypass WorldRouter.
```bash
export WORLDROUTER_API_KEY="your_api_key"
export ANTHROPIC_AUTH_TOKEN="$WORLDROUTER_API_KEY"
export ANTHROPIC_API_KEY=""
export ANTHROPIC_MODEL="claude-sonnet-4-6"
export ANTHROPIC_BASE_URL="https://control-api.worldrouter.ai"
```

Launch
Launch Claude Code using the configured shell environment.
```bash
claude
```

Troubleshooting
- Authentication error (401 Unauthorized)
- Model not found
Configure OpenCode CLI with a WorldRouter provider; the same config is then available in the app.
Before you begin
- Install OpenCode — follow the instructions at opencode.ai.
- Create a WorldRouter API key in the dashboard → API Keys page.
1. Export the WorldRouter key in your shell and reuse the same OpenAI-compatible base URL.
2. If ~/.config/opencode/opencode.jsonc does not exist yet, create it and add a WorldRouter provider.
3. Launch opencode, then select the WorldRouter provider model with the /model command.
Environment
Add these exports to your shell profile (~/.zshrc or ~/.bashrc) and restart your shell, or run them directly in your current session.
```bash
export WORLDROUTER_API_KEY="your_api_key"
export OPENAI_API_KEY="$WORLDROUTER_API_KEY"
export OPENAI_BASE_URL="https://control-api.worldrouter.ai/v1"
```

This overrides OPENAI_API_KEY in your shell. If you also use the official OpenAI API directly, consider using a project-level .env file or a tool like direnv to scope the override to specific directories.

OpenCode provider
Add this provider block to ~/.config/opencode/opencode.jsonc.
```jsonc
{
  // OpenCode configuration
  // Docs: https://opencode.ai/docs
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "worldrouter": {
      "name": "WorldRouter",
      "npm": "@ai-sdk/openai-compatible",
      "models": {
        "gpt-5.4": { "name": "WorldRouter gpt-5.4" },
        "gpt-5.4-mini": { "name": "WorldRouter gpt-5.4-mini" },
        "claude-opus-4-6": { "name": "WorldRouter claude-opus-4-6" },
        "claude-sonnet-4-6": { "name": "WorldRouter claude-sonnet-4-6" }
      },
      "options": {
        "baseURL": "https://control-api.worldrouter.ai/v1",
        "apiKey": "your_api_key"
      }
    }
  }
}
```

Launch
Start in your project, then pick the WorldRouter model inside OpenCode.
```bash
cd your-project
opencode
# then run /model inside OpenCode
```

Troubleshooting
- Authentication error (401 Unauthorized)
- Provider or model not showing up
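When the provider does not show up, a common cause is a config file that no longer parses. Because opencode.jsonc allows // comments, plain json.loads rejects it; a minimal comment-tolerant reader can be sketched in Python (illustrative only, and far less complete than a real JSONC parser, which also handles /* */ blocks and trailing commas):

```python
import json
import re

def load_jsonc(text: str) -> dict:
    """Strip full-line // comments, then parse the rest as ordinary JSON."""
    stripped = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    return json.loads(stripped)

# Trimmed version of the provider config above.
CONFIG = """{
  // OpenCode configuration
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "worldrouter": {
      "name": "WorldRouter",
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "https://control-api.worldrouter.ai/v1" }
    }
  }
}"""

cfg = load_jsonc(CONFIG)
print(cfg["provider"]["worldrouter"]["options"]["baseURL"])  # https://control-api.worldrouter.ai/v1
```

If json.loads raises here, fix the reported line in opencode.jsonc and relaunch.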
Available Models
WorldRouter routes requests to upstream providers. Use the model IDs below in your API calls. For live pricing and the full catalog, visit the Models page.

| Vendor | Model IDs |
|---|---|
| Anthropic | claude-opus-4-6, claude-sonnet-4-6 |
| OpenAI | gpt-5.4, gpt-5.4-mini |
| Google | gemini-3.1-pro-preview, gemini-3.1-flash-lite-preview |
| Alibaba | qwen3.5-plus, qwen3.5-flash, qwen3.6-plus, qwen3-coder-plus |
| Moonshot | kimi-k2.5 |
| MiniMax | MiniMax-M2.5 |
| Zhipu | glm-5 |
| DeepSeek | deepseek-v3.2 |
API Reference
WorldRouter exposes an OpenAI-compatible /chat/completions endpoint. Any library or tool that supports a custom base URL can connect with zero code changes.
```bash
curl https://control-api.worldrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{ "role": "user", "content": "Hello" }]
  }'
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="your_api_key",
    base_url="https://control-api.worldrouter.ai/v1",
)

response = client.chat.completions.create(
    model="gpt-5.4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your_api_key",
  baseURL: "https://control-api.worldrouter.ai/v1",
});

const response = await client.chat.completions.create({
  model: "gpt-5.4",
  messages: [{ role: "user", content: "Hello" }],
});
console.log(response.choices[0].message.content);
```

Authentication
All requests require a valid API key passed in the Authorization header as a Bearer token.
```
Authorization: Bearer your_api_key
```

API keys are scoped to a team. Create and manage keys in the API Keys section of the dashboard.
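The same authenticated request the cURL example makes can be assembled with Python's stdlib urllib, without sending it, to confirm the header shape (your_api_key is a placeholder for a real dashboard key):

```python
import json
import urllib.request

API_KEY = "your_api_key"  # placeholder

# Build (but do not send) the request from the cURL example above.
req = urllib.request.Request(
    "https://control-api.worldrouter.ai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer your_api_key
```

Calling urllib.request.urlopen(req) would then send it; a 401 response at that point means the key itself is the problem, not the request shape.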
Troubleshooting
Common errors you may encounter when calling the WorldRouter API.
| Code | Meaning | Fix |
|---|---|---|
| 401 | Invalid or missing API key | Check that your key is set correctly and hasn’t been revoked in the dashboard. |
| 402 | Insufficient credits | Top up on the Billing page, then retry. |
| 404 | Model not found | Verify the model ID matches the Models page. IDs are case-sensitive. |
| 429 | Rate limited | Back off and retry with exponential delay. Consider spreading load across models. |
| 500 | Upstream provider error | Retry the request. If persistent, try a different model or contact support. |
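The retry guidance for 429 and 500 can be sketched as a small wrapper with exponential backoff and jitter. The call_with_backoff helper and its (status, body) return shape are illustrative assumptions, not a WorldRouter API:

```python
import random
import time

RETRYABLE = {429, 500}  # per the table above: back off and retry

def call_with_backoff(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Run call() (which returns an HTTP status code and a body), retrying
    retryable statuses with exponentially growing delay plus a little jitter."""
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE or attempt == max_attempts - 1:
            return status, body
        sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Usage with a stand-in for the real HTTP call: two 429s, then success.
responses = iter([(429, ""), (429, ""), (200, "ok")])
status, body = call_with_backoff(lambda: next(responses), sleep=lambda _: None)
print(status, body)  # 200 ok
```

Non-retryable statuses such as 401, 402, and 404 are returned immediately, since retrying cannot fix a bad key, empty balance, or wrong model ID.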
Privacy
WorldRouter acts as a routing proxy — your API requests are forwarded to upstream model providers to generate responses. We collect usage metadata (token counts, model IDs, timestamps) for billing and analytics, but do not use the content of your prompts or completions to train any models.
Your source code stays on your machine. Coding agents like Codex, Claude Code, and OpenCode process files locally and only send the messages you see in the conversation to the API endpoint.
For full details on data collection, retention, and your rights, read our Privacy Policy. Questions? Contact privacy@worldclaw.ai.