Codex CLI
Codex is OpenAI’s open-source terminal coding agent. It reads your project, generates and edits code, and runs commands in your terminal — and it’s multi-provider, so you can point it at WorldRouter without giving up the OpenAI-compatible workflow.
Prerequisites
- A terminal you can open: Terminal (macOS), iTerm2, GNOME Terminal / Konsole / xterm (Linux), or Windows Terminal / PowerShell (Windows).
- A WorldRouter API key — create one in the API Keys dashboard. Keys are shown only once, so copy it somewhere safe.
- Credits on your WorldRouter account — top up on the Credits page so the key can make paid calls.
- Know which shell you use, since you'll edit its profile in step 4. macOS users are usually on `zsh` (the default since macOS Catalina); most Linux users are on `bash`. Run `echo $SHELL` to confirm. Windows users will use PowerShell.
- Node.js 20 or later if you plan to install via npm — download from https://nodejs.org, then verify with `node --version`. Skip this if you'll use the Homebrew cask on macOS.
- If Codex is already signed in with another provider, log out first before switching the endpoint.
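The shell and Node checks above can be run in one go. A small sketch for macOS/Linux:

```shell
# Confirm which shell you're on and whether Node.js is available
echo "shell: $SHELL"
node --version 2>/dev/null || echo "Node.js not found (only needed for the npm install path)"
```

If the Node line reports a version below 20, upgrade before using the npm installer.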
Quick Start
1. Install Codex CLI
Pick one installer. `npm install -g` requires Node.js 20 or later already on your PATH — install Node from https://nodejs.org and verify with `node --version`. The Homebrew cask is a macOS-only alternative that bundles its own runtime, so no Node setup is required.
```shell
npm install -g @openai/codex
# or, on macOS:
brew install --cask codex
```

Then verify the binary is on your PATH:

```shell
codex --version
```

A successful install prints a single version line such as `0.x.y`. If you see `codex: command not found`, open a fresh terminal so the new PATH entry is picked up, or follow the installer's manual PATH instructions.
Keep Codex up to date. OpenAI ships new GPT families (e.g. gpt-5.5) frequently, and older Codex releases may not yet recognize the newest model IDs — calls can fail with "model not found" or fall back to a stale default. If you plan to use the latest model on the Models page, upgrade Codex before editing `~/.codex/config.toml`: run `npm install -g @openai/codex` (or `brew upgrade codex` if you used Homebrew) so your CLI is on the most recent version.
2. Get your WorldRouter API key
Sign in to the WorldRouter dashboard and open the API Keys page. Click Create API key, give it a recognizable name (e.g. codex-cli), and copy the value — it is only shown once.
3. Configure Codex for WorldRouter
Codex reads its settings from a plain-text file at:
- macOS / Linux: `~/.codex/config.toml`
- Windows: `%USERPROFILE%\.codex\config.toml`
This is a regular file on disk, not something you edit in the terminal. Open it with any plain-text editor — VS Code, Notepad, TextEdit (set it to Format → Make Plain Text), nano, vim, etc. The same file is read by Codex CLI, the VS Code extension, and the Codex desktop app.
- If the file does not exist (this is the most common case the first time you set up Codex), create the `~/.codex` folder and a new empty `config.toml` inside it, then paste the entire block below.
- If the file already exists, add or update the four fields (`model_provider`, `model`, `model_reasoning_effort`, and the whole `[model_providers.worldrouter]` section) — do not replace the file wholesale, or you will lose any unrelated settings you already had.
```toml
# Tell Codex which provider and model to use by default
model_provider = "worldrouter"
model = "gpt-5.4"
model_reasoning_effort = "high"

[model_providers.worldrouter]
name = "WorldRouter"
# Base URL must include the /v1 suffix
base_url = "https://inference-api.worldrouter.ai/v1"
# Codex reads the API key from this environment variable
env_key = "WORLDROUTER_API_KEY"
```

After saving the file, you can move on to step 4. No restart of the shell is needed for the config file — newly opened Codex sessions pick up the change automatically.
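If you are starting from scratch on macOS or Linux, you can create the folder and an empty file from the terminal before pasting the config block above. A sketch:

```shell
# Create the config folder and an empty config.toml if they don't exist yet
mkdir -p "$HOME/.codex"
touch "$HOME/.codex/config.toml"
# Confirm the file is in place
ls -l "$HOME/.codex/config.toml"
```

Then open `~/.codex/config.toml` in your editor of choice and paste the configuration.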
4. Set your API key
Codex reads the key from the environment variable named in `env_key` — `WORLDROUTER_API_KEY`. Using a dedicated variable (instead of `OPENAI_API_KEY`) avoids any collision with an existing OpenAI key.
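Before touching any profile files, you can check whether the variable is already set in your current shell. A minimal sketch for macOS/Linux:

```shell
# Report whether the key is visible to processes started from this shell
if [ -n "${WORLDROUTER_API_KEY:-}" ]; then
  echo "WORLDROUTER_API_KEY is set"
else
  echo "WORLDROUTER_API_KEY is not set in this shell"
fi
```

If it reports "not set", follow the OS-specific instructions below.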
Use the section that matches your OS — running the wrong one (for example, setx on macOS) will not work.
macOS / Linux
Add this export to ~/.zshrc, ~/.bashrc, or ~/.config/fish/config.fish, or run it directly in your current session.
```shell
export WORLDROUTER_API_KEY="your_api_key"
```

If you edited your shell profile, reload it so the new value takes effect in the current terminal (or just open a fresh one).

```shell
source ~/.zshrc   # or: source ~/.bashrc
```

Windows PowerShell
Set the variable for the current session.
```powershell
$env:WORLDROUTER_API_KEY = "your_api_key"
```

To persist it across sessions, run `setx` instead and restart PowerShell.

```powershell
setx WORLDROUTER_API_KEY "your_api_key"
```

5. Start Codex
Launch the Codex surface that matches how you work.
```shell
# CLI
codex

# Codex app
open -a Codex
```

6. Verify it works
Send a one-shot test prompt from the shell where you set WORLDROUTER_API_KEY:
```shell
codex "Reply with 'ok' if you can see this."
```

A short reply (typically just `ok`) confirms three things at once: Codex found `~/.codex/config.toml`, the WorldRouter base URL is reachable, and your API key is valid. The call shows up on the Activity dashboard within a few seconds.
If the command returns an error instead of a reply, jump to Troubleshooting below — the most common causes are an unset `WORLDROUTER_API_KEY`, a model ID that is not on the Models page, or a `base_url` missing the `/v1` suffix.
Configuration Reference
Core settings
| Setting | Description | Example |
|---|---|---|
| `model_provider` | Top-level provider key. Must match a `[model_providers.<name>]` section in the same file. | `"worldrouter"` |
| `model` | Default model used for every session. Must be a model ID listed on the WorldRouter Models page. | `"gpt-5.4"` |
| `model_reasoning_effort` | Reasoning depth for reasoning-capable models. One of `"high"`, `"medium"`, `"low"`. | `"high"` |
| `show_raw_agent_reasoning` | Stream the model's hidden reasoning back to the terminal. | `false` |
WorldRouter provider block
The provider block tells Codex how to reach WorldRouter and which environment variable holds the API key.
For reference, here is the `[model_providers.worldrouter]` section in isolation:

```toml
[model_providers.worldrouter]
name = "WorldRouter"
base_url = "https://inference-api.worldrouter.ai/v1"
env_key = "WORLDROUTER_API_KEY"
```

`base_url` must include the `/v1` suffix — Codex appends paths like `/chat/completions` directly to it. `env_key` is the name of the env var Codex should read; the value comes from your shell (see step 4).
Project trust levels
Codex supports per-project sections in `~/.codex/config.toml`, matched by absolute path. When Codex is launched from a path that matches one of these sections, the project-scoped settings are layered on top of the top-level config.

```toml
[projects."/path/to/your/project"]
trust_level = "trusted"
```

If a `[projects."/path/to/your/project"]` section sets `model_provider` or `model`, the WorldRouter settings above will be silently overridden whenever Codex is launched from that path. Either mirror the same provider and model inside the project section, or remove the override.
Troubleshooting
Authentication error (401 Unauthorized)

Run `echo $WORLDROUTER_API_KEY` to verify the key is set in the shell you launched Codex from. If it's empty, re-export it and reload your shell profile.

Model not found (404)

Check that `model` in `~/.codex/config.toml` matches a model ID listed on the WorldRouter Models page, and upgrade Codex if the model is newer than your installed CLI.

Timeout or connection refused

Check that `base_url` in your Codex config is correct and reachable. Try a direct cURL request against `/v1/chat/completions` with your API key to isolate whether the issue is with Codex or the network.
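The direct request can be sketched as follows, assuming the standard OpenAI-compatible chat completions request shape and the `gpt-5.4` model ID used earlier (substitute a model your account can access):

```shell
# Call the WorldRouter endpoint directly, bypassing Codex entirely.
# Assumes WORLDROUTER_API_KEY is exported in this shell.
curl https://inference-api.worldrouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $WORLDROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Say ok"}]
  }'
```

A JSON response containing a `choices` array means the key and endpoint are fine and the problem is on the Codex side; a connection error here points at the network or at `base_url`.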