Configuration
Config File Discovery
Claudex searches for config files in this order:
1. `$CLAUDEX_CONFIG` environment variable
2. `./claudex.toml` (current directory)
3. `./.claudex/config.toml` (current directory)
4. Parent directories (up to 10 levels), checking both patterns
5. `~/.config/claudex/config.toml` (XDG, checked before platform-specific paths)
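The search order above can be sketched as follows. This is a hypothetical reimplementation for illustration, not Claudex's actual code; the platform-specific fallback paths are omitted:

```python
import os
from pathlib import Path

def config_search_paths(cwd: Path, home: Path, max_levels: int = 10) -> list[Path]:
    """Candidate config paths in the precedence order described above (sketch)."""
    paths = []
    env = os.environ.get("CLAUDEX_CONFIG")
    if env:
        paths.append(Path(env))  # explicit override wins
    # Current directory, then parent directories, checking both patterns
    d = cwd
    for _ in range(max_levels + 1):
        paths.append(d / "claudex.toml")
        paths.append(d / ".claudex" / "config.toml")
        if d.parent == d:  # reached filesystem root
            break
        d = d.parent
    # XDG path; platform-specific paths would follow here
    paths.append(home / ".config" / "claudex" / "config.toml")
    return paths
```

The first existing path in the returned list would be the one loaded.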
```sh
# Show loaded config path and search order
claudex config

# Create a local config in the current directory
claudex config --init
```

Global Settings
```toml
# Path to claude binary (default: "claude" from PATH)
# claude_binary = "/usr/local/bin/claude"

# Proxy settings
proxy_port = 13456
proxy_host = "127.0.0.1"

# Log level: trace, debug, info, warn, error
log_level = "info"

# Model aliases (shorthand → full model name)
[model_aliases]
grok3 = "grok-3-beta"
gpt4o = "gpt-4o"
ds3 = "deepseek-chat"
```

Profiles
Each profile represents an AI provider connection. There are three provider types:
DirectAnthropic
For providers that natively support the Anthropic Messages API. Requests are forwarded with minimal modification.
[[profiles]]name = "anthropic"provider_type = "DirectAnthropic"base_url = "https://api.anthropic.com"api_key = "sk-ant-..."default_model = "claude-sonnet-4-20250514"priority = 100enabled = trueCompatible providers: Anthropic, MiniMax
OpenAICompatible
For providers using the OpenAI Chat Completions API. Claudex automatically translates between Anthropic and OpenAI protocols.
[[profiles]]name = "grok"provider_type = "OpenAICompatible"base_url = "https://api.x.ai/v1"api_key = "xai-..."default_model = "grok-3-beta"backup_providers = ["deepseek"]priority = 100enabled = trueCompatible providers: Grok (xAI), OpenAI, DeepSeek, Kimi/Moonshot, GLM (Zhipu), OpenRouter, Ollama, vLLM, LM Studio
OpenAIResponses
For providers using the OpenAI Responses API (e.g., ChatGPT/Codex subscriptions). Claudex translates between the Anthropic Messages API and the OpenAI Responses API.
[[profiles]]name = "codex-sub"provider_type = "OpenAIResponses"base_url = "https://chatgpt.com/backend-api/codex"default_model = "gpt-4o"auth_type = "oauth"oauth_provider = "openai"Compatible providers: ChatGPT/Codex subscriptions (via Codex CLI)
Profile Fields
| Field | Default | Description |
|---|---|---|
| `name` | required | Unique profile identifier |
| `provider_type` | `DirectAnthropic` | `DirectAnthropic`, `OpenAICompatible`, or `OpenAIResponses` |
| `base_url` | required | Provider API endpoint |
| `api_key` | `""` | API key (plaintext) |
| `api_key_keyring` | — | Read API key from OS keychain instead |
| `default_model` | required | Default model to use |
| `auth_type` | `"api-key"` | `"api-key"` or `"oauth"` |
| `oauth_provider` | — | OAuth provider (`claude`, `openai`, `google`, `qwen`, `kimi`, `github`); required when `auth_type = "oauth"` |
| `backup_providers` | `[]` | Failover profile names |
| `custom_headers` | `{}` | Extra HTTP headers |
| `extra_env` | `{}` | Extra environment variables for Claude |
| `priority` | `100` | Priority for smart routing |
| `enabled` | `true` | Whether this profile is active |
| `[profiles.models]` | — | Model slot mapping table (`haiku`, `sonnet`, `opus` fields) |
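Putting several of these fields together, a fuller profile might look like the following. All values here are illustrative, including the DeepSeek endpoint URL and the extra environment variable:

```toml
[[profiles]]
name = "deepseek"
provider_type = "OpenAICompatible"
base_url = "https://api.deepseek.com/v1"
api_key_keyring = "deepseek-api-key"   # or api_key = "sk-..."
default_model = "deepseek-chat"
backup_providers = ["grok"]            # fail over to the "grok" profile
priority = 50
enabled = true

[profiles.custom_headers]
"X-Custom-Header" = "value"

[profiles.extra_env]
MY_EXTRA_VAR = "1"
```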
Interactive Setup
The easiest way to add a profile is the interactive wizard:
```sh
claudex profile add
```

It guides you through provider selection, API key entry (with optional keyring storage), model selection, and connectivity testing.
Keyring Integration
Store API keys securely in your OS keychain instead of plaintext config:
[[profiles]]name = "grok"api_key_keyring = "grok-api-key" # reads from OS keychainSupported backends:
- macOS: Keychain
- Linux: Secret Service (GNOME Keyring / KDE Wallet)
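Conceptually, key resolution can be sketched like this. This is hypothetical code with the keychain lookup injected as a callable, not Claudex's implementation:

```python
def resolve_api_key(profile: dict, keyring_lookup) -> str:
    """Prefer a keyring entry over a plaintext api_key (sketch)."""
    entry = profile.get("api_key_keyring")
    if entry:
        secret = keyring_lookup(entry)  # e.g. macOS Keychain or Secret Service
        if secret is not None:
            return secret
    return profile.get("api_key", "")
```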
OAuth Subscription Auth
Instead of API keys, you can authenticate with your existing provider subscription via OAuth. This is useful if you have a Claude Pro/Team, ChatGPT Plus, or other subscription plan.
1. Set the profile’s `auth_type` to `"oauth"` and specify the `oauth_provider`:

```toml
[[profiles]]
name = "chatgpt-oauth"
provider_type = "OpenAICompatible"
base_url = "https://api.openai.com/v1"
default_model = "gpt-4o"
auth_type = "oauth"
oauth_provider = "openai"
```

2. Log in with the `auth` command:

```sh
claudex auth login openai
```

3. Check your auth status:

```sh
claudex auth status
```

Supported Providers
| Provider | `oauth_provider` | Token Source |
|---|---|---|
| Claude | `claude` | Reads from `~/.claude` (Claude Code’s native config) |
| OpenAI | `openai` | Reads from `~/.codex/auth.json` (Codex CLI) |
| Google | `google` | OAuth device code flow |
| Qwen | `qwen` | OAuth device code flow |
| Kimi | `kimi` | OAuth device code flow |
| GitHub | `github` | OAuth device code flow |
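For the device-code providers, the flow follows the standard OAuth 2.0 device authorization grant (RFC 8628). A minimal polling loop, with the HTTP calls stubbed out as injected callables (a hypothetical sketch, not Claudex's code), might look like:

```python
import time

def device_code_login(request_code, poll_token, sleep=time.sleep) -> str:
    """Sketch of an RFC 8628 device-code flow; HTTP calls are injected stubs."""
    grant = request_code()  # POST to the device authorization endpoint
    print(f"Visit {grant['verification_uri']} and enter code {grant['user_code']}")
    while True:
        resp = poll_token(grant["device_code"])  # POST to the token endpoint
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error", "unknown error"))
        sleep(grant.get("interval", 5))  # provider-specified polling interval
```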
Gateway Auth Mode
When using OAuth profiles, Claudex sets `ANTHROPIC_AUTH_TOKEN` (not `ANTHROPIC_API_KEY`) when launching Claude Code. This prevents conflicts with Claude Code’s own subscription login mechanism, which uses `ANTHROPIC_API_KEY` internally.
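That selection amounts to something like the following hypothetical sketch:

```python
def claude_launch_env(profile: dict, credential: str) -> dict:
    """Choose which env var carries the credential when launching Claude Code (sketch)."""
    if profile.get("auth_type") == "oauth":
        # OAuth tokens go in ANTHROPIC_AUTH_TOKEN to avoid clashing with
        # Claude Code's own handling of ANTHROPIC_API_KEY
        return {"ANTHROPIC_AUTH_TOKEN": credential}
    return {"ANTHROPIC_API_KEY": credential}
```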
The proxy automatically refreshes OAuth tokens before they expire. You can also manually refresh with:
```sh
claudex auth refresh openai
```

Model Slot Mapping
Claude Code has a built-in `/model` switcher with three slots: `haiku`, `sonnet`, and `opus`. By default, these map to Anthropic models, but with Claudex you can map them to any provider’s models.
Configuration
Add a `[profiles.models]` table to any profile:
[[profiles]]name = "grok"provider_type = "OpenAICompatible"base_url = "https://api.x.ai/v1"api_key = "xai-..."default_model = "grok-3-beta"
[profiles.models]haiku = "grok-3-mini-beta"sonnet = "grok-3-beta"opus = "grok-3-beta"When you type /model sonnet inside Claude Code, Claudex translates the request to use grok-3-beta. The /model opus command maps to grok-3-beta, and so on.
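The slot translation itself is essentially a lookup. A hypothetical sketch (Claude Code may send either a bare slot name or a full Anthropic model id containing it; exact matching rules are an assumption here):

```python
def resolve_slot(requested_model: str, slots: dict, default_model: str) -> str:
    """Map a Claude Code slot name to the configured provider model (sketch)."""
    for slot, target in slots.items():
        # Match the bare slot name or a full model id like "claude-haiku-4-..."
        if requested_model == slot or slot in requested_model:
            return target
    return default_model
```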
Examples for Common Providers
Section titled “Examples for Common Providers”# OpenAI model mapping[profiles.models]haiku = "gpt-4o-mini"sonnet = "gpt-4o"opus = "o1"
# DeepSeek model mapping[profiles.models]haiku = "deepseek-chat"sonnet = "deepseek-chat"opus = "deepseek-reasoner"
# Google Gemini model mapping[profiles.models]haiku = "gemini-2.0-flash"sonnet = "gemini-2.5-pro"opus = "gemini-2.5-pro"Tool Name Compatibility
Some providers (notably OpenAI) enforce a 64-character limit on tool (function) names. Claude Code can generate tool names that exceed this limit.
Claudex automatically truncates tool names longer than 64 characters when sending requests to OpenAI-compatible providers and transparently restores the original names when processing responses. This roundtrip is fully transparent — no configuration needed.
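The idea behind the roundtrip can be sketched like this. This is a hypothetical, simplified version; a real implementation would also have to keep truncated names unique:

```python
MAX_TOOL_NAME = 64

def truncate_tool_names(names: list[str]) -> tuple[list[str], dict]:
    """Shorten over-long tool names and remember the mapping back (sketch)."""
    mapping = {}
    out = []
    for name in names:
        short = name[:MAX_TOOL_NAME]
        if short != name:
            mapping[short] = name  # remember how to restore it
        out.append(short)
    return out, mapping

def restore_tool_name(name: str, mapping: dict) -> str:
    """Recover the original name when processing the provider's response."""
    return mapping.get(name, name)
```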
Non-Interactive Mode
Claudex supports one-shot (non-interactive) execution for use in CI/CD pipelines, scripts, and automation:
```sh
# Print response and exit
claudex run grok "Explain this codebase" --print

# Skip all permission prompts (for fully automated pipelines)
claudex run grok "Fix lint errors" --print --dangerously-skip-permissions
```

In non-interactive mode, logs are written to per-instance log files at `~/Library/Caches/claudex/proxy-{timestamp}-{pid}.log` instead of stderr, keeping stdout clean for piping and automation.
Per-Instance Log Files
Each Claudex proxy instance writes logs to its own file:
```
~/Library/Caches/claudex/proxy-{timestamp}-{pid}.log
```

This avoids log interleaving when running multiple instances and keeps stderr clean during `claudex run` (especially in non-interactive mode). Logs include proxy startup, request translation, OAuth token refresh events, and circuit breaker state changes.
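The naming scheme can be sketched as follows (hypothetical; the cache directory shown above is the macOS location and differs by platform):

```python
import os
import time
from pathlib import Path

def instance_log_path(cache_dir: Path) -> Path:
    """One log file per proxy instance, keyed by start time and PID (sketch)."""
    return cache_dir / f"proxy-{int(time.time())}-{os.getpid()}.log"
```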
Full Example
See `config.example.toml` for a complete configuration file with all providers and options.
For step-by-step setup instructions for each provider (including API key links and OAuth flows), see the Provider Setup Guide.