# codexia
Rust gateway that logs in with OpenAI Codex OAuth and exposes OpenAI- and Anthropic-compatible APIs.
## Usage
```shell
# later, update to the latest published release
```
`codexia login` prints the Codex OAuth URL. Complete the login in a browser, then paste
the full redirected URL from the browser address bar, for example
`http://localhost:1455/auth/callback?code=...&state=...`. This matches
OpenClaw's remote/headless fallback and does not require the gateway host to be
reachable from the public internet.
OpenAI-compatible chat request:
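A minimal sketch, assuming the gateway listens on `127.0.0.1:14550` with local key `local-secret` (the values used in the Claude Code examples below; adjust to your setup):

```shell
curl -s http://127.0.0.1:14550/v1/chat/completions \
  -H "Authorization: Bearer local-secret" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-5.5",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```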
Anthropic-compatible Messages request:
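A matching sketch against the Messages endpoint; the `anthropic-version` value is the standard Anthropic header, and `max_tokens` is required by the Messages request shape:

```shell
curl -s http://127.0.0.1:14550/v1/messages \
  -H "x-api-key: local-secret" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-5.5",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```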
Claude Code / Anthropic SDK setup:
`ANTHROPIC_BASE_URL` should point at the Codexia server root, not `/v1`,
because Anthropic clients append `/v1/messages` themselves.
Minimal Claude Code flow:
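An interactive sketch, assuming the `claude` CLI is on `PATH` and the gateway was started with `--api-key local-secret`:

```shell
export ANTHROPIC_BASE_URL=http://127.0.0.1:14550
export ANTHROPIC_AUTH_TOKEN=local-secret
export ANTHROPIC_MODEL="gpt-5.5"
claude
```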
For non-interactive validation, this works:
```shell
ANTHROPIC_BASE_URL=http://127.0.0.1:14550 \
ANTHROPIC_AUTH_TOKEN=local-secret \
ANTHROPIC_MODEL="gpt-5.5" \
```
If Claude Code or its explore/sub-agent path still emits Anthropic-native model
ids such as `claude-sonnet-*`, enable a local fallback:

```shell
CODEXIA_MODEL_FALLBACK=gpt-5.5
```
Codexia then rewrites known unsupported Anthropic model ids to the configured fallback before calling Codex.
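Conceptually, the rewrite behaves like this shell sketch (illustrative only; Codexia's actual implementation is in Rust, and its list of unsupported ids may differ):

```shell
CODEXIA_MODEL_FALLBACK=gpt-5.5

# Rewrite Anthropic-native model ids to the configured fallback;
# pass Codex model ids through unchanged.
rewrite_model() {
  case "$1" in
    claude-*) echo "$CODEXIA_MODEL_FALLBACK" ;;
    *)        echo "$1" ;;
  esac
}

rewrite_model claude-sonnet-4   # → gpt-5.5
rewrite_model gpt-5.5           # → gpt-5.5
```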
Common pitfalls:
- Do not set `ANTHROPIC_BASE_URL` to `http://127.0.0.1:14550/v1`; Claude Code appends `/v1/messages` itself.
- Use a model that `/v1/models` actually returns, such as `gpt-5.5`. If Claude Code defaults to `claude-sonnet-*`, the request will fail because Codexia proxies Codex models, not Anthropic-hosted model IDs.
- `ANTHROPIC_AUTH_TOKEN` is only the local gateway key configured with `--api-key`; it is not your upstream OpenAI/Codex OAuth token.
- If you prefer a background service, install the daemon first and then point `ANTHROPIC_BASE_URL` at the daemon address instead of running `codexia serve` manually.
Optional local API key protection:
```shell
CODEXIA_API_KEY=local-secret
```
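With the key set, requests must carry it as a bearer token (or as `x-api-key` on the Anthropic surface); for example:

```shell
curl -s http://127.0.0.1:14550/v1/models \
  -H "Authorization: Bearer local-secret"
```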
You can combine it with the model fallback when running Claude Code against the gateway:
```shell
CODEXIA_API_KEY=local-secret \
CODEXIA_MODEL_FALLBACK=gpt-5.5 \
```
Interactive runtime configuration:
The config file is stored at `~/.codexia/config.json` by default and is used as
the fallback source for `codexia serve` and `codexia daemon install`.
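`model_fallback` is a documented config value (see below); the exact file schema is not reproduced in this README, so treat this shape as an illustrative sketch:

```json
{
  "api_key": "local-secret",
  "model_fallback": "gpt-5.5"
}
```

The `api_key` key mirrors `--api-key`/`CODEXIA_API_KEY` and is an assumption about the stored field name.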
Refresh the stored Codex OAuth token while the server is running:
Check token expiry, account metadata, and rate-limit windows:
Fetch the same status data over HTTP:
Example response:
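The exact schema is not documented here; an illustrative shape covering the fields mentioned above (token expiry, account metadata, rate-limit windows) could look like:

```json
{
  "token_expires_at": "2026-01-01T00:00:00Z",
  "account": {
    "email": "user@example.com",
    "plan": "plus"
  },
  "rate_limits": {
    "primary": { "used_percent": 12.5, "resets_at": "2026-01-01T05:00:00Z" }
  }
}
```

All field names and values here are hypothetical placeholders, not Codexia's actual response schema.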
Install Codexia as a per-user background daemon:
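The install subcommand referenced in the config section above is:

```shell
codexia daemon install
```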
On macOS, Codexia installs a LaunchAgent at
`~/Library/LaunchAgents/com.codexia.daemon.plist`. On Linux, it installs a
systemd user unit at `~/.config/systemd/user/codexia.service`.
Windows does not currently implement native daemon/service management; use WSL
and run the Linux build there if you need `codexia daemon` commands.
On Linux, inspect the per-user service with:
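Assuming the unit name `codexia.service` from the path above, the standard systemd user-session commands are:

```shell
systemctl --user status codexia.service
journalctl --user -u codexia.service -f
```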
The daemon runs `codexia serve` with the options passed at install time:
Models returned by `/v1/models` default to OpenClaw's `openai-codex` registry:
- `gpt-5.1`
- `gpt-5.1-codex-max`
- `gpt-5.1-codex-mini`
- `gpt-5.2`
- `gpt-5.2-codex`
- `gpt-5.3-codex`
- `gpt-5.3-codex-spark`
- `gpt-5.4`
- `gpt-5.4-mini`
- `gpt-5.5`
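For example, listing just the model ids (reusing the gateway address and local key from the examples above):

```shell
curl -s http://127.0.0.1:14550/v1/models \
  -H "Authorization: Bearer local-secret" | jq -r '.data[].id'
```

This assumes the OpenAI-style list shape (`{"object": "list", "data": [{"id": ...}, ...]}`).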
Credentials are stored at `~/.codexia/auth.json` by default. Override with
`--auth-file`, `CODEXIA_AUTH_FILE`, or `CODEXIA_HOME`.
Runtime config also supports an optional `model_fallback` value, and the CLI
accepts `--model-fallback` / `CODEXIA_MODEL_FALLBACK`.
OpenAI compatibility currently covers:
- `GET /v1/models`
- `POST /v1/chat/completions`
- `POST /v1/responses`
- `POST /v1/images/generations`
- `POST /v1/responses/compact`
- `POST /v1/responses/input_tokens`
On `POST /v1/chat/completions`, Codexia accepts common OpenAI compatibility
fields such as `temperature`, `max_tokens`, `max_completion_tokens`, and
`max_output_tokens`, but the current Codex upstream rejects those parameters.
Codexia therefore accepts them without error and omits them from the upstream
Codex request; treat them as compatibility no-ops rather than effective
sampling or output-length controls.
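For example, this request is accepted, but the sampling and length fields are stripped before the upstream Codex call:

```json
{
  "model": "gpt-5.5",
  "temperature": 0.2,
  "max_tokens": 512,
  "messages": [{"role": "user", "content": "hi"}]
}
```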
`/v1/responses` currently supports `previous_response_id` only as an
in-memory continuation mechanism within the same running Codexia process. It is
not exposed as a public retrievable/deletable response resource, and it should
not be treated as durable storage across daemon restarts or process exits.
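A follow-up request therefore only works against the same process that produced the id; the id value here is a placeholder:

```json
{
  "model": "gpt-5.5",
  "previous_response_id": "resp_abc123",
  "input": "Expand on your previous answer."
}
```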
Image generation is exposed in two compatibility shapes:
- OpenAI-style `POST /v1/images/generations`
- the OpenAI Responses hosted tool `{"type":"image_generation"}`
Current image-generation caveats:
- OpenAI `POST /v1/responses` supports streaming image-generation events
- `POST /v1/images/generations` remains non-streaming
- Anthropic `POST /v1/messages` image-generation streaming is exposed as a Codexia extension that emits `image` content blocks only once the upstream response completes
- generated images are returned as base64 payloads
- Anthropic compatibility uses a Codexia extension that returns `content: [{"type":"image","source":{"type":"base64",...}}]` on `POST /v1/messages` when the request includes a tool named `image_generation`
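A sketch of such a request; the exact tool declaration shape Codexia expects is not fully specified here, so this assumes an Anthropic-style custom tool entry whose `name` is `image_generation`:

```json
{
  "model": "gpt-5.5",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "image_generation",
      "description": "Generate an image from a text prompt",
      "input_schema": { "type": "object" }
    }
  ],
  "messages": [{"role": "user", "content": "Draw a small red square"}]
}
```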
Anthropic compatibility currently covers:
- `GET /v1/models` with an `anthropic-version` header
- `POST /v1/messages`
- `POST /v1/messages/count_tokens`
- `POST /v1/messages/batches`
- `GET /v1/messages/batches`
- `GET /v1/messages/batches/{batch_id}`
- `POST /v1/messages/batches/{batch_id}/cancel`
- `DELETE /v1/messages/batches/{batch_id}`
- `GET /v1/messages/batches/{batch_id}/results`
- `x-api-key` or `authorization: Bearer ...` local auth
- Anthropic-style SSE events for streaming text and tool use
Message batches execute asynchronously in a background task. Cancellation is
best-effort at request boundaries inside the batch worker: requests that have
already started are allowed to finish, while not-yet-started requests are
marked as canceled.
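A sketch using the Anthropic batch request shape (address and key reuse values from earlier examples; `<batch_id>` comes from the create response):

```shell
# Submit a batch of one request.
curl -s http://127.0.0.1:14550/v1/messages/batches \
  -H "x-api-key: local-secret" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
        "requests": [
          {
            "custom_id": "r1",
            "params": {
              "model": "gpt-5.5",
              "max_tokens": 64,
              "messages": [{"role": "user", "content": "hi"}]
            }
          }
        ]
      }'

# Best-effort cancel: in-flight requests finish, queued ones are marked canceled.
curl -s -X POST http://127.0.0.1:14550/v1/messages/batches/<batch_id>/cancel \
  -H "x-api-key: local-secret" \
  -H "anthropic-version: 2023-06-01"
```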
The implementation intentionally follows Ollama's compatibility strategy: Anthropic headers are accepted, locally configured auth is enforced, and unsupported advanced Anthropic-only features are ignored rather than rejected where possible.
The OAuth flow follows OpenClaw/pi-ai's Codex flow: PKCE, manual paste of the
`http://localhost:1455/auth/callback?...` redirect URL, token exchange at
`https://auth.openai.com/oauth/token`, and Codex requests to
`https://chatgpt.com/backend-api/codex/responses`.
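The PKCE pair itself is the standard RFC 7636 S256 construction; a shell sketch of how a verifier/challenge pair is derived (illustrative, not Codexia's code):

```shell
# RFC 7636 PKCE: random verifier, challenge = BASE64URL(SHA256(verifier)).
code_verifier=$(openssl rand -base64 48 | tr -d '\n' | tr '+/' '-_' | tr -d '=')
code_challenge=$(printf '%s' "$code_verifier" \
  | openssl dgst -sha256 -binary | base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

# The challenge goes into the authorize URL; the verifier is sent later
# with the token exchange at https://auth.openai.com/oauth/token.
echo "$code_challenge"
```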
## Disclaimer
Codexia is an unofficial compatibility tool. It is not affiliated with, endorsed by, or supported by OpenAI or Anthropic.
You are responsible for making sure your usage complies with the terms, policies, account restrictions, and data-handling obligations that apply to your upstream account and deployment environment. In particular, do not assume that personal OAuth-backed access can be shared, resold, or safely exposed as a multi-user hosted service. The LGPLv3 license for this repository does not change those upstream restrictions.
## License
Copyright (c) 2026 Codexia contributors.
Licensed under the GNU Lesser General Public License v3.0 only. See LICENSE.