codex-helper (Codex CLI Local Helper / Proxy)
Put Codex / Claude Code behind a small local “bumper”:
centralize all your relays / keys / quotas, auto-switch when an upstream is exhausted or failing, and get handy CLI helpers for sessions, filtering, and diagnostics.
Chinese documentation: README.md
Why codex-helper?
codex-helper is a good fit if any of these sound familiar:
- You’re tired of hand-editing `~/.codex/config.toml`: changing `model_provider` / `base_url` by hand is easy to break and annoying to restore.
- You juggle multiple relays / keys and switch often: you’d like OpenAI / Packy / your own relays managed in one place, and a single command to select the “current” one.
- You discover exhausted quotas only after 401/429s: you’d prefer “auto-switch to a backup upstream when quota is exhausted” instead of debugging failures.
- You want a CLI way to quickly resume Codex sessions: for example, “show me the last session for this project and give me `codex resume <ID>`.”
- You want a local layer for redaction + logging: requests go through a filter first, and all traffic is logged to a JSONL file for analysis and troubleshooting.
Quick Start (TL;DR)
1. Install (recommended: cargo-binstall)
This installs codex-helper and ch into your Cargo bin directory (usually ~/.cargo/bin).
Make sure that directory is on your PATH so you can run them from anywhere.
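If you use cargo-binstall, the install is a one-liner (assuming the published crate name matches the binary name, `codex-helper`):

```bash
# Install prebuilt binaries via cargo-binstall
# (assumes the crate is published as `codex-helper`)
cargo binstall codex-helper
```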
Prefer building from source?
Run `cargo build --release` and use `target/release/codex-helper` / `target/release/ch`.
2. One-command helper for Codex (recommended)
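Both the full binary and the short alias start the helper in its default Codex mode:

```bash
codex-helper
# or shorter:
ch
```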
This will:
- Start a Codex proxy on `127.0.0.1:3211`;
- Guard and, if needed, rewrite `~/.codex/config.toml` to point Codex at the local proxy (backing up the original config on first run);
- If `~/.codex-helper/config.json` is still empty, bootstrap a default upstream from `~/.codex/config.toml` + `auth.json`;
- On Ctrl+C, attempt to restore the original Codex config from the backup.
After that, you keep using your usual codex ... commands; codex-helper just sits in the middle.
3. Optional: switch the default target to Claude (experimental)
By default, commands assume Codex. If you primarily use Claude Code, you can flip the default; see the note at the end of this section.
After this:
- `codex-helper serve` (without flags) will start a Claude proxy on `127.0.0.1:3210`;
- `codex-helper config list` / `add` / `set-active` (without `--codex` / `--claude`) will operate on Claude configs.
You can always check the current default with the status command (see “Status & doctor” in the cheatsheet below).
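The default itself is recorded in `~/.codex-helper/config.json` as the `default_service` field (see the config structure later in this README). Assuming the field simply holds the service name, the relevant fragment of a Claude-first setup would look like:

```json
{
  "default_service": "claude"
}
```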
Common configuration: multi-upstream failover
The most common and powerful way to use codex-helper is to let it fail over between multiple upstreams automatically when one is failing or out of quota.
The key idea: put your primary and backup upstreams in the same config’s upstreams array, instead of as separate configs.
Example ~/.codex-helper/config.json:
{
"version": 1,
"codex": {
"active": "codex-main",
"configs": {
"codex-main": {
"name": "codex-main",
"alias": null,
"upstreams": [
{
"base_url": "https://codex-api.packycode.com/v1",
"auth": { "auth_token": "sk-packy-..." },
"tags": { "provider_id": "packycode", "source": "codex-config" }
},
{
"base_url": "https://co.yes.vg/v1",
"auth": { "auth_token": "cr-..." },
"tags": { "provider_id": "yes", "source": "codex-config" }
}
]
}
}
},
"claude": { "active": null, "configs": {} },
"default_service": null
}
With this layout:
active = "codex-main"→ the load balancer chooses betweenupstreams[0](Packy) andupstreams[1](Yes);- when an upstream either:
- exceeds the failure threshold (
FAILURE_THRESHOLDinsrc/lb.rs:6), or - is marked
usage_exhausted = truebyusage_providers, the LB will prefer the other upstream whenever possible.
- exceeds the failure threshold (
Command cheatsheet
Daily use
- Start the Codex helper (recommended): `codex-helper` / `ch`
- Explicit Codex / Claude proxy:
  - `codex-helper serve` (Codex, default port 3211)
  - `codex-helper serve --codex`
  - `codex-helper serve --claude` (Claude, default port 3210)
Turn Codex / Claude on/off via local proxy
- Switch Codex / Claude to the local proxy
- Restore original configs from backup
- Inspect the current switch status
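A minimal sketch of these commands; `switch on` / `switch off` are referenced later in this README, while `switch status` is an assumed name for the inspection form:

```bash
# Point Codex at the local proxy (rewrites ~/.codex/config.toml, with a backup)
codex-helper switch on

# Restore the original config from the backup
codex-helper switch off

# Inspect the current switch state (assumed subcommand name)
codex-helper switch status
```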
Manage upstream configs (providers / relays)
- List configs (defaults to Codex; can target Claude explicitly)
- Add a new config (Codex, or Claude as experimental)
- Set the active config
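A hedged sketch of the `config` subcommands named above; the `--claude` flag appears elsewhere in this README, while the `config add` flag names are assumptions (check the command’s `--help` output for the real flags):

```bash
# List configs (Codex by default; target Claude explicitly with --claude)
codex-helper config list
codex-helper config list --claude

# Add a new config (flag names below are assumptions)
codex-helper config add my-relay --base-url https://relay.example.com/v1 --auth-token sk-...

# Set the active config by name
codex-helper config set-active my-relay
```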
Sessions, usage, diagnostics
- Session helpers (Codex)
- Usage & logs
- Status & doctor (JSON output for scripts / UI integration)
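Only `status` and `doctor` are hinted at by name above; treat the spellings as assumptions and use the built-in help for the authoritative list of session, usage, and log commands:

```bash
# Health / diagnostics (names inferred from the "Status & doctor" bullet; JSON
# output is intended for scripts and UI integration)
codex-helper status
codex-helper doctor

# Authoritative list of subcommands
codex-helper --help
```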
Example workflows
Scenario 1: Manage multiple relays / keys and switch quickly
1. Add configs for the different providers.
2. Select which config is active.
3. Point Codex at the local proxy (once).
4. Start the proxy with the current active config.
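A sketch of those four steps; `config set-active`, `switch on`, and `serve` are named elsewhere in this README, while the `config add` flags are assumptions:

```bash
# 1. Add configs for different providers (flag names are assumptions)
codex-helper config add openai-main  --base-url https://api.openai.com/v1 --auth-token sk-...
codex-helper config add packy-backup --base-url https://codex-api.packycode.com/v1 --auth-token sk-packy-...

# 2. Select which config is active
codex-helper config set-active openai-main

# 3. Point Codex at the local proxy (once)
codex-helper switch on

# 4. Start the proxy with the current active config
codex-helper serve
```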
Scenario 2: Resume Codex sessions by project
You can also query sessions for any directory without cd’ing into it first.
This is especially handy when juggling multiple side projects: you don’t need to remember session IDs; just tell codex-helper which directory you care about and it will find the most relevant sessions and suggest `codex resume <ID>`.
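A hypothetical lookup for a specific project directory; the `sessions` subcommand and `--dir` flag are assumed names, and only the suggested `codex resume <ID>` is described above:

```bash
# List recent sessions for a project you are not currently in
# (subcommand and flag names are assumptions)
codex-helper sessions --dir ~/code/my-side-project

# Resume the suggested session with the Codex CLI
codex resume <ID>
```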
Advanced configuration (optional)
Most users do not need to touch these. If you want deeper customization, these files are relevant:
- Main config: `~/.codex-helper/config.json`
- Filter rules: `~/.codex-helper/filter.json`
- Usage providers: `~/.codex-helper/usage_providers.json`
- Request logs: `~/.codex-helper/logs/requests.jsonl`
Codex official files:
- `~/.codex/auth.json`: managed by `codex login`; codex-helper only reads it.
- `~/.codex/config.toml`: managed by the Codex CLI; codex-helper touches it only via `switch on` / `switch off`.
config.json structure (brief)
{
"codex": {
"active": "openai-main",
"configs": {
"openai-main": {
"name": "openai-main",
"alias": "Main OpenAI quota",
"upstreams": [
{
"base_url": "https://api.openai.com/v1",
"auth": {
"auth_token": "sk-...",
"api_key": null
},
"tags": {
"source": "codex-config",
"provider_id": "openai"
}
}
]
}
}
}
}
Key ideas:
- `active`: the name of the currently active config;
- `configs`: a map of named configs;
- each `upstream` is one endpoint, ordered by priority (primary → backups).
usage_providers.json
Path: ~/.codex-helper/usage_providers.json. If it does not exist, codex-helper will write a default file similar to:
{
"providers": [
{
"id": "packycode",
"kind": "budget_http_json",
"domains": ["packycode.com"],
"endpoint": "https://www.packycode.com/api/backend/users/info",
"token_env": null,
"poll_interval_secs": 60
}
]
}
For budget_http_json:
- up-to-date usage is obtained by calling `endpoint` with a Bearer token (from `token_env` or the associated upstream’s `auth_token`);
- the response is inspected for fields like `monthly_budget_usd` / `monthly_spent_usd` to decide whether the quota is exhausted;
- associated upstreams are then marked `usage_exhausted = true` in LB state; when possible, the LB avoids these upstreams.
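To see what the provider poll is working with, you can hit the same endpoint by hand; a sketch using curl and jq (the two field names come from the list above, and their exact nesting in the real response may differ):

```bash
# Manually query the PackyCode usage endpoint with the upstream's token
curl -s https://www.packycode.com/api/backend/users/info \
  -H "Authorization: Bearer sk-packy-..." \
  | jq '{monthly_budget_usd, monthly_spent_usd}'   # nesting may differ
```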
Filtering & logging
- Filter rules: `~/.codex-helper/filter.json`, e.g.:

  [
    { "op": "replace", "source": "your-company.com", "target": "[REDACTED_DOMAIN]" },
    { "op": "remove", "source": "super-secret-token" }
  ]

  Filters are applied to the request body before sending it upstream; rules are reloaded based on file mtime.
- Logs: `~/.codex-helper/logs/requests.jsonl`; each line is a JSON object like:

  {
    "timestamp_ms": 1730000000000,
    "service": "codex",
    "method": "POST",
    "path": "/v1/responses",
    "status_code": 200,
    "duration_ms": 1234,
    "config_name": "openai-main",
    "upstream_base_url": "https://api.openai.com/v1",
    "usage": {
      "input_tokens": 123,
      "output_tokens": 456,
      "reasoning_tokens": 0,
      "total_tokens": 579
    }
  }
These fields form a stable contract: future versions will only add fields, not remove or rename existing ones, so you can safely build scripts and dashboards on top of them.
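For example, because `config_name` and `usage.total_tokens` are part of that contract, a small jq query (assuming jq is installed) can total tokens per config straight from the log file:

```bash
# Sum total_tokens per config_name across all logged requests
jq -s 'group_by(.config_name)
       | map({config: .[0].config_name,
              total_tokens: (map(.usage.total_tokens // 0) | add)})' \
   ~/.codex-helper/logs/requests.jsonl
```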
Relationship to cli_proxy and cc-switch
- cli_proxy: a multi-service daemon + Web UI with a broader scope (Codex, Claude, etc.) and centralized monitoring.
- cc-switch: a desktop GUI provider / MCP manager focused on “manage configs in one place, apply to many clients”.
codex-helper takes inspiration from both, but stays deliberately lightweight:
- focused on Codex CLI (with experimental Claude support);
- single binary, no daemon, no Web UI;
- designed to be a small CLI companion you can run ad hoc, or embed into your own scripts and tooling.