# codex-cli-sdk

Strongly typed, async-first Rust SDK for building agents on the OpenAI Codex CLI.
## Features

- One-shot queries — `query()` sends a prompt and returns a collected `Turn`
- Streaming — `query_stream()` / `thread.run_streamed()` yield events as they arrive
- Multi-turn threads — `start_thread()` / `resume_thread()` for persistent conversations
- Item lifecycle events — structured `ThreadEvent` stream (`ItemStarted` → `ItemUpdated` → `ItemCompleted`)
- Approval callbacks — `ApprovalCallback` for programmatic per-command and per-patch approval
- Sandbox control — `SandboxPolicy` from `Restricted` to `DangerFullAccess`
- Reasoning effort — `ReasoningEffort` from `Minimal` to `XHigh`
- Web search — `WebSearchMode` (`Disabled` / `Cached` / `Live`)
- Structured output — `OutputSchema` (inline JSON or file path) for typed responses
- Local providers — point at lmstudio, ollama, or other local models
- Multimodal input — attach image files to any prompt
- Config overrides — pass flat or nested JSON overrides to the CLI via `-c key=val`
- Stderr callback — capture CLI debug output for logging/diagnostics
- Testing framework — `MockTransport`, event builders, and fixtures for unit tests without a live CLI
- Cross-platform — macOS, Linux, and Windows
## Quick Start

```toml
[dependencies]
codex-cli-sdk = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```

```rust
use codex_cli_sdk::query;

// Shown schematically; see the crate docs for exact signatures.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let turn = query("Summarize the files in this directory").await?;
    println!("{:?}", turn);
    Ok(())
}
```
## Architecture

```text
                       CodexConfig
                (cli_path, env, overrides)
                            │
                            ▼
┌─────────────┐    ┌─────────────────┐    ┌─────────────────┐    ┌────────────┐
│  Your Code  │───▶│      Codex      │───▶│  CliTransport   │───▶│  Codex CLI │
│             │    │                 │    │   (or Mock)     │◀───│            │
│  query()    │    │ start_thread()  │    │  stdin/stdout   │    │   JSONL    │
│  query_     │    │ resume_thread() │    └─────────────────┘    │   stdio    │
│  stream()   │    └────────┬────────┘                           └────────────┘
└─────────────┘             │
                            ▼
              Thread::run()          → Turn
              Thread::run_streamed() → StreamedTurn (yields ThreadEvents)
```
## Examples

All examples require a working Codex CLI installation (`npm install -g @openai/codex`).

| Example | Feature | Run |
|---|---|---|
| `01_basic_query` | One-shot query + token usage | `cargo run --example 01_basic_query` |
| `02_streaming` | Real-time event stream + item lifecycle | `cargo run --example 02_streaming` |
| `03_multi_turn` | Thread resumption across turns | `cargo run --example 03_multi_turn` |
| `04_approval` | Exec approval callbacks + all four decisions | `cargo run --example 04_approval` |
| `05_sandbox` | Sandbox policies, extra dirs, network access | `cargo run --example 05_sandbox` |
| `06_structured_output` | JSON Schema output + typed deserialization | `cargo run --example 06_structured_output` |
| `07_local_provider` | lmstudio / ollama + reasoning effort | `cargo run --example 07_local_provider` |
| `08_cancellation` | `CancellationToken` mid-stream abort | `cargo run --example 08_cancellation` |
## Core API

### One-shot query

```rust
use codex_cli_sdk::query;

// Accessor names are illustrative; see the crate docs for exact signatures.
let turn = query("What does this project do?").await?;
println!("{}", turn.last_agent_message().unwrap_or_default());
println!("{:?}", turn.usage());
```
### Streaming

```rust
use codex_cli_sdk::query_stream;
use futures::StreamExt; // StreamExt source is illustrative; the SDK may re-export one

let mut stream = query_stream("Summarize this repository").await?;
while let Some(event) = stream.next().await {
    println!("{:?}", event?);
}
```
### Multi-turn threads

```rust
use codex_cli_sdk::{Codex, CodexConfig};

// Constructors and arguments shown schematically; see the crate docs.
let codex = Codex::new(CodexConfig::default())?;

// First turn
let mut thread = codex.start_thread(Default::default());
let turn = thread.run("Create a TODO app").await?;
let thread_id = thread.id().unwrap();

// Resume in a later session
let mut thread = codex.resume_thread(&thread_id);
let turn = thread.run("Add tests").await?;
```
### Approval callbacks

```rust
use std::sync::Arc;
use codex_cli_sdk::{ApprovalCallback, ApprovalPolicy, Codex, CodexConfig};

// Types and constructors shown schematically; see the crate docs for exact signatures.
let options = ThreadOptions::builder()
    .approval(ApprovalPolicy::OnRequest) // policy name illustrative
    .build();

let callback: ApprovalCallback = Arc::new(|request| {
    // Inspect the pending command or patch and return one of the four decisions.
    todo!("return an approval decision")
});

let codex = Codex::new(CodexConfig::default())?;
let mut thread = codex
    .start_thread(options)
    .with_approval_callback(callback);
let turn = thread.run("Run the test suite").await?;
```
## Event Model

`run_streamed()` yields a sequence of `ThreadEvent`s following this lifecycle:

```text
ThreadStarted
└─ TurnStarted
   ├─ ItemStarted   { item: AgentMessage | CommandExecution | Reasoning | ... }
   │  ItemUpdated   (streaming text deltas)
   │  ItemCompleted
   │
   ├─ ApprovalRequest  ←── responded to via ApprovalCallback
   └─ TurnCompleted { usage }
```
### ThreadItem variants

| Variant | Description |
|---|---|
| `AgentMessage` | Text output from the agent (streams via `ItemUpdated`) |
| `Reasoning` | Extended reasoning/thinking block |
| `CommandExecution` | Shell command run by the agent, with output and exit code |
| `FileChange` | File creation, modification, or deletion (patch) |
| `McpToolCall` | MCP tool invocation with server, tool name, args, and result |
| `WebSearch` | Web search query |
| `TodoList` | Agent-managed task list |
| `Error` | Error item from the CLI |
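To make the variant table concrete, here is a small, self-contained sketch. The enum below is a local stand-in for the SDK's `ThreadItem` (the real variants carry richer payloads); it shows one way a consumer might fold completed items into a printable transcript:

```rust
// Local stand-in for the SDK's ThreadItem; the real type carries richer payloads.
#[derive(Debug)]
enum ThreadItem {
    AgentMessage { text: String },
    CommandExecution { command: String, exit_code: i32 },
    Reasoning { summary: String },
}

// Collect only the agent-visible text from a finished turn's items.
fn transcript(items: &[ThreadItem]) -> String {
    items
        .iter()
        .filter_map(|item| match item {
            ThreadItem::AgentMessage { text } => Some(text.clone()),
            // Surface failing commands so the caller can react to them.
            ThreadItem::CommandExecution { command, exit_code } if *exit_code != 0 => {
                Some(format!("[command `{command}` failed with {exit_code}]"))
            }
            // Reasoning, successful commands, etc. are dropped from the transcript.
            _ => None,
        })
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let items = vec![
        ThreadItem::Reasoning { summary: "plan".into() },
        ThreadItem::CommandExecution { command: "cargo test".into(), exit_code: 0 },
        ThreadItem::AgentMessage { text: "All tests pass.".into() },
    ];
    println!("{}", transcript(&items)); // prints: All tests pass.
}
```

The same filter-and-fold shape works over a live `ItemCompleted` stream: match on the variants you care about and ignore the rest.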
## Advanced Features

### Sandbox policy

```rust
use codex_cli_sdk::SandboxPolicy;

// Builder shown schematically; see the crate docs for exact construction.
let options = ThreadOptions::builder()
    .sandbox(SandboxPolicy::Restricted)           // read-only
    // .sandbox(SandboxPolicy::WorkspaceWrite)    // default — workspace writable
    // .sandbox(SandboxPolicy::DangerFullAccess)  // no restrictions
    .build();
```
### Reasoning effort

```rust
use codex_cli_sdk::ReasoningEffort;

let options = ThreadOptions::builder()
    .reasoning_effort(ReasoningEffort::High)
    .build();
```

Available levels: `Minimal`, `Low`, `Medium`, `High`, `XHigh`.
### Structured output

```rust
use codex_cli_sdk::OutputSchema;

// Constructor shown schematically; OutputSchema accepts inline JSON or a file path.
let schema = r#"{ "type": "object", "properties": { "answer": { "type": "string" } } }"#;
let options = ThreadOptions::builder()
    .output_schema(OutputSchema::inline(schema))
    .build();
```

An inline schema is automatically written to a temp file and cleaned up after the turn.
### Web search

```rust
use codex_cli_sdk::WebSearchMode;

let options = ThreadOptions::builder()
    .web_search(WebSearchMode::Live)
    .build();
```
### Local providers

```rust
let options = ThreadOptions::builder()
    .local_provider("lmstudio")  // or "ollama"
    .model("qwen2.5-coder")      // model name is illustrative
    .build();
```
### Config overrides

```rust
use codex_cli_sdk::ConfigOverrides;

// Construction shown schematically; `nested_overrides` and `flat_overrides`
// stand for ConfigOverrides values built per the crate docs.

// Nested JSON — auto-flattened to dot-notation `-c key=val` pairs
let config = CodexConfig::builder()
    .config_overrides(nested_overrides)
    .build();

// Or flat key-value pairs
let config = CodexConfig::builder()
    .config_overrides(flat_overrides)
    .build();
```
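The nested-to-flat conversion can be sketched with a std-only stand-in for a JSON value (the key names below are illustrative, not actual Codex config keys, and the real SDK presumably operates on a proper JSON type):

```rust
// Minimal JSON-like value, standing in for a real JSON type.
enum Val {
    Str(String),
    Obj(Vec<(String, Val)>),
}

// Flatten nested objects into dot-notation `key=val` pairs for `-c` arguments.
fn flatten(prefix: &str, v: &Val, out: &mut Vec<String>) {
    match v {
        Val::Str(s) => out.push(format!("{prefix}={s}")),
        Val::Obj(fields) => {
            for (k, child) in fields {
                // Extend the dotted path as we descend.
                let key = if prefix.is_empty() { k.clone() } else { format!("{prefix}.{k}") };
                flatten(&key, child, out);
            }
        }
    }
}

fn main() {
    // { "model": { "provider": "lmstudio" } } — illustrative keys only.
    let cfg = Val::Obj(vec![(
        "model".into(),
        Val::Obj(vec![("provider".into(), Val::Str("lmstudio".into()))]),
    )]);
    let mut args = Vec::new();
    flatten("", &cfg, &mut args);
    for a in &args {
        println!("-c {a}"); // prints: -c model.provider=lmstudio
    }
}
```

Flat key-value pairs are simply the already-flattened form of the same structure, so both inputs end up as identical `-c key=val` arguments on the CLI command line.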
### Images (multimodal)

```rust
use std::path::PathBuf;

let options = ThreadOptions::builder()
    .images(vec![PathBuf::from("diagram.png")]) // path is illustrative
    .build();

let turn = thread.run("Describe this image").await?;
```
### Stderr debugging

```rust
use std::sync::Arc;

// Callback signature shown schematically; it is invoked with each line of CLI stderr.
let config = CodexConfig::builder()
    .stderr_callback(Arc::new(|line| eprintln!("[codex] {line}")))
    .build();
```
### Cancellation

```rust
use tokio_util::sync::CancellationToken; // import path illustrative; the SDK may re-export it
use codex_cli_sdk::TurnOptions;

// Field names shown schematically; see the crate docs.
let cancel = CancellationToken::new();
let turn_opts = TurnOptions { cancellation_token: Some(cancel.clone()), ..Default::default() };

// Cancel from another task:
cancel.cancel();

let result = thread.run_streamed("Long-running task", turn_opts).await;
```
## CodexConfig Reference

| Field | Type | Default | Description |
|---|---|---|---|
| `cli_path` | `Option<PathBuf>` | `None` | Path to `codex` binary; auto-discovered if `None` |
| `env` | `HashMap<String, String>` | `{}` | Extra environment variables for the subprocess |
| `config_overrides` | `ConfigOverrides` | `None` | Flat or nested JSON overrides (`-c key=val`) |
| `profile` | `Option<String>` | `None` | Config profile name (`--profile`) |
| `connect_timeout` | `Option<Duration>` | `30s` | Deadline for process spawn and init |
| `close_timeout` | `Option<Duration>` | `10s` | Deadline for graceful shutdown |
| `version_check_timeout` | `Option<Duration>` | `5s` | Deadline for `codex --version` check |
| `stderr_callback` | `Option<StderrCallback>` | `None` | Invoked with each line of CLI stderr |

Set any `Option<Duration>` to `None` to wait indefinitely.
## ThreadOptions Reference

| Field | Type | Default | Description |
|---|---|---|---|
| `working_directory` | `Option<PathBuf>` | `None` | Working directory (`--cd`) |
| `model` | `Option<String>` | `None` | Model name, e.g. `"o4-mini"`, `"gpt-5-codex"` |
| `sandbox` | `SandboxPolicy` | `WorkspaceWrite` | Sandbox isolation level |
| `approval` | `ApprovalPolicy` | `Never` | When to request approval for actions |
| `additional_directories` | `Vec<PathBuf>` | `[]` | Extra writable directories (`--add-dir`) |
| `skip_git_repo_check` | `bool` | `false` | Skip git repository validation |
| `reasoning_effort` | `Option<ReasoningEffort>` | `None` | Reasoning effort level |
| `network_access` | `Option<bool>` | `None` | Enable network access inside sandbox |
| `web_search` | `Option<WebSearchMode>` | `None` | Web search mode |
| `output_schema` | `Option<OutputSchema>` | `None` | JSON Schema for structured output |
| `ephemeral` | `bool` | `false` | Don't persist session to disk |
| `images` | `Vec<PathBuf>` | `[]` | Image files to include with the prompt |
| `local_provider` | `Option<String>` | `None` | Local provider name (lmstudio, ollama) |
## Testing

Enable the `testing` feature for unit tests without a live CLI:

```toml
[dev-dependencies]
codex-cli-sdk = { version = "0.1", features = ["testing"] }
```

```rust
use codex_cli_sdk::testing::{fixtures, MockTransport};

// Module paths shown are illustrative; see the crate docs for the exact layout.

// Use a pre-built fixture
let transport = fixtures::simple_text_response();

// Or assemble events manually
let mut transport = MockTransport::new();
transport.enqueue_events(events); // `events` built with the event builders
```

Available fixtures: `simple_text_response`, `tool_call_session`, `approval_session`, `error_session`, `streaming_session`, `reasoning_session`.
## Troubleshooting

| Problem | Cause | Fix |
|---|---|---|
| `CliNotFound` error | Codex CLI not on `PATH` | Install: `npm install -g @openai/codex` |
| Timeout on first run | CLI slow to start | Increase `connect_timeout` or check CLI health |
| Approval request not handled | `ApprovalCallback` not set | Call `.with_approval_callback()` or use `ApprovalPolicy::Never` |
| `VersionCheck` error | CLI version check failed or couldn't parse output | Update: `npm update -g @openai/codex` or check CLI health |
| Noisy stderr output | CLI debug output | Set `stderr_callback` to capture/silence it |
## Feature Flags

| Feature | Description |
|---|---|
| `testing` | `MockTransport`, event builders, and fixtures for unit tests |
| `integration` | Integration test helpers (requires a live CLI) |
## Platform Support
macOS, Linux, and Windows.
## Disclaimer
This is an unofficial, community-developed SDK and is not affiliated with, endorsed by, or sponsored by OpenAI. "Codex" is a trademark of OpenAI. This crate interacts with the Codex CLI but does not contain any OpenAI proprietary code.
## License

Licensed under either the Apache License, Version 2.0 or the MIT License, at your option.
Maintained by the POM team.