# openai-core
openai-core is an async Rust SDK for the OpenAI-compatible ecosystem.
> [!IMPORTANT]
> openai-core is a community-maintained, unofficial library. It is a Rust rewrite heavily informed by openai-node: its resource layout, capability coverage, README structure, and example topics were all reviewed against openai-node, then adapted into Rust-native builders, types, and async streams. It is not affiliated with OpenAI and does not represent an official OpenAI SDK.
## Positioning

The project aims to:

- cover the major capability surface already available in `openai-node`
- provide Rust-native builders, typed models, async streams, and error handling
- support OpenAI, Azure OpenAI, and common OpenAI-compatible providers

If you already know openai-node, the rough mental model is:

- the capability surface tries to stay close to `openai-node`
- the public API is intentionally Rust-flavored instead of mirroring TypeScript shapes
- streaming uses `futures::Stream`
- raw HTTP, SSE, and WebSocket primitives are still accessible when needed
## Versioning and Compatibility

- Current line: `0.1.x`
- MSRV: `1.94.1`
- Rust edition: `2024`
The crate is still in 0.x:
- patch releases should not intentionally introduce breaking changes
- minor releases may still reshape parts of the public API, with migration notes when practical
- internal transport logic, provider profile internals, and private module structure are not part of the stability contract
Longer-term planning lives in `specs/0003_improve.md`.
## Installation

The default feature set is intentionally lean and focuses on HTTP, SSE, multipart, and webhooks:

```toml
[dependencies]
openai-core = "0.1"
```

If you also need structured output, tool runners, or WebSocket support:

```toml
[dependencies]
openai-core = { version = "0.1", features = ["structured-output", "tool-runner", "realtime", "responses-ws"] }
```

If you want full control over features:

```toml
[dependencies]
openai-core = { version = "0.1", default-features = false, features = ["stream", "multipart", "rustls-tls"] }
```
## Feature Flags

| Feature | Enabled by default | Purpose |
|---|---|---|
| `stream` | Yes | SSE and streaming response support |
| `multipart` | Yes | File uploads and multipart requests |
| `webhooks` | Yes | Webhook HMAC verification |
| `rustls-tls` | Yes | rustls-based TLS for reqwest and WebSockets |
| `structured-output` | No | `parse::<T>()`, JSON Schema helpers, structured outputs |
| `tool-runner` | No | Tool registration, tool execution loops, runner traces |
| `realtime` | No | Realtime WebSocket support |
| `responses-ws` | No | Responses WebSocket support |

Notes:

- `tool-runner` depends on `structured-output`
- WebSocket APIs such as `ws()`, `RealtimeSocket`, and `ResponsesSocket` are only exported when their features are enabled
## Quick Start

### Responses API
The Responses API is the primary path, matching the role it plays in the openai-node README.
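A minimal sketch, assuming a Tokio runtime; builder names such as `input_text(...)` follow the call shapes shown later in this README, and the model id is a placeholder:

```rust
use openai_core::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .build()?;

    // `input_text` is the simple single-turn input helper used throughout
    // this README; exact names and types should be checked against the docs.
    let response = client
        .responses()
        .create()
        .model("gpt-4o-mini")
        .input_text("Say hello in one sentence.")
        .send()
        .await?;

    println!("{response:?}");
    Ok(())
}
```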
### Chat Completions API
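A matching sketch for the legacy-compatible path, using the `message_user(...)` helper that also appears in the error-handling section below:

```rust
use openai_core::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .build()?;

    // `message_user` appends a user-role message to the request.
    let completion = client
        .chat()
        .completions()
        .create()
        .model("gpt-4o-mini")
        .message_user("Write a haiku about Rust.")
        .send()
        .await?;

    println!("{completion:?}");
    Ok(())
}
```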
## Streaming Responses
Like openai-node, openai-core supports Server-Sent Events. The Rust-facing shape uses async streams instead of emitters.
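A sketch of the streaming shape. The `stream()` terminal call and the event type are assumptions; the guaranteed part is that streaming is exposed as a `futures::Stream` of typed events:

```rust
use futures::StreamExt;
use openai_core::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .build()?;

    // `stream()` as the terminal builder call is an assumption; each item is
    // a typed server-sent event rather than an emitter callback.
    let mut stream = client
        .responses()
        .create()
        .model("gpt-4o-mini")
        .input_text("Stream a two-line poem.")
        .stream()
        .await?;

    while let Some(event) = stream.next().await {
        println!("{:?}", event?);
    }
    Ok(())
}
```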
## File Uploads
Just like openai-node, openai-core exposes a unified upload helper.
`to_file()` currently accepts:

- `PathBuf`
- `bytes::Bytes`
- `std::io::Read`
- `tokio::io::AsyncRead`
- `reqwest::Response`
- `UploadSource`
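A sketch uploading from in-memory bytes. `to_file()` is the unified helper listed above; the `files()` builder chain, the filename argument, and the import path are assumptions:

```rust
use bytes::Bytes;
use openai_core::{to_file, Client};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .build()?;

    // `to_file` accepts any of the sources listed above; here, raw bytes.
    let data = Bytes::from_static(b"hello from openai-core");
    let file = client
        .files()
        .create()
        .file(to_file(data, "hello.txt"))
        .purpose("assistants")
        .send()
        .await?;

    println!("{file:?}");
    Ok(())
}
```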
## Audio
openai-core now covers:

- `audio.speech.create`
- SSE streaming for `audio.speech`
- `audio.transcriptions.create`
- SSE streaming for `audio.transcriptions`
- `audio.translations.create`
- local helper utilities: `play_audio()` and `record_audio()`
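A speech-synthesis sketch corresponding to `audio.speech.create`. The builder chain and setter names are assumptions modeled on the other builders in this README, and the model and voice values are placeholders:

```rust
use openai_core::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder()
        .api_key(std::env::var("OPENAI_API_KEY")?)
        .build()?;

    // Corresponds to `audio.speech.create`.
    let audio = client
        .audio()
        .speech()
        .create()
        .model("gpt-4o-mini-tts")
        .voice("alloy")
        .input("Hello from openai-core!")
        .send()
        .await?;

    // Assuming the response exposes the synthesized audio bytes.
    println!("received {} bytes of audio", audio.len());
    Ok(())
}
```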
## Typed Long-tail Resources
Phase 3 promotes the main long-tail namespaces away from raw `Value` builders:

`images`, `audio`, `fine_tuning`, `batches`, `conversations`, `evals`, `containers`, `skills`, `videos`

For the high-frequency paths, you can now use typed responses plus either dedicated builder methods or typed request structs with `json_body(...)`.
```rust
use openai_core::Client;
use serde_json::json;

let client = Client::builder()
    .api_key(std::env::var("OPENAI_API_KEY")?)
    .build()?;

// A typed request struct (or any serializable body) goes through
// `json_body(...)`; the body shown here is illustrative.
let conversation = client
    .conversations()
    .create()
    .json_body(&json!({ "metadata": { "topic": "demo" } }))?
    .send()
    .await?;

println!("{conversation:?}");
```
## Webhook Verification
As in openai-node, openai-core provides both:

- signature-only verification via `verify_signature()`
- verify-and-parse via `unwrap()`
```rust
use std::collections::BTreeMap;
use std::time::Duration;
use openai_core::Client;
use serde_json::Value;

let client = Client::builder()
    .webhook_secret("whsec_...")
    .build()?;

let raw_body = r#"{"type":"response.completed","data":{"id":"resp_123"}}"#;

// Header names and the exact `unwrap(...)` signature are illustrative; the
// README only guarantees the `verify_signature()` / `unwrap()` split.
let headers = BTreeMap::from([
    ("webhook-signature".to_string(), "v1,...".to_string()),
    ("webhook-timestamp".to_string(), "1700000000".to_string()),
]);

let event: Value = client
    .webhooks()
    .unwrap(raw_body, &headers, Some(Duration::from_secs(300)))?;
println!("{event:?}");
```
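The signature-only check looks similar; only the `verify_signature()` method name is guaranteed above, so the argument order here is an assumption:

```rust
// Verifies the HMAC signature without parsing the payload.
client
    .webhooks()
    .verify_signature(raw_body, &headers)?;
```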
## Error Handling

Failures are exposed through the unified `openai_core::Error` type. API-level failures are represented as `Error::Api(ApiError)`.
```rust
use openai_core::Error;

match client
    .chat()
    .completions()
    .create()
    .model("gpt-4o-mini")
    .message_user("Hello!")
    .send()
    .await
{
    Ok(completion) => println!("{completion:?}"),
    // API-level failures surface as `Error::Api(ApiError)`.
    Err(Error::Api(api_error)) => eprintln!("API error: {api_error:?}"),
    Err(other) => eprintln!("transport or client error: {other:?}"),
}
```
## Request IDs, Raw Responses, and Response Metadata

openai-node emphasizes request IDs and raw response access. openai-core exposes the same debugging surface through two methods:

- `send_with_meta()` returns `ApiResponse<T>`, so you can read `meta.request_id`
- `send_raw()` returns `http::Response<Bytes>`
```rust
let raw = client
    .chat()
    .completions()
    .create()
    .model("gpt-4o-mini")
    .message_user("Hello!")
    .send_raw()
    .await?;

println!("status: {}", raw.status());

let response = client
    .chat()
    .completions()
    .create()
    .model("gpt-4o-mini")
    .message_user("Hello!")
    .send_with_meta()
    .await?;

println!("request id: {:?}", response.meta.request_id);
```
## Retries and Timeouts

The defaults are intentionally close to the behavior documented in openai-node:

- default timeout: 10 minutes
- default retries: 2
- connection errors, timeouts, `408`, `409`, `429`, and `5xx` responses are retried by default
Client-wide configuration:

```rust
use std::time::Duration;

// Values are illustrative.
let client = Client::builder()
    .api_key(std::env::var("OPENAI_API_KEY")?)
    .timeout(Duration::from_secs(60))
    .max_retries(4)
    .build()?;
```
Per-request overrides:

```rust
use std::time::Duration;

let response = client
    .responses()
    .create()
    .model("gpt-4o-mini")
    .input_text("Hello!")
    .timeout(Duration::from_secs(15)) // overrides the client-wide timeout
    .max_retries(0)                   // disables retries for this call
    .send()
    .await?;
```
## Auto Pagination

List APIs return `CursorPage<T>`, which supports:

- `has_next_page()`
- `next_page().await`
- `into_stream()`
```rust
use futures::StreamExt;

let first_page = client.models().list().limit(20).send().await?;
for model in &first_page.data {
    println!("{model:?}");
}
if first_page.has_next_page() {
    println!("more pages are available via next_page().await");
}

// `into_stream()` flattens all pages; items are assumed to be `Result`s.
let mut stream = client.models().list().limit(20).send().await?.into_stream();
while let Some(model) = stream.next().await {
    println!("{:?}", model?);
}
```
## Logging

Matching the openai-node README topic, openai-core supports:

- the `OPENAI_LOG` environment variable
- `ClientBuilder::log_level(...)`
- `ClientBuilder::logger(...)`

Available levels: `off`, `error`, `warn`, `info`, `debug`.
```rust
use std::sync::{Arc, Mutex};
use openai_core::{Client, LogLevel};

// Collect log records into a shared buffer. The `LogLevel` enum name and the
// `logger(...)` callback signature are assumptions.
let records: Arc<Mutex<Vec<String>>> = Arc::new(Mutex::new(Vec::new()));
let sink = records.clone();

let client = Client::builder()
    .api_key(std::env::var("OPENAI_API_KEY")?)
    .log_level(LogLevel::Debug)
    .logger(move |record: String| sink.lock().unwrap().push(record))
    .build()?;
```
## Realtime and Responses WebSocket

openai-core currently supports:

- `client.realtime().ws()`
- `client.responses().ws()`
- `OpenAIRealtimeWebSocket`
- `OpenAIRealtimeWS`
- `OpenAIResponsesWebSocket`
Realtime example:
```rust
use futures::StreamExt;
use openai_core::SocketStreamMessage;
use serde_json::json;

let socket = client
    .realtime()
    .ws()
    .model("gpt-realtime")
    .connect()
    .await?;

let mut stream = socket.stream();

// The event payload is illustrative.
socket.send_json(&json!({ "type": "response.create" })).await?;

// Stream items are assumed to be `Result<SocketStreamMessage, _>`.
while let Some(message) = stream.next().await {
    let message: SocketStreamMessage = message?;
    println!("{message:?}");
}
```
More background: docs/realtime-and-streaming.md.
## Azure OpenAI

openai-core does not expose a separate `AzureOpenAI` class. Instead, the same capability is configured through `ClientBuilder`:

- `azure_endpoint(...)`
- `azure_api_version(...)`
- `azure_deployment(...)`
- `azure_ad_token(...)`
- `azure_ad_token_provider(...)`
```rust
use openai_core::Client;

// Endpoint, API version, and deployment values below are placeholders.
let client = Client::builder()
    .azure_endpoint("https://my-resource.openai.azure.com")
    .azure_api_version("2024-10-21")
    .azure_deployment("my-gpt-4o-deployment")
    .api_key(std::env::var("AZURE_OPENAI_API_KEY")?)
    .build()?;

let response = client
    .responses()
    .create()
    .input_text("Hello from Azure!")
    .send()
    .await?;

println!("{response:?}");
```
More background: docs/azure.md.
## Structured Output and Tool Runners

This is the Rust-side answer to the `helpers/zod`, `parse()`, and `runTools()` story in openai-node.
Structured output:

```rust
use schemars::JsonSchema;
use serde::Deserialize;

#[derive(Debug, Deserialize, JsonSchema)]
struct Haiku {
    title: String,
    lines: Vec<String>,
}

// `parse::<T>()` comes from the feature-flag table; its position in the
// builder chain and the message helper are assumptions.
let parsed = client
    .chat()
    .completions()
    .parse::<Haiku>()
    .model("gpt-4o-mini")
    .message_user("Write a haiku about Rust and return it as JSON.")
    .send()
    .await?;

println!("{parsed:?}");
```
Tool runner:

```rust
use openai_core::ToolDefinition;
use serde_json::json;

// The constructor shape is an assumption; the tool name and JSON Schema
// parameters are illustrative.
let tool = ToolDefinition::new("get_weather", json!({ "type": "object" }));
```
Related examples:
- examples/parsing.rs
- examples/parsing_stream.rs
- examples/tool_runner.rs
- examples/function_call.rs
- examples/function_call_stream.rs
- examples/function_call_stream_raw.rs
- examples/responses_streaming_tools.rs
- examples/responses_structured_outputs.rs
- examples/responses_structured_outputs_tools.rs
More background: docs/structured-output-and-tools.md.
## Examples

### Running Examples

Each example lives under `examples/` and can be run with `cargo run --example <name>`, enabling any feature flags that example requires (for instance, `--features tool-runner` for the tool-runner demos).

Full index: docs/examples.md
### Coverage Mapping Against openai-node/examples

The table below maps openai-node/examples topics to their Rust equivalents. The Rust side does not mechanically duplicate every Node runtime wrapper, but the underlying capabilities are represented here.

Notes:

- examples that are strongly tied to Node, browsers, or web frameworks are represented on the Rust side using framework-neutral raw forwarding patterns
- Node examples that rely on `zod` map to `serde` + `schemars` + `parse::<T>()` in Rust
- emitter-based examples map to `Stream` plus runtime event enums
## Provider Support Matrix
| Provider | Support level | Notes |
|---|---|---|
| OpenAI | First-class | The main compatibility target and the primary focus for behavior and tests |
| Azure OpenAI | First-class | Supports endpoint, deployment, api-version, api-key, and Azure AD tokens |
| Zhipu | Compatibility | Routed through the compatibility layer; real behavior depends on the provider |
| MiniMax | Compatibility | Routed through the compatibility layer; real behavior depends on the provider |
| ZenMux | Compatibility | Routed through the compatibility layer; real behavior depends on the provider |
| Custom providers | Extensible | The SDK exposes stable integration points; final compatibility depends on the integrator |
More detail: docs/provider-capability-matrix.md
## Topic Guides
- Azure integration
- API reference
- Examples index
- FAQ
- OpenAPI contract maintenance
- Streaming and Realtime
- Structured output and tools
- Migration notes
- Observability
- Provider capability matrix
- Public API maintenance
- Release checklist
- ADR: codegen strategy
## Development Checks

Common verification commands are the standard Cargo checks (`cargo fmt --check`, `cargo clippy`, `cargo test`).

Additional notes:

- live smoke tests under `tests/provider_live/` are `#[ignore]` by default
- when required environment variables are missing, those live tests auto-skip
## FAQ

The short version:

- `openai-core` is a community SDK, not an official OpenAI SDK
- default features are intentionally small; realtime / responses-ws / structured-output stay opt-in
- use `chat().completions()` for legacy-compatible migrations, and `responses()` when you want the newer API surface
- live provider tests are manual by design because they consume real credentials and may incur cost
More detail: docs/faq.md
## Project Status
The crate already has a publishable SDK baseline. The next round of work is focused more on refinement than on basic capability gaps:
- further tightening the stable public API surface
- continuing to strongly type long-tail resources
- expanding docs and examples
- hardening feature-matrix and public-API stability checks
If your goal is:

- a Rust SDK that tracks the functional surface of `openai-node` closely
- a community-maintained, unofficial implementation
- Rust-native builders, types, and async streams instead of TypeScript emitter semantics

then openai-core is already a strong primary SDK candidate.