§neuron-provider-openai
OpenAI provider for the neuron agent blocks ecosystem. Implements the Provider
and EmbeddingProvider traits from neuron-types against the OpenAI Chat
Completions and Embeddings APIs, supporting synchronous completions, server-sent
event (SSE) streaming, and text embeddings.
The default completion model is gpt-4o. The default embedding model is
text-embedding-3-small. The default base URL is https://api.openai.com.
All can be overridden with the builder API. The base_url override also makes
this client usable with Azure OpenAI and compatible third-party endpoints.
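As a sketch of that override (the endpoint URL and model id below are placeholders, not real values):

```rust
use neuron_provider_openai::OpenAi;

// Sketch: override the defaults via the builder. The URL and key are
// placeholders for an Azure OpenAI or other compatible deployment.
let provider = OpenAi::new("sk-...")
    .base_url("https://my-deployment.example.com/v1") // hypothetical endpoint
    .model("gpt-4o-mini");                            // any supported model id
```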
§Key Types
- `OpenAi` – client struct with builder methods (`new`, `model`, `base_url`, `organization`). Implements `Provider` and `EmbeddingProvider` from neuron-types.
- `ProviderError` – re-exported error type for provider failures.
- `EmbeddingError` – re-exported error type for embedding failures.
- `EmbeddingRequest`, `EmbeddingResponse` – re-exported embedding types.
- `StreamHandle` – returned by `complete_stream`, yields `StreamEvent` items as the model generates tokens.
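A minimal streaming sketch. The `while let`/`next()` consumption style here is an assumption about how `StreamHandle` is polled; consult the `client` module docs for the exact method names:

```rust
// Sketch only: assumes `provider` and `request` are set up as in the
// Completions example below, and that StreamHandle yields StreamEvent
// items when polled (the exact API is an assumption, not confirmed).
let mut handle = provider.complete_stream(request).await?;
while let Some(event) = handle.next().await {
    println!("{event:?}"); // StreamEvent items as tokens arrive
}
```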
§Features
- Full message mapping: text, tool calls, tool results, images.
- Organization header support for multi-org OpenAI accounts.
- `ToolChoice` support: `Auto`, `Any`, `Required`, `Specific(name)`.
- SSE streaming parsed from the raw byte stream, with `data: [DONE]` sentinel handling.
- Usage statistics included in streaming responses via `stream_options`.
- `EmbeddingProvider` support for generating text embeddings with optional dimension control.
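The `data: [DONE]` sentinel handling can be illustrated with a small standalone sketch (this is not the crate's internal code, just the technique): each SSE `data:` line carries a JSON chunk, and the literal payload `[DONE]` marks the end of the stream.

```rust
// Illustrative sketch of splitting an SSE body into data payloads,
// stopping at the OpenAI `data: [DONE]` sentinel.
fn sse_data_lines(body: &str) -> Vec<String> {
    let mut out = Vec::new();
    for line in body.lines() {
        if let Some(payload) = line.strip_prefix("data: ") {
            if payload == "[DONE]" {
                break; // sentinel: the model has finished streaming
            }
            out.push(payload.to_string());
        }
    }
    out
}

fn main() {
    let body = "data: {\"choices\":[]}\n\ndata: [DONE]\n";
    let events = sse_data_lines(body);
    assert_eq!(events.len(), 1);
    println!("{}", events[0]);
}
```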
§Usage
§Completions
use neuron_provider_openai::OpenAi;
use neuron_types::{CompletionRequest, ContentBlock, Message, Provider, Role, SystemPrompt};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = OpenAi::new("sk-...")
        .model("gpt-4o")
        .organization("org-...");

    let request = CompletionRequest {
        messages: vec![Message {
            role: Role::User,
            content: vec![ContentBlock::Text("Explain Rust's ownership model.".into())],
        }],
        system: Some(SystemPrompt::Text("You are a helpful assistant.".into())),
        max_tokens: Some(1024),
        ..Default::default()
    };

    let response = provider.complete(request).await?;
    for block in &response.message.content {
        println!("{block:?}");
    }
    Ok(())
}
§Embeddings
use neuron_provider_openai::OpenAi;
use neuron_types::{EmbeddingProvider, EmbeddingRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = OpenAi::new("sk-...");

    let request = EmbeddingRequest {
        input: vec!["Hello world".to_string(), "Rust is great".to_string()],
        dimensions: Some(256),
        ..Default::default()
    };

    let response = provider.embed(request).await?;
    println!("Model: {}", response.model);
    println!("Embeddings: {} vectors", response.embeddings.len());
    println!("Dimensions: {}", response.embeddings[0].len());
    println!("Usage: {} tokens", response.usage.total_tokens);
    Ok(())
}
§Part of neuron
This crate is one block in the neuron
composable agent toolkit. It depends only on neuron-types.
§License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.
§Re-exports
pub use client::OpenAi;
§Modules
- client – OpenAI API client struct and builder.
- embeddings – OpenAI Embeddings API implementation.
- mapping – Request/response mapping between neuron-types and the OpenAI Chat Completions API format.
§Structs
- EmbeddingRequest – A request to an embedding model.
- EmbeddingResponse – Response from an embedding request.
- StreamHandle – Handle to a streaming completion response.
§Enums
- EmbeddingError – Errors from embedding provider operations.
- ProviderError – Errors from LLM provider operations.
- StreamEvent – An event emitted during streaming completion.
§Traits
- EmbeddingProvider – Embedding provider trait. Implement this for providers that support text embeddings.