# LLM Kit OpenAI

OpenAI provider for LLM Kit: complete integration with OpenAI's chat completion API.

Note: This provider uses the standardized builder pattern. See the Quick Start section for the recommended usage.
## Features
- Text Generation: Generate text using GPT models with support for all OpenAI chat models
- Streaming: Stream responses in real-time
- Tool Calling: Support for function calling
- Multi-modal: Support for text, images, audio, and PDFs
- Reasoning Models: Special handling for o1, o3, and other reasoning models
- Provider Options: Logprobs, reasoning effort, service tiers, and more
- Type-safe Configuration: Builder pattern for easy setup
## Installation
Add this to your `Cargo.toml`:

```toml
[dependencies]
# Crate names below are assumed from the project name; check the workspace or crates.io for the exact names.
llm-kit = "0.1"
llm-kit-openai = "0.1"
tokio = { version = "1", features = ["full"] }
```
## Quick Start

### Using the Client Builder (Recommended)
```rust
use llm_kit_openai::OpenAIClient; // module paths assumed
use llm_kit::LanguageModel;

#[tokio::main]
async fn main() {
    let provider = OpenAIClient::new().api_key("sk-...").build();
    // Generate text with `provider` via the LanguageModel trait; see examples/chat.rs.
}
```
### Using Settings Directly (Alternative)
```rust
use llm_kit_openai::{OpenAIProvider, OpenAISettings}; // type names assumed
use llm_kit::LanguageModel;

#[tokio::main]
async fn main() {
    let settings = OpenAISettings::new().with_api_key("sk-...");
    let provider = OpenAIProvider::new(settings);
    // Generate text with `provider` via the LanguageModel trait; see examples/chat.rs.
}
```
## Configuration

### Environment Variables
Set your OpenAI API key as an environment variable:

```bash
export OPENAI_API_KEY="sk-..."
# Optional settings such as organization, project, and base URL can also be configured in code via the builder below.
```
### Using the Client Builder
```rust
use llm_kit_openai::OpenAIClient; // module path assumed

// Argument values are placeholders.
let provider = OpenAIClient::new()
    .api_key("sk-...")
    .base_url("https://api.openai.com/v1")
    .organization("org-...")
    .project("proj-...")
    .header("X-Custom-Header", "value")
    .name("openai")
    .build();
```
### Using Settings Directly
```rust
use llm_kit_openai::{OpenAIProvider, OpenAISettings}; // type names assumed

// Argument values are placeholders.
let settings = OpenAISettings::new()
    .with_api_key("sk-...")
    .with_base_url("https://api.openai.com/v1")
    .with_organization("org-...")
    .with_project("proj-...")
    .add_header("X-Custom-Header", "value")
    .with_name("openai");
let provider = OpenAIProvider::new(settings);
```
### Builder Methods

The `OpenAIClient` builder supports:

- `.api_key(key)` - Set the API key
- `.base_url(url)` - Set a custom base URL
- `.organization(org)` - Set the OpenAI organization ID
- `.project(project)` - Set the OpenAI project ID
- `.name(name)` - Set the provider name
- `.header(key, value)` - Add a single custom header
- `.headers(map)` - Add multiple custom headers (see the sketch below)
- `.build()` - Build the provider
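Where several headers need to be attached at once, the `.headers(map)` variant can be used. A small sketch follows; the module path and the exact map type the method accepts are assumptions:

```rust
use std::collections::HashMap;
use llm_kit_openai::OpenAIClient; // module path assumed

let mut extra = HashMap::new();
extra.insert("X-Request-Source".to_string(), "docs-example".to_string());
extra.insert("X-Team".to_string(), "platform".to_string());

// `.headers` is the documented builder method; the HashMap<String, String> argument type is assumed.
let provider = OpenAIClient::new()
    .api_key("sk-...")
    .headers(extra)
    .build();
```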
## Supported Models

All OpenAI chat models are supported, including:

### GPT-4 Family

- `gpt-4` - Most capable GPT-4 model
- `gpt-4-turbo` - Faster GPT-4 variant
- `gpt-4o` - Optimized GPT-4 model
- `gpt-4o-mini` - Smaller, faster GPT-4o variant
### GPT-3.5 Family

- `gpt-3.5-turbo` - Fast and efficient model
### Reasoning Models

- `o1` - Latest reasoning model
- `o1-preview` - Preview version of o1
- `o1-mini` - Smaller o1 variant
- `o3-mini` - Next-generation reasoning model
### GPT-5 Family

- Future models will be supported as they become available

For a complete list of available models, see the [OpenAI Models documentation](https://platform.openai.com/docs/models).
## OpenAI-Specific Features

### Reasoning Models

OpenAI reasoning models (`o1`, `o1-preview`, `o1-mini`, `o3-mini`) have special handling:
- Developer role: System messages automatically use the "developer" role instead of "system"
- Parameter filtering: Unsupported settings (`temperature`, `top_p`, `presence_penalty`, `frequency_penalty`, etc.) are automatically removed
- Token limits: Uses `max_completion_tokens` instead of `max_tokens`
These adjustments happen automatically when you use a reasoning model, so you don't need to make any code changes.
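To make the adjustment concrete, here is a standalone illustration of the kind of request rewrite involved. This is not the crate's internal code, just a sketch over a `serde_json` payload:

```rust
use serde_json::{json, Value};

/// Illustrative only: rewrite a chat request body the way reasoning models require.
fn adjust_for_reasoning_model(mut body: Value) -> Value {
    if let Some(obj) = body.as_object_mut() {
        // Unsupported sampling parameters are dropped.
        for key in ["temperature", "top_p", "presence_penalty", "frequency_penalty"] {
            obj.remove(key);
        }
        // `max_tokens` becomes `max_completion_tokens`.
        if let Some(max) = obj.remove("max_tokens") {
            obj.insert("max_completion_tokens".to_string(), max);
        }
    }
    // System messages use the "developer" role.
    if let Some(messages) = body["messages"].as_array_mut() {
        for message in messages {
            if message["role"] == "system" {
                message["role"] = json!("developer");
            }
        }
    }
    body
}
```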
### Provider-Specific Options

OpenAI supports additional options beyond the standard LLM Kit parameters:

#### Reasoning Effort

Control the computational effort for reasoning models:
```rust
use llm_kit_openai::{OpenAIChatLanguageModelOptions, ReasoningEffort}; // module path assumed

// Field name assumed; other fields keep their defaults.
let options = OpenAIChatLanguageModelOptions { reasoning_effort: Some(ReasoningEffort::Medium), ..Default::default() };
```
Available values:

- `ReasoningEffort::Low` - Faster, less thorough reasoning
- `ReasoningEffort::Medium` - Balanced reasoning
- `ReasoningEffort::High` - More thorough, slower reasoning
#### Logprobs

Request log probabilities for generated tokens:
```rust
use llm_kit_openai::OpenAIChatLanguageModelOptions; // module path assumed

// Field name assumed; other fields keep their defaults.
let options = OpenAIChatLanguageModelOptions { logprobs: Some(true), ..Default::default() };
```
#### Service Tier

Select the service tier for processing:
```rust
use llm_kit_openai::{OpenAIChatLanguageModelOptions, ServiceTier}; // module path assumed

// Field name assumed; other fields keep their defaults.
let options = OpenAIChatLanguageModelOptions { service_tier: Some(ServiceTier::Auto), ..Default::default() };
```
Available values:

- `ServiceTier::Auto` - Automatic tier selection
- `ServiceTier::Default` - Standard processing tier
#### Organization and Project

Configure organization and project IDs:
```rust
// Argument values are placeholders.
let provider = OpenAIClient::new()
    .api_key("sk-...")
    .organization("org-...")
    .project("proj-...")
    .build();
```
## Usage Examples

### Basic Text Generation

See `examples/chat.rs` for a complete example.
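A rough sketch of the shape such an example takes; the `chat` accessor, `generate` call, and response field below are assumptions rather than the crate's confirmed API, so treat `examples/chat.rs` as the source of truth:

```rust
use llm_kit_openai::OpenAIClient; // module path assumed
use llm_kit::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = OpenAIClient::new().api_key("sk-...").build();

    // Hypothetical accessor and method names; the real ones live in examples/chat.rs.
    let model = provider.chat("gpt-4o");
    let response = model.generate("Write a haiku about Rust.").await?;
    println!("{}", response.text);
    Ok(())
}
```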
### Streaming Responses

See `examples/stream.rs` for a complete example.
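For orientation, a hypothetical sketch of consuming a stream with `futures::StreamExt`; the `stream` method and the chunk field are assumptions, so defer to `examples/stream.rs`:

```rust
use futures::StreamExt;
use llm_kit_openai::OpenAIClient; // module path assumed
use llm_kit::LanguageModel;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = OpenAIClient::new().api_key("sk-...").build();

    // Hypothetical accessor, method, and chunk shape; see examples/stream.rs for the real API.
    let model = provider.chat("gpt-4o");
    let mut stream = model.stream("Tell me a short story.").await?;
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?.delta);
    }
    Ok(())
}
```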
### Tool Calling

OpenAI supports function calling for tool integration. See `examples/chat_tool_calling.rs` for a complete example.
Examples
See the examples/ directory for complete examples:
chat.rs- Basic chat completionstream.rs- Streaming responseschat_tool_calling.rs- Tool calling with chat modelsstream_tool_calling.rs- Streaming with tool calling
Run examples with:
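For instance, using the standard Cargo invocation (the example name matches the files listed above, and the API key is read from the environment as described in Configuration):

```bash
OPENAI_API_KEY="sk-..." cargo run --example chat
```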
## Documentation
## License

Licensed under:

- MIT license ([LICENSE-MIT](LICENSE-MIT))
## Contributing
Contributions are welcome! Please see the Contributing Guide for more details.