# GPT-5 Rust Client Library
⚠️ **IN ACTIVE IMPROVEMENT** ⚠️

This library is actively being improved and may introduce breaking changes. Perfect for experimentation, learning, and development projects!

**Latest Release: v0.2.1** - leaner networking stack with rustls-only TLS and refreshed dependencies!
A comprehensive Rust client library for OpenAI's GPT-5 API with full support for function calling, reasoning capabilities, and type-safe enums.
## Features
### 🚀 Core Capabilities
- Type-safe API - All parameters use strongly-typed enums for compile-time safety
- Function calling - Full support for OpenAI's function calling system with custom tools
- Reasoning capabilities - Configurable reasoning effort levels (Low, Medium, High)
- Verbosity control - Fine-tune response detail levels for different use cases
- Multiple models - Support for GPT-5, GPT-5 Mini, GPT-5 Nano, and custom models
- Built-in web search - Enable OpenAI's web search assistance with custom queries and result limits
### ⚡ Performance & Developer Experience
- Async/await - Built on tokio for high performance and concurrency
- Error handling - Comprehensive error types and validation with helpful messages
- Response parsing - Easy access to text, function calls, and metadata
- Request builder - Fluent API for building complex requests
- Validation - Built-in request validation with helpful warnings
### 📚 Documentation & Examples
- Comprehensive examples - 6 practical examples from basic to advanced
- Interactive chat - Ready-to-run chat loop example
- Function calling demos - Calculator and weather tool examples
- Error handling patterns - Production-ready error handling examples
- Quick start guide - Get running in minutes with minimal code
### 🔮 Coming Soon
- Streaming responses - Real-time response streaming for better UX
- Retry mechanisms - Automatic retry with exponential backoff
- Rate limiting - Built-in rate limiting and quota management
- Response caching - Optional response caching for cost optimization
- WebSocket support - Real-time bidirectional communication
- More examples - Advanced use cases and integration patterns
- CLI tool - Command-line interface for quick testing
- Benchmarks - Performance benchmarks and optimization guides
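The planned retry mechanism usually means capped exponential backoff. Until it ships, the delay schedule can be sketched in plain Rust; the helper name and constants below are assumptions for illustration, not part of this library's API:

```rust
use std::time::Duration;

/// Capped exponential backoff: base_ms * 2^attempt, clamped to max_ms.
/// Illustrative sketch only; not part of this library's API.
fn backoff_delay(attempt: u32, base_ms: u64, max_ms: u64) -> Duration {
    let raw = base_ms.saturating_mul(1u64 << attempt.min(16));
    Duration::from_millis(raw.min(max_ms))
}

fn main() {
    // Delays double each attempt (100ms, 200ms, 400ms, ...) until the cap.
    for attempt in 0..5 {
        println!("attempt {attempt}: {:?}", backoff_delay(attempt, 100, 2_000));
    }
}
```

A production retry loop would also add jitter so that concurrent clients do not retry in lockstep.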
## Quick Start
Add this to your `Cargo.toml` (the first crate name below is illustrative; use the name this library is published under):

```toml
[dependencies]
gpt5 = "0.2.1"
tokio = { version = "1.0", features = ["rt-multi-thread", "macros"] }
serde_json = "1.0" # For function calling examples
```
### 🚀 Try the Examples

The fastest way to get started is with our examples:

```bash
# Clone the repository, then run an example from its root
cargo run --example quick_start
```

See the `examples/` directory for more detailed examples, including function calling, error handling, and interactive chat.
## Basic Usage

A minimal request looks roughly like this (the `gpt5` crate path and constructor names are illustrative; see `examples/quick_start.rs` for the exact API):

```rust
use gpt5::{Gpt5Client, Gpt5Model};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads the API key from the environment.
    let client = Gpt5Client::new(std::env::var("OPENAI_API_KEY")?, Gpt5Model::Gpt5);

    let response = client.simple("Explain ownership in Rust in one sentence").await?;
    println!("{}", response.text());
    Ok(())
}
```
## Advanced Usage with Function Calling

A request that registers a custom tool looks roughly like this (the `gpt5` crate path and builder/method names are illustrative; see `examples/function_calling.rs` for the exact API):

```rust
use gpt5::{Gpt5Client, Gpt5Model};
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Gpt5Client::new(std::env::var("OPENAI_API_KEY")?, Gpt5Model::Gpt5);

    // Describe the tool's parameters with a JSON Schema.
    let weather_tool = json!({
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": { "city": { "type": "string" } },
            "required": ["city"]
        }
    });

    let response = client
        .request("What's the weather in Paris?")
        .tool(weather_tool)
        .send()
        .await?;

    // Inspect any function calls the model requested.
    for call in response.function_calls() {
        println!("{}: {}", call.name, call.arguments);
    }
    Ok(())
}
```
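Whatever the client API looks like, acting on a returned function call reduces to dispatching on the tool name. This standalone, dependency-free sketch (all names here are assumptions, not this library's API) shows the pattern:

```rust
/// Dispatch a model-requested tool call by name.
/// Standalone sketch; not part of this library's API.
fn dispatch_tool(name: &str, argument: &str) -> Result<String, String> {
    match name {
        // Toy weather tool: echoes a canned report for the given city.
        "get_weather" => Ok(format!("Sunny in {argument}")),
        // Toy calculator: sums whitespace-separated integers.
        "calculate" => {
            let sum: i64 = argument
                .split_whitespace()
                .filter_map(|t| t.parse::<i64>().ok())
                .sum();
            Ok(sum.to_string())
        }
        other => Err(format!("unknown tool: {other}")),
    }
}

fn main() {
    assert_eq!(dispatch_tool("calculate", "2 3 5"), Ok("10".to_string()));
    assert_eq!(dispatch_tool("get_weather", "Paris"), Ok("Sunny in Paris".to_string()));
    assert!(dispatch_tool("nope", "").is_err());
}
```

The `Err` arm matters in practice: models occasionally hallucinate tool names, and the error should be fed back rather than panicking.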
## Enable Web Search Assistance

Web search can be enabled on a request (the `gpt5` crate path and builder/method names are illustrative; see `examples/web_search.rs` for the exact API):

```rust
use gpt5::{Gpt5Client, Gpt5Model};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Gpt5Client::new(std::env::var("OPENAI_API_KEY")?, Gpt5Model::Gpt5);

    // Ask a question with web search assistance enabled.
    let response = client
        .request("What happened in Rust news this week?")
        .with_web_search(true)
        .send()
        .await?;
    println!("{}", response.text());
    Ok(())
}
```
## API Reference

### Models

The library supports all GPT-5 models (the `gpt5` crate path and the `Custom` payload shown here are illustrative):

```rust
use gpt5::Gpt5Model;

let model = Gpt5Model::Gpt5;                       // Main model - most capable
let mini = Gpt5Model::Gpt5Mini;                    // Balanced performance and cost
let nano = Gpt5Model::Gpt5Nano;                    // Fastest and most cost-effective
let custom = Gpt5Model::Custom("my-model".into()); // Any other model id
```
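As a self-contained illustration of how such an enum typically maps variants to API model strings (this is a stand-in, not the library's actual definition; the model ids are OpenAI's published names):

```rust
// Illustrative stand-in for a model enum; not this library's actual type.
#[derive(Debug, Clone, PartialEq)]
enum Model {
    Gpt5,
    Gpt5Mini,
    Gpt5Nano,
    Custom(String),
}

impl Model {
    /// The model identifier sent in the API request body.
    fn as_str(&self) -> &str {
        match self {
            Model::Gpt5 => "gpt-5",
            Model::Gpt5Mini => "gpt-5-mini",
            Model::Gpt5Nano => "gpt-5-nano",
            Model::Custom(id) => id,
        }
    }
}

fn main() {
    assert_eq!(Model::Gpt5Mini.as_str(), "gpt-5-mini");
    assert_eq!(Model::Custom("my-fine-tune".into()).as_str(), "my-fine-tune");
}
```

Carrying a `String` in the `Custom` variant keeps the enum exhaustive for known models while still allowing fine-tuned or future model ids.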
### Reasoning Effort

Control how much computational effort GPT-5 puts into reasoning (crate path illustrative):

```rust
use gpt5::ReasoningEffort;

let low = ReasoningEffort::Low;       // Fast, basic reasoning
let medium = ReasoningEffort::Medium; // Balanced performance
let high = ReasoningEffort::High;     // Thorough analysis
```
### Verbosity Levels

Control the detail level of responses (crate path illustrative):

```rust
use gpt5::VerbosityLevel;

let low = VerbosityLevel::Low;       // Concise responses
let medium = VerbosityLevel::Medium; // Balanced detail
let high = VerbosityLevel::High;     // Detailed responses
```
### Response Status

Check response completion and status (accessor names other than `is_completed` and `reasoning_tokens` in this sketch are illustrative):

```rust
let response = client.request("Hello").send().await?;

if response.is_completed() {
    println!("Response completed normally");
} else {
    println!("Response did not complete: {:?}", response.status());
}

// Get usage statistics
println!("Total tokens used: {}", response.total_tokens());
if let Some(reasoning) = response.reasoning_tokens() {
    println!("Reasoning tokens: {reasoning}");
}
```
## Error Handling

The library provides comprehensive error handling (the response accessor in this sketch is illustrative):

```rust
match client.simple("Hello, GPT-5!").await {
    Ok(response) => println!("{}", response.text()),
    Err(err) => eprintln!("Request failed: {err}"),
}
```

The client now detects HTTP status failures from the OpenAI API and surfaces detailed error messages, making it easier to debug authentication or quota issues. If you need full control over networking (custom proxies, retry middleware, etc.), pass your own configured `reqwest::Client` via `with_http_client` and keep using the same high-level interface.
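The usual Rust pattern behind such detailed error messages is an enum implementing `std::error::Error`. This standalone sketch (type and variant names are assumptions, not this library's actual error type) shows the shape:

```rust
use std::fmt;

// Illustrative error enum; not this library's actual type.
#[derive(Debug)]
enum ClientError {
    // An HTTP-level failure surfaced with its status code and body message.
    Http { status: u16, message: String },
    MissingApiKey,
}

impl fmt::Display for ClientError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ClientError::Http { status, message } => {
                write!(f, "API returned HTTP {status}: {message}")
            }
            ClientError::MissingApiKey => write!(f, "OPENAI_API_KEY is not set"),
        }
    }
}

impl std::error::Error for ClientError {}

fn main() {
    let err = ClientError::Http { status: 401, message: "invalid key".into() };
    assert_eq!(err.to_string(), "API returned HTTP 401: invalid key");
    eprintln!("{err}");
}
```

Keeping the HTTP status in the variant is what lets callers distinguish a 401 (bad key) from a 429 (quota) without string matching.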
## Validation

The library includes built-in validation for requests (the builder type name here is illustrative):

```rust
let request = Gpt5Request::new()
    .input("")            // Empty input will trigger a warning
    .max_output_tokens(1) // Very low token count will trigger a warning
    .build();             // Validation runs automatically
```
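The kind of check this performs can be sketched in plain Rust; the function name and thresholds below are assumptions for illustration, not the library's actual rules:

```rust
// Standalone sketch of request validation; not this library's API.
fn validate(input: &str, max_output_tokens: u32) -> Vec<String> {
    let mut warnings = Vec::new();
    if input.trim().is_empty() {
        warnings.push("input is empty".to_string());
    }
    if max_output_tokens < 16 {
        warnings.push(format!("max_output_tokens ({max_output_tokens}) is very low"));
    }
    warnings
}

fn main() {
    // Both conditions fire: empty input and a very low token budget.
    let warnings = validate("", 1);
    assert_eq!(warnings.len(), 2);
    for w in &warnings {
        eprintln!("warning: {w}");
    }
}
```

Returning warnings instead of hard errors lets the builder complete while still flagging requests that will likely produce poor results.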
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Examples
We provide comprehensive examples to help you get started quickly:
| Example | Description | Run Command |
|---|---|---|
| `quick_start.rs` | Minimal 3-line example | `cargo run --example quick_start` |
| `basic_usage.rs` | Different models demo | `cargo run --example basic_usage` |
| `simple_chat.rs` | Interactive chat loop | `cargo run --example simple_chat` |
| `function_calling.rs` | Advanced function calling | `cargo run --example function_calling` |
| `error_handling.rs` | Production error handling | `cargo run --example error_handling` |
| `web_search.rs` | Web search assistance with custom queries | `cargo run --example web_search` |
### Prerequisites for Examples
Set your OpenAI API key:
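On macOS/Linux this is typically done with an environment variable (`OPENAI_API_KEY` is the conventional name OpenAI clients read):

```shell
# Replace the placeholder with your real key before running the examples
export OPENAI_API_KEY="sk-your-key-here"
```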
## Contributing
🚀 We're actively looking for contributors! This is a fresh library with lots of room for improvement.
Areas where we'd love help:
- 🐛 Bug fixes and edge case handling
- 📚 Documentation improvements
- 🧪 More comprehensive tests
- ⚡ Performance optimizations
- 🔧 Additional features and examples
- 📣 Better error messages and validation
How to contribute:
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a Pull Request
Questions or ideas? Open an issue and let's discuss! We're very responsive and would love to hear from you.
## Changelog
See CHANGELOG.md for detailed release notes and version history.