# chat-gpt-lib-rs
A Rust client library for the OpenAI API.
Supports multiple OpenAI endpoints, including Chat, Completions, Embeddings, Models, Moderations, Files, Fine-tunes, and more. Built with async-first design using Tokio and Reqwest, featuring robust error handling and SSE streaming for real-time responses.
Important: This release introduces breaking changes. The project has been significantly refactored, and these updates are too complex for a standard migration guide. If you have existing code, you will likely need to adapt function calls and data structures to the new design. See the updated examples folder or the documentation for guidance.
## Table of Contents
- Features
- Installation
- Quick Start
- API Highlights
- Environment Variables
- Streaming (SSE)
- Example Projects
- Contributing
- License
## Features
- Async-first: Built on Tokio + Reqwest.
- Complete Coverage of major OpenAI API endpoints:
  - Chat (with streaming SSE for partial responses)
  - Completions
  - Models (list and retrieve)
  - Embeddings
  - Moderations
  - Files (upload, list, download, delete)
  - Fine-Tunes (create, list, retrieve, cancel, events, delete models)
- Rustls for TLS: avoids system dependencies like OpenSSL.
- Thorough Error Handling with a custom `OpenAIError` type (see the sketch after this list).
- Typed request/response structures (Serde-based).
- Extensive Documentation and usage examples, including SSE streaming.
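Every call returns a `Result`, so failures surface as `OpenAIError` values. A minimal sketch, borrowing the `client` and `req` from the Quick Start below and assuming `OpenAIError` implements `Display` (typical for Rust error types):

```rust
match create_completion(&client, &req).await {
    Ok(resp) => println!("Got {} choices", resp.choices.len()),
    // Assumes `OpenAIError` implements Display; match on specific
    // variants if you need finer-grained handling (see the docs).
    Err(e) => eprintln!("OpenAI request failed: {e}"),
}
```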
## Installation

In your `Cargo.toml`, under `[dependencies]`:

```toml
[dependencies]
chat-gpt-lib-rs = "x.y.z" # Replace x.y.z with the latest version
```
Then build your project:
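```bash
cargo build
```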
## Quick Start

Below is a minimal example using Completions. The import paths and request fields in this sketch are inferred from the identifiers in this README and may differ slightly from the released crate, so treat it as a starting point and check the docs:
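```rust
// NOTE: import paths are illustrative; consult the crate docs for the real ones.
use chat_gpt_lib_rs::api_resources::completions::{create_completion, CreateCompletionRequest};
use chat_gpt_lib_rs::{OpenAIClient, OpenAIError};

#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
    // Reads the API key from the OPENAI_API_KEY environment variable.
    let client = OpenAIClient::new(None)?;

    // Field names mirror the OpenAI Completions API; `Default` fills the rest.
    let req = CreateCompletionRequest {
        model: "gpt-3.5-turbo-instruct".into(),
        prompt: Some("Tell me a programming joke".into()),
        max_tokens: Some(50),
        ..Default::default()
    };

    let resp = create_completion(&client, &req).await?;
    println!("Completion: {:?}", resp.choices.first());
    Ok(())
}
```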
## API Highlights

The snippets below assume an async context and a `client` constructed as in the Quick Start. As with the Quick Start, exact paths, signatures, and field names may differ from the released crate.
### Models

List all available models, or retrieve the details of a single model by ID:
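```rust
use chat_gpt_lib_rs::api_resources::models::{list_models, retrieve_model};

let all_models = list_models(&client).await?;
// `data` is assumed to follow the OpenAI list-response shape.
println!("Found {} models", all_models.data.len());

let model_details = retrieve_model(&client, "gpt-4").await?;
println!("Model details: {:?}", model_details);
```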
### Completions

Generate text from a prompt:
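```rust
use chat_gpt_lib_rs::api_resources::completions::{create_completion, CreateCompletionRequest};

let req = CreateCompletionRequest {
    model: "gpt-3.5-turbo-instruct".into(),
    prompt: Some("Write a haiku about Rust".into()),
    max_tokens: Some(50),
    ..Default::default()
};
let resp = create_completion(&client, &req).await?;
println!("First choice: {:?}", resp.choices.first());
```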
### Chat Completions

Send a list of messages and receive the assistant's reply. The `ChatMessage` shape shown here (role, content, optional name) is an assumption based on the OpenAI API:
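```rust
use chat_gpt_lib_rs::api_resources::chat::{
    create_chat_completion, ChatMessage, CreateChatCompletionRequest, Role,
};

let chat_req = CreateChatCompletionRequest {
    model: "gpt-4".into(),
    messages: vec![ChatMessage {
        role: Role::User,
        content: "Explain Rust lifetimes in one sentence".to_string(),
        name: None,
    }],
    ..Default::default()
};
let response = create_chat_completion(&client, &chat_req).await?;
println!("Assistant: {:?}", response.choices.first().map(|c| &c.message.content));
```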
### Embeddings

Turn text into an embedding vector:
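```rust
use chat_gpt_lib_rs::api_resources::embeddings::{create_embeddings, CreateEmbeddingsRequest};

let emb_req = CreateEmbeddingsRequest {
    model: "text-embedding-ada-002".into(),
    // `input` may be an enum over single/batch inputs in the actual crate.
    input: "Rust is a systems programming language".into(),
    ..Default::default()
};
let emb_res = create_embeddings(&client, &emb_req).await?;
println!("Vector length: {:?}", emb_res.data.first().map(|d| d.embedding.len()));
```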
### Moderations

Check text against OpenAI's content policy:
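```rust
use chat_gpt_lib_rs::api_resources::moderations::{create_moderation, CreateModerationRequest};

let mod_req = CreateModerationRequest {
    input: "Some text to classify".into(),
    ..Default::default()
};
let mod_res = create_moderation(&client, &mod_req).await?;
// `results` and `flagged` follow the OpenAI moderation response shape.
println!("Flagged: {:?}", mod_res.results.first().map(|r| r.flagged));
```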
### Files

Upload, list, download, and delete files. The `"fine-tune"` purpose string and the response fields follow the OpenAI API; the exact `upload_file` signature is an assumption:
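```rust
use chat_gpt_lib_rs::api_resources::files::{
    delete_file, list_files, retrieve_file_content, upload_file,
};
use std::path::PathBuf;

let file_path = PathBuf::from("training_data.jsonl");

// Signature assumed: client, local path, and an OpenAI "purpose" string.
let upload = upload_file(&client, &file_path, "fine-tune").await?;
println!("Uploaded file id: {:?}", upload.id);

let all_files = list_files(&client).await?;
println!("You have {} files", all_files.data.len());

let content = retrieve_file_content(&client, &upload.id).await?;
println!("File content: {content:?}");

delete_file(&client, &upload.id).await?;
```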
### Fine-Tunes

Create a fine-tune job from an uploaded training file and list existing jobs:
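```rust
use chat_gpt_lib_rs::api_resources::fine_tunes::{
    create_fine_tune, list_fine_tunes, CreateFineTuneRequest,
};

let ft_req = CreateFineTuneRequest {
    // The id of a previously uploaded training file (see Files above).
    training_file: "file-abc123".to_string(),
    ..Default::default()
};
let job = create_fine_tune(&client, &ft_req).await?;
println!("Fine-tune job: {:?}", job.id);

let all_jobs = list_fine_tunes(&client).await?;
println!("{} fine-tune jobs", all_jobs.data.len());
```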
## Environment Variables

By default, the library reads your OpenAI API key from `OPENAI_API_KEY`:
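```bash
export OPENAI_API_KEY="sk-..."
```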
Or put the key in a `.env` file and load it with dotenvy.
Alternatively, provide a key directly (the constructor signature below is assumed; check the docs):
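```rust
let client = OpenAIClient::new(Some("sk-your-key".to_string()))?;
```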
## Streaming (SSE)

For real-time partial responses, set `stream` to `true` on a Chat or Completions request and process the resulting stream. Only SSE support itself is confirmed here; the `create_chat_completion_stream` function name and chunk shape in this sketch are assumptions, so check the docs for the actual streaming API:
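```rust
// `StreamExt` provides `.next()`; the crate may re-export it.
use futures_util::StreamExt;
use chat_gpt_lib_rs::api_resources::chat::{
    create_chat_completion_stream, ChatMessage, CreateChatCompletionRequest, Role,
};

let chat_req = CreateChatCompletionRequest {
    model: "gpt-4".into(),
    messages: vec![ChatMessage {
        role: Role::User,
        content: "Stream a short poem".to_string(),
        name: None,
    }],
    stream: Some(true),
    ..Default::default()
};

let mut stream = create_chat_completion_stream(&client, &chat_req).await?;
while let Some(chunk) = stream.next().await {
    // Each chunk carries a partial delta, mirroring the OpenAI SSE format.
    let chunk = chunk?;
    if let Some(content) = chunk.choices.first().and_then(|c| c.delta.content.as_ref()) {
        print!("{content}");
    }
}
```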
## Example Projects

Check the `examples/` folder for CLI chat demos and more.
Third-Party Usage:
- techlead uses this library for advanced AI-driven chat interactions.
## Contributing
We welcome contributions and feedback! To get started:
- Fork this repository and clone your fork locally.
- Create a branch for your changes or fixes.
- Make & test your changes.
- Submit a pull request describing the changes.
Because this release is a major refactor, please note that much of the code has changed. If you’re updating older code, see the new examples and docs for updated usage patterns.
## License

Licensed under the Apache License 2.0; see LICENSE for full details.