# chat-gpt-lib-rs
A Rust client library for the OpenAI API.
Supports multiple OpenAI endpoints, including Chat, Completions, Embeddings, Models, Moderations, Files, Fine-tunes, and more. Built with an async-first design using Tokio and Reqwest, featuring robust error handling and SSE streaming for real-time responses.
**Important:** If you're upgrading from 0.5.x to 0.6.x, note that this release introduces significant breaking changes. The project has been extensively refactored, so there is no simple migration guide; you will likely need to update function calls and data structures to align with the new design. Refer to the updated examples folder or the documentation for guidance.
## Table of Contents
- Features
- Installation
- Quick Start
- API Highlights
- Environment Variables & Configuration
- Streaming (SSE)
- Example Projects
- Contributing
- License
## Features
- Async-first – Built on Tokio + Reqwest for asynchronous networking.
- Complete Coverage of Major OpenAI API Endpoints – Supports:
  - Chat (with streaming SSE for partial responses)
  - Completions
  - Models (list and retrieve)
  - Embeddings
  - Moderations
  - Files (upload, list, download, delete)
  - Fine-Tunes (create, list, retrieve, cancel, events, delete models)
- TLS without OpenSSL – Uses Rustls for TLS, avoiding system OpenSSL dependencies.
- Robust Error Handling – Custom `OpenAIError` covers HTTP errors, API errors, and JSON deserialization issues (see the sketch after this list).
- Strongly-Typed Requests/Responses – Serde-powered structs for all request and response bodies.
- Configurable – Builder pattern to customize timeouts, organization IDs, or base URLs for proxy/Azure endpoints (while defaulting to OpenAI’s API URL).
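Because every endpoint call returns a `Result` with `OpenAIError` on the failure side, HTTP, API, and deserialization errors can all be handled in one place. A minimal sketch, reusing the completions call from the API Highlights below:

```rust
// Hedged sketch: matches on the Result rather than on specific OpenAIError
// variants; see the crate docs for the full enum if you need finer-grained handling.
match create_completion(&client, &req).await {
    Ok(resp) => println!("{:?}", resp.choices),
    Err(e) => eprintln!("OpenAI request failed: {e}"),
}
```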
## Installation
Add the crate to your `Cargo.toml` dependencies:

```toml
[dependencies]
chat-gpt-lib-rs = "0.6" # Use the latest version from crates.io
tokio = { version = "1", features = ["full"] } # required for async runtime
```
Then build your project with Cargo:
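```sh
cargo build
```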
Note: The library is asynchronous and requires a Tokio runtime (as shown above). Ensure you have an async executor (like Tokio) to run the async functions.
## Quick Start
Below is a minimal example using Completions (with a Tokio runtime); see docs.rs for the full request and response definitions:

```rust
use chat_gpt_lib_rs::api_resources::completions::{create_completion, CreateCompletionRequest};
use chat_gpt_lib_rs::{OpenAIClient, OpenAIError};

#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
    // Reads the API key from the OPENAI_API_KEY environment variable.
    let client = OpenAIClient::new(None)?;

    // Field values are illustrative; ..Default::default() assumes the request
    // struct derives Default (set all fields explicitly otherwise).
    let req = CreateCompletionRequest {
        model: "gpt-3.5-turbo-instruct".into(),
        prompt: Some("Say hello in Rust".into()),
        max_tokens: Some(20),
        ..Default::default()
    };

    let resp = create_completion(&client, &req).await?;
    println!("{:?}", resp.choices);
    Ok(())
}
```
## API Highlights
Below are examples of how to use various API endpoints. All calls require an `OpenAIClient` (shown as `client`), which handles authentication and HTTP configuration.
### Models
```rust
use chat_gpt_lib_rs::api_resources::models::{list_models, retrieve_model};

// List available models:
let all_models = list_models(&client).await?;
println!("{:?}", all_models);

// Retrieve details for a specific model (model id is illustrative):
let model_details = retrieve_model(&client, "gpt-3.5-turbo").await?;
println!("{:?}", model_details);
```
### Completions
```rust
use chat_gpt_lib_rs::api_resources::completions::{create_completion, CreateCompletionRequest};

// Request fields shown are a subset; see the crate docs for the full struct.
let req = CreateCompletionRequest {
    model: "gpt-3.5-turbo-instruct".into(),
    prompt: Some("Tell me a programming joke".into()),
    max_tokens: Some(50),
    ..Default::default()
};
let resp = create_completion(&client, &req).await?;
println!("{:?}", resp.choices);
```
### Chat Completions
```rust
use chat_gpt_lib_rs::api_resources::chat::{
    create_chat_completion, ChatMessage, ChatRole, CreateChatCompletionRequest,
};

let chat_req = CreateChatCompletionRequest {
    model: "gpt-3.5-turbo".into(),
    messages: vec![ChatMessage {
        role: ChatRole::User,
        content: "Hello! How are you?".into(),
        name: None,
    }],
    ..Default::default()
};
let response = create_chat_completion(&client, &chat_req).await?;
println!("{:?}", response.choices);
```
### Embeddings
```rust
use chat_gpt_lib_rs::api_resources::embeddings::{create_embeddings, CreateEmbeddingsRequest};

let emb_req = CreateEmbeddingsRequest {
    model: "text-embedding-ada-002".into(),
    input: "The quick brown fox".into(),
    ..Default::default()
};
let emb_res = create_embeddings(&client, &emb_req).await?;
println!("{:?}", emb_res.data);
```
### Moderations
```rust
use chat_gpt_lib_rs::api_resources::moderations::{create_moderation, CreateModerationRequest};

let mod_req = CreateModerationRequest {
    input: "Some text to classify".into(),
    ..Default::default()
};
let mod_res = create_moderation(&client, &mod_req).await?;
println!("{:?}", mod_res.results);
```
### Files
```rust
use chat_gpt_lib_rs::api_resources::files::{
    delete_file, list_files, retrieve_file_content, upload_file,
};
use std::path::PathBuf;

let file_path = PathBuf::from("training_data.jsonl");

// Upload a file for fine-tuning (the purpose argument shape is illustrative):
let upload = upload_file(&client, &file_path, "fine-tune").await?;
println!("{:?}", upload);

// List all files:
let all_files = list_files(&client).await?;
println!("{:?}", all_files);

// Retrieve content of the uploaded file:
let content = retrieve_file_content(&client, &upload.id).await?;
println!("{:?}", content);

// Delete the file:
delete_file(&client, &upload.id).await?;
println!("File deleted.");
```
### Fine-Tunes
```rust
use chat_gpt_lib_rs::api_resources::fine_tunes::{
    cancel_fine_tune, create_fine_tune, list_fine_tune_events, list_fine_tunes,
    retrieve_fine_tune, CreateFineTuneRequest,
};

// Create a fine-tuning job (using the file uploaded above):
let ft_req = CreateFineTuneRequest {
    training_file: upload.id.clone(),
    ..Default::default()
};
let job = create_fine_tune(&client, &ft_req).await?;
println!("{:?}", job);

// List all fine-tune jobs:
let all_jobs = list_fine_tunes(&client).await?;
println!("{:?}", all_jobs);

// Get details or events for a specific fine-tune job:
let job_details = retrieve_fine_tune(&client, &job.id).await?;
println!("{:?}", job_details);
let events = list_fine_tune_events(&client, &job.id).await?;
println!("{:?}", events);

// Cancel an ongoing fine-tune job (if needed):
cancel_fine_tune(&client, &job.id).await?;
```
## Environment Variables & Configuration
By default, the library reads your OpenAI API key from the `OPENAI_API_KEY` environment variable. Set it in your shell or in a `.env` file (you can use the dotenvy crate to load it):
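```sh
export OPENAI_API_KEY="sk-..."
```

Or, with a `.env` file, load it at startup:

```rust
// Load variables from a .env file into the process environment (dotenvy crate).
dotenvy::dotenv().ok();
```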
Alternatively, you can provide the API key directly in code:

```rust
// The exact constructor signature may vary between versions; see docs.rs.
let client = OpenAIClient::new(Some("sk-your-api-key".to_string()))?;
```
For advanced configuration, use the builder to customize settings:

```rust
use chat_gpt_lib_rs::OpenAIClient;
use std::time::Duration;

// Argument values below are placeholders.
let client = OpenAIClient::builder()
    .with_api_key("sk-your-api-key") // explicitly set API key (otherwise reads OPENAI_API_KEY)
    .with_organization("org-your-org-id") // set your OpenAI organization ID (if applicable)
    .with_timeout(Duration::from_secs(30)) // custom request timeout for API calls
    // .with_base_url("https://api.openai.com/v1/") // optionally override base URL for OpenAI API (or Azure proxy)
    .build()
    .unwrap();
```
If not specified, the client will default to OpenAI’s public API endpoint and no organization ID.
## Streaming (SSE)
For real-time partial responses, you can request streaming results. Set `stream: true` in your request to the Chat or Completions API, which will return a stream that yields incremental updates. You can then process the stream as each chunk arrives:
```rust
use futures_util::StreamExt; // for StreamExt::next()
use std::io::Write; // for flushing stdout
use chat_gpt_lib_rs::api_resources::chat::{
    create_chat_completion_stream, ChatMessage, ChatRole, CreateChatCompletionRequest,
};
use chat_gpt_lib_rs::{OpenAIClient, OpenAIError};

// NOTE: the streaming entry point and chunk field names below are illustrative;
// see the crate's streaming example for the exact API.
#[tokio::main]
async fn main() -> Result<(), OpenAIError> {
    let client = OpenAIClient::new(None)?;

    let request = CreateChatCompletionRequest {
        model: "gpt-3.5-turbo".into(),
        messages: vec![ChatMessage {
            role: ChatRole::User,
            content: "Write a haiku about Rust".into(),
            name: None,
        }],
        stream: Some(true), // request incremental SSE chunks
        ..Default::default()
    };

    let mut stream = create_chat_completion_stream(&client, &request).await?;
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        // Print each partial message (delta.content) as soon as it arrives.
        if let Some(content) = chunk.choices.first().and_then(|c| c.delta.content.as_ref()) {
            print!("{content}");
            std::io::stdout().flush().ok();
        }
    }
    Ok(())
}
```
In the above example, as the stream yields each chunk of the completion, we immediately print the `delta.content` (partial message) without waiting for the full response. This produces a real-time typing effect until the stream ends with a final `[DONE]` message.
## Example Projects
Check out the `examples/` directory in this repository for more comprehensive examples, including a CLI chat demo and usage of streaming. These examples demonstrate common patterns and can serve as a starting point for your own applications.
**Third-Party Usage:** The `techlead` crate (an AI chat CLI) uses chat-gpt-lib-rs for its OpenAI interactions. You can refer to it as a real-world example of integrating this library.
## Contributing
Contributions and feedback are welcome! To get started:
- Fork the repository and clone your fork.
- Create a new branch for your feature or fix.
- Implement your changes, with tests if applicable.
- Run tests to ensure nothing is broken (`cargo test`).
- Open a pull request describing your changes.
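For example, assuming a fork under your own GitHub username (placeholder shown):

```sh
git clone https://github.com/<your-username>/chat-gpt-lib-rs.git
cd chat-gpt-lib-rs
git checkout -b my-feature
# ...make your changes...
cargo test
```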
Given that the 0.6.x release was a major refactor, much of the code has changed from earlier versions. If you are updating older code, please refer to the new examples and documentation for the updated usage patterns; they will help you migrate to the latest API.
## License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.