Rust library for OpenAI
§Creating a client
use async_openai::{Client, config::OpenAIConfig};
// Create an OpenAI client with the API key from the env var OPENAI_API_KEY and the default base URL.
let client = Client::new();
// Above is shortcut for
let config = OpenAIConfig::default();
let client = Client::with_config(config);
// Or use an API key from a different source and a non-default organization
let api_key = "sk-..."; // This secret could be from a file, or environment variable.
let config = OpenAIConfig::new()
.with_api_key(api_key)
.with_org_id("the-continental");
let client = Client::with_config(config);
// Use custom reqwest client
let http_client = reqwest::ClientBuilder::new().user_agent("async-openai").build().unwrap();
let client = Client::new().with_http_client(http_client);
§Making requests
use async_openai::{Client, types::responses::{CreateResponseArgs}};
// Create client
let client = Client::new();
// Create request using builder pattern
// Every request struct has a companion builder struct with the same name plus an Args suffix
let request = CreateResponseArgs::default()
.model("gpt-5-mini")
.input("tell me the recipe of pav bhaji")
.max_output_tokens(512u32)
.build()?;
// Call API
let response = client
.responses() // Get the API "group" (responses, images, etc.) from the client
.create(request) // Make the API call in that "group"
.await?;
println!("{:?}", response.output_text());
§Bring Your Own Types
To use custom types for inputs and outputs, enable the byot feature, which provides additional generic methods with the same name and a _byot suffix.
This feature is available on methods whose return type is not Bytes.
use async_openai::Client;
use serde_json::{Value, json};
let client = Client::new();
let response: Value = client
.chat()
.create_byot(json!({
"messages": [
{
"role": "developer",
"content": "You are a helpful assistant"
},
{
"role": "user",
"content": "What do you think about life?"
}
],
"model": "gpt-4o",
"store": false
}))
.await?;
if let Some(content) = response["choices"][0]["message"]["content"].as_str() {
println!("{}", content);
}
References: Borrow Instead of Move
With byot, you can pass a reference to request types instead of moving them:
let response: Response = client
.responses()
.create_byot(&request).await?;
§Rust Types
To use only the Rust types from the crate, enable the types feature flag.
There are also granular feature flags like response-types, chat-completion-types, etc.
These granular type flags are enabled automatically when the corresponding API feature is enabled - for example, response will enable response-types.
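As a sketch, a Cargo.toml entry that pulls in only the type definitions might look like the following (the version is a placeholder, and only the feature names mentioned above are used; check the crate's manifest for the authoritative feature list):

```toml
[dependencies]
# Type definitions only, skipping the HTTP client machinery.
async-openai = { version = "*", default-features = false, features = ["types"] }

# Or enable just the granular type flags you need:
# async-openai = { version = "*", default-features = false, features = ["response-types", "chat-completion-types"] }
```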
§Configurable Requests
Individual Request
Certain individual APIs need additional query or header parameters; these can be provided by chaining .query(), .header(), or .headers() on the API group.
For example:
client
.chat()
// query can be a struct or a map too.
.query(&[("limit", "10")])?
// header for demo
.header("key", "value")?
.list()
.await?;
All Requests
Use Config, OpenAIConfig, etc. to configure the URL, headers, or query parameters globally for all requests.
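For example, a minimal sketch that sets a custom base URL and API key once, so that every request made through the client uses them (with_api_base is assumed to exist on OpenAIConfig, mirroring the AzureConfig example below; the URL is a placeholder):

```rust
use async_openai::{Client, config::OpenAIConfig};

// All requests made through this client use the custom base URL
// and API key, with no per-request configuration needed.
let config = OpenAIConfig::new()
    .with_api_base("https://my-gateway.example.com/v1")
    .with_api_key("sk-...");
let client = Client::with_config(config);
```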
§OpenAI-compatible Providers
Even though the scope of the crate is the official OpenAI APIs, it is configurable enough to work with compatible providers.
Configurable Path
In addition to .query(), .header(), and .headers(), the path for an individual request can be changed by using the .path() method on the API group.
For example:
client
.chat()
.path("/v1/messages")?
.create(request)
.await?;
Dynamic Dispatch
This allows you to use the same code (say, a fn) to call APIs on different OpenAI-compatible providers.
For any struct that implements the Config trait, wrap it in a smart pointer and cast the pointer to a dyn Config
trait object, then create a client with the Box- or Arc-wrapped configuration.
For example:
use async_openai::{Client, config::{Config, OpenAIConfig}};
// Use `Box` or `std::sync::Arc` to wrap the config
let config = Box::new(OpenAIConfig::default()) as Box<dyn Config>;
// create client
let client: Client<Box<dyn Config>> = Client::with_config(config);
// A function can now accept a `&Client<Box<dyn Config>>` parameter
// which can invoke any openai compatible api
fn chat_completion(client: &Client<Box<dyn Config>>) {
todo!()
}
§Microsoft Azure
use async_openai::{Client, config::AzureConfig};
let config = AzureConfig::new()
.with_api_base("https://my-resource-name.openai.azure.com")
.with_api_version("2023-03-15-preview")
.with_deployment_id("deployment-id")
.with_api_key("...");
let client = Client::with_config(config);
// Note that `async-openai` only implements OpenAI spec
// and doesn't maintain parity with the spec of Azure OpenAI service.
§Examples
For full working examples of all supported features, see the examples directory in the repository.
Modules§
- config (_api) - Client configurations: OpenAIConfig for OpenAI, AzureConfig for Azure OpenAI Service.
- error - Errors originating from API calls, parsing responses, and reading or writing to the file system.
- traits (_api)
- types - Types used in OpenAI API requests and responses. These types are created from component schemas in the OpenAPI spec.
- webhooks (webhook) - Support for webhook event types, signature verification, and building webhook events from payloads.
Structs§
- Admin (administration) - Admin group for all administration APIs. This groups together admin API keys, invites, users, projects, audit logs, certificates, roles, and groups.
- AdminAPIKeys (administration) - Admin API keys enable Organization Owners to programmatically manage various aspects of their organization, including users, projects, and API keys. These keys provide administrative capabilities, allowing you to automate organization management tasks.
- Assistants (assistant) - Build assistants that can call models and use tools to perform tasks.
- Audio (audio) - Turn audio into text or text into audio. Related guide: Speech to text
- AuditLogs (administration) - Logs of user actions and configuration changes within this organization. To log events, you must activate logging in the Organization Settings. Once activated, for security reasons, logging cannot be deactivated.
- Batches (batch) - Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
- Certificates (administration) - Certificates enable Mutual TLS (mTLS) authentication for your organization. Manage certificates at the organization level.
- Chat (chat-completion) - Given a list of messages comprising a conversation, the model will return a response.
- Chatkit (chatkit) - ChatKit API for managing sessions and threads.
- Client (_api) - Client is a container for config, backoff and http_client used to make API calls.
- Completions (completions) - Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. We recommend most users use our Chat completions API. Learn more
- ContainerFiles (container) - Create and manage container files for use with the Code Interpreter tool.
- Containers (container)
- ConversationItems (responses) - Conversation items represent items within a conversation.
- Conversations (responses)
- Embeddings (embedding) - Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
- EvalRunOutputItems (evals)
- EvalRuns (evals)
- Evals (evals) - Create, manage, and run evals in the OpenAI platform. Related guide: Evals
- Files (file) - Files are used to upload documents that can be used with features like Assistants and Fine-tuning.
- FineTuning (finetuning) - Manage fine-tuning jobs to tailor a model to your specific training data.
- GroupRoles (administration) - Manage role assignments for groups in the organization.
- GroupUsers (administration) - Manage users within a group, including adding and removing users.
- Groups (administration) - Manage reusable collections of users for organization-wide access control and maintain their membership.
- Images (image) - Given a prompt and/or an input image, the model will generate a new image.
- Invites (administration) - Invite and manage invitations for an organization. Invited users are automatically added to the Default project.
- Messages (assistant) - Represents a message within a thread.
- Models (model) - List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
- Moderations (moderation) - Given text and/or image inputs, classifies if those inputs are potentially harmful across several categories.
- ProjectAPIKeys (administration) - Manage API keys for a given project. Supports listing and deleting keys for users. This API does not allow issuing keys for users, as users need to authorize themselves to generate keys.
- ProjectCertificates (administration) - Manage certificates for a given project. Supports listing, activating, and deactivating certificates.
- ProjectGroupRoles (administration) - Manage role assignments for groups in a project.
- ProjectGroups (administration) - Manage which groups have access to a project and the role they receive.
- ProjectRateLimits (administration) - Manage rate limits for a given project. Supports listing and updating rate limits per model.
- ProjectRoles (administration) - Manage custom roles that can be assigned to groups and users at the project level.
- ProjectServiceAccounts (administration) - Manage service accounts within a project. A service account is a bot user that is not associated with a user. If a user leaves an organization, their keys and membership in projects will no longer work. Service accounts do not have this limitation. However, service accounts can also be deleted from a project.
- ProjectUserRoles (administration) - Manage role assignments for users in a project.
- ProjectUsers (administration) - Manage users within a project, including adding, updating roles, and removing users. Users cannot be removed from the Default project, unless they are being removed from the organization.
- Projects (administration) - Manage the projects within an organization, including creation, updating, and archiving of projects. The Default project cannot be modified or archived.
- Realtime (realtime) - Realtime API for creating sessions, managing calls, and handling WebRTC connections. Related guide: Realtime API
- RequestOptions (_api)
- Responses (responses)
- Roles (administration) - Manage custom roles that can be assigned to groups and users at the organization or project level.
- Runs (assistant) - Represents an execution run on a thread.
- Speech (audio)
- Steps (assistant) - Represents a step in execution of a run.
- Threads (assistant) - Create threads that assistants can interact with.
- Transcriptions (audio)
- Translations (audio)
- Uploads (upload) - Allows you to upload large files in multiple parts.
- Usage (administration) - Manage organization usage data. Get usage details for various API endpoints including completions, embeddings, images, audio, moderations, vector stores, and code interpreter sessions.
- UserRoles (administration) - Manage role assignments for users in the organization.
- Users (administration) - Manage users and their role in an organization. Users will be automatically added to the Default project.
- VectorStoreFileBatches (vectorstore) - Vector store file batches represent operations to add multiple files to a vector store.
- VectorStoreFiles (vectorstore) - Vector store files represent files inside a vector store.
- VectorStores (vectorstore)
- Videos (video) - Video generation with Sora. Related guide: Video generation