Crate async_openai


Rust library for OpenAI

§Creating a client

use async_openai::{Client, config::OpenAIConfig};

// Create an OpenAI client with the API key from the env var OPENAI_API_KEY and the default base URL.
let client = Client::new();

// The above is a shortcut for:
let config = OpenAIConfig::default();
let client = Client::with_config(config);

// Or use an API key from a different source and a non-default organization
let api_key = "sk-..."; // This secret could be from a file, or environment variable.
let config = OpenAIConfig::new()
    .with_api_key(api_key)
    .with_org_id("the-continental");

let client = Client::with_config(config);

// Use a custom reqwest client
let http_client = reqwest::ClientBuilder::new().user_agent("async-openai").build().unwrap();
let client = Client::new().with_http_client(http_client);

§Microsoft Azure Endpoints

use async_openai::{Client, config::AzureConfig};

let config = AzureConfig::new()
    .with_api_base("https://my-resource-name.openai.azure.com")
    .with_api_version("2023-03-15-preview")
    .with_deployment_id("deployment-id")
    .with_api_key("...");

let client = Client::with_config(config);

// Note that the Azure OpenAI Service does not support every API. `async-openai`
// does not enforce that restriction: it still allows calls to all of the same
// APIs as for OpenAI, and unsupported ones will fail on the Azure side.

§Making requests


use async_openai::{Client, types::CreateCompletionRequestArgs};

// `.await` requires an async runtime; the examples assume tokio.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a client
    let client = Client::new();

    // Create a request using the builder pattern.
    // Every request struct has a companion builder struct with the same name plus an `Args` suffix.
    let request = CreateCompletionRequestArgs::default()
        .model("gpt-3.5-turbo-instruct")
        .prompt("Tell me the recipe of alfredo pasta")
        .max_tokens(40_u16)
        .build()?;

    // Call the API
    let response = client
        .completions()   // Get the API "group" (completions, images, etc.) from the client
        .create(request) // Make the API call in that "group"
        .await?;

    println!("{}", response.choices.first().unwrap().text);
    Ok(())
}
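Responses can also be streamed instead of returned in one piece. A minimal sketch, assuming the tokio runtime and the `futures` crate for `StreamExt`; it uses the builder's `stream(true)` flag together with `create_stream`, which yields a stream of incremental responses:

```rust
use async_openai::{Client, types::CreateCompletionRequestArgs};
use futures::StreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    let request = CreateCompletionRequestArgs::default()
        .model("gpt-3.5-turbo-instruct")
        .prompt("Tell me the recipe of alfredo pasta")
        .max_tokens(100_u16)
        .stream(true) // ask the API to stream partial results
        .build()?;

    // `create_stream` returns a stream of partial responses rather than one final one
    let mut stream = client.completions().create_stream(request).await?;

    while let Some(result) = stream.next().await {
        match result {
            // Each item carries the tokens generated since the previous one
            Ok(response) => {
                for choice in &response.choices {
                    print!("{}", choice.text);
                }
            }
            Err(e) => eprintln!("stream error: {e}"),
        }
    }
    Ok(())
}
```

Running this requires `OPENAI_API_KEY` to be set and network access, plus `tokio` and `futures` in `Cargo.toml`.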

§Examples

For full working examples of all supported features, see the examples directory in the repository.

Modules§

  • Client configurations: OpenAIConfig for OpenAI, AzureConfig for Azure OpenAI Service.
  • Errors originating from API calls, parsing responses, and reading or writing to the file system.
  • Types used in OpenAI API requests and responses. These types are created from component schemas in the OpenAPI spec.
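Errors from API calls surface through the crate's error type rather than plain panics. A hedged sketch of matching on the result instead of unwrapping it, assuming the `OpenAIError::ApiError` variant with a `message` field (variant names may differ between versions):

```rust
use async_openai::{error::OpenAIError, types::CreateCompletionRequestArgs, Client};

#[tokio::main]
async fn main() {
    let client = Client::new();

    let request = CreateCompletionRequestArgs::default()
        .model("gpt-3.5-turbo-instruct")
        .prompt("Tell me the recipe of alfredo pasta")
        .build()
        .unwrap();

    // Match on the error to distinguish failure modes instead of unwrapping.
    match client.completions().create(request).await {
        Ok(response) => println!("{}", response.choices.first().unwrap().text),
        // The API returned an error body (e.g. invalid model, rate limit).
        Err(OpenAIError::ApiError(api_err)) => eprintln!("API error: {}", api_err.message),
        // Transport failures, deserialization problems, etc.
        Err(other) => eprintln!("request failed: {other}"),
    }
}
```

Like the earlier examples, this needs `OPENAI_API_KEY` set and network access to exercise the error paths against the live API.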

Structs§

  • Files attached to an assistant.
  • Build assistants that can call models and use tools to perform tasks.
  • Turn audio into text. Related guide: Speech to text
  • Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
  • Given a list of messages comprising a conversation, the model will return a response.
  • Client is a container for config, backoff and http_client used to make API calls.
  • Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. We recommend most users use our Chat completions API.
  • Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  • Files are used to upload documents that can be used with features like Assistants and Fine-tuning.
  • Manage fine-tuning jobs to tailor a model to your specific training data.
  • Given a prompt and/or an input image, the model will generate a new image.
  • Files attached to a message.
  • Represents a message within a thread.
  • List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
  • Given some input text, outputs if the model classifies it as potentially harmful across several categories.
  • Represents an execution run on a thread.
  • Represents a step in execution of a run.
  • Create threads that assistants can interact with.
  • Vector store file batches represent operations to add multiple files to a vector store.
  • Vector store files represent files inside a vector store.