Docs.rs — async-openai 0.25.0 (MIT), owner: 64bit

Crate async_openai 0.25.0

Rust library for OpenAI

§Creating a client

use async_openai::{Client, config::OpenAIConfig};

// Create an OpenAI client with the API key from the env var OPENAI_API_KEY and the default base URL.
let client = Client::new();

// The above is a shortcut for
let config = OpenAIConfig::default();
let client = Client::with_config(config);

// Or use an API key from a different source and a non-default organization
let api_key = "sk-..."; // This secret could come from a file or an environment variable.
let config = OpenAIConfig::new()
    .with_api_key(api_key)
    .with_org_id("the-continental");

let client = Client::with_config(config);

// Use a custom reqwest client
let http_client = reqwest::ClientBuilder::new().user_agent("async-openai").build().unwrap();
let client = Client::new().with_http_client(http_client);
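The client can also be given a custom retry policy. A minimal sketch, assuming `Client::with_backoff` and the `backoff` crate (a listed dependency), which retries rate-limited requests with exponential backoff:

```rust
use async_openai::Client;
use std::time::Duration;

// Give up retrying after one minute in total.
let backoff = backoff::ExponentialBackoffBuilder::new()
    .with_max_elapsed_time(Some(Duration::from_secs(60)))
    .build();

let client = Client::new().with_backoff(backoff);
```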

§Microsoft Azure Endpoints

use async_openai::{Client, config::AzureConfig};

let config = AzureConfig::new()
    .with_api_base("https://my-resource-name.openai.azure.com")
    .with_api_version("2023-03-15-preview")
    .with_deployment_id("deployment-id")
    .with_api_key("...");

let client = Client::with_config(config);

// Note that `async-openai` only implements OpenAI spec
// and doesn't maintain parity with the spec of Azure OpenAI service.
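A client built from `AzureConfig` is used the same way as the OpenAI one. A hedged sketch of a chat completion against an Azure deployment (the resource name, deployment id, and model are placeholders; on Azure the deployment id selected in the config determines the model actually served):

```rust
use async_openai::{
    config::AzureConfig,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

let config = AzureConfig::new()
    .with_api_base("https://my-resource-name.openai.azure.com")
    .with_api_version("2023-03-15-preview")
    .with_deployment_id("deployment-id")
    .with_api_key("...");

let client = Client::with_config(config);

// The builder still requires a model name, even though Azure
// routes by deployment id.
let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-4o")
    .messages([ChatCompletionRequestUserMessageArgs::default()
        .content("Hello!")
        .build()?
        .into()])
    .build()?;

let response = client.chat().create(request).await?;
```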

§Making requests


use async_openai::{Client, types::CreateCompletionRequestArgs};

// Create client
let client = Client::new();

// Create a request using the builder pattern.
// Every request struct has a companion builder struct with the same name plus an `Args` suffix.
let request = CreateCompletionRequestArgs::default()
    .model("gpt-3.5-turbo-instruct")
    .prompt("Tell me the recipe of alfredo pasta")
    .max_tokens(40_u32)
    .build()
    .unwrap();

// Call the API
let response = client
    .completions()      // Get the API "group" (completions, images, etc.) from the client
    .create(request)    // Make the API call in that "group"
    .await
    .unwrap();

println!("{}", response.choices.first().unwrap().text);
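The API groups also expose streaming variants. A sketch assuming `Completions::create_stream`, which returns a stream of partial responses delivered over server-sent events (the model and prompt are illustrative):

```rust
use async_openai::{types::CreateCompletionRequestArgs, Client};
use futures::StreamExt;

let client = Client::new();

let request = CreateCompletionRequestArgs::default()
    .model("gpt-3.5-turbo-instruct")
    .prompt("Tell me a short joke")
    .max_tokens(40_u32)
    .stream(true)
    .build()?;

// Each item is a partial response containing a text delta.
let mut stream = client.completions().create_stream(request).await?;
while let Some(result) = stream.next().await {
    match result {
        Ok(response) => {
            if let Some(choice) = response.choices.first() {
                print!("{}", choice.text);
            }
        }
        Err(e) => eprintln!("error: {e}"),
    }
}
```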

§Examples

For full working examples of all supported features, see the examples directory in the repository.

Modules§

  • config
    Client configurations: OpenAIConfig for OpenAI, AzureConfig for Azure OpenAI Service.
  • error
Errors originating from API calls, parsing responses, and reading or writing to the file system.
  • types
Types used in OpenAI API requests and responses. These types are created from component schemas in the OpenAPI spec.

Structs§

  • AssistantFiles
    Files attached to an assistant.
  • Assistants
    Build assistants that can call models and use tools to perform tasks.
  • Audio
    Turn audio into text or text into audio. Related guide: Speech to text
  • Batches
    Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
  • Chat
    Given a list of messages comprising a conversation, the model will return a response.
  • Client
    Client is a container for config, backoff and http_client used to make API calls.
  • Completions
    Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. We recommend most users use our Chat completions API. Learn more
  • Embeddings
    Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  • Files
    Files are used to upload documents that can be used with features like Assistants and Fine-tuning.
  • FineTuning
    Manage fine-tuning jobs to tailor a model to your specific training data.
  • Images
    Given a prompt and/or an input image, the model will generate a new image.
  • MessageFiles
    Files attached to a message.
  • Messages
    Represents a message within a thread.
  • Models
    List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
  • Moderations
    Given some input text, outputs if the model classifies it as potentially harmful across several categories.
  • Runs
    Represents an execution run on a thread.
  • Steps
    Represents a step in execution of a run.
  • Threads
    Create threads that assistants can interact with.
  • VectorStoreFileBatches
    Vector store file batches represent operations to add multiple files to a vector store.
  • VectorStoreFiles
    Vector store files represent files inside a vector store.
  • VectorStores
