Crate openai_interface

A low-level Rust interface for interacting with OpenAI’s API.

This crate provides a simple, efficient, and low-level way to interact with OpenAI’s API, supporting both streaming and non-streaming responses. It leverages Rust’s powerful type system for safety and performance, while exposing the full flexibility of the API.

§Features

  • Chat Completions: Full support for OpenAI’s chat completions and completions APIs, including both streaming and non-streaming responses.
  • File: Support for OpenAI’s file API (currently under development; see below).
  • Streaming and Non-streaming: Support for both streaming and non-streaming responses.
  • Strong Typing: Complete type definitions for all API requests and responses, utilizing Rust’s powerful type system.
  • Error Handling: Comprehensive error handling with detailed error types defined in the errors module (see the error-handling sketch after the examples).
  • Async/Await: Built with async/await support.
  • Musl Support: Designed to work with musl libc out-of-the-box.
  • Multiple Provider Support: Expected to work with OpenAI, DeepSeek, Qwen, and other OpenAI-compatible API providers (a provider-switching sketch follows the first example below).

§Implemented APIs

  • Chat Completions
  • Completions

§In Development

  • Files

§Examples

§Non-streaming Chat Completion

This example demonstrates how to make a non-streaming request to the chat completion API.

use std::sync::LazyLock;
use openai_interface::chat::request::{Message, RequestBody};
use openai_interface::chat::response::no_streaming::ChatCompletion;
use openai_interface::rest::post::PostNoStream;

// You need to provide your own DeepSeek API key at /keys/deepseek_domestic_key
// `static` (not `const`): a const LazyLock would be re-created at every use.
static DEEPSEEK_API_KEY: LazyLock<&str> =
    LazyLock::new(|| include_str!("../keys/deepseek_domestic_key").trim());
const DEEPSEEK_CHAT_URL: &str = "https://api.deepseek.com/chat/completions";
const DEEPSEEK_MODEL: &str = "deepseek-chat";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let request = RequestBody {
        messages: vec![
            Message::System {
                content: "You are a helpful assistant.".to_string(),
                name: None,
            },
            Message::User {
                content: "Hello, how are you?".to_string(),
                name: None,
            },
        ],
        model: DEEPSEEK_MODEL.to_string(),
        stream: false,
        ..Default::default()
    };

    // Send the request
    let chat_completion: ChatCompletion = request
        .get_response(DEEPSEEK_CHAT_URL, &*DEEPSEEK_API_KEY)
        .await?;
    // `content` is an Option; it is present for a plain text reply like this one.
    let text = chat_completion.choices[0]
        .message
        .content
        .as_deref()
        .unwrap();
    println!("{}", text);
    Ok(())
}
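
Because the crate targets OpenAI-compatible providers, the same request shape can be pointed at a different provider by changing only the endpoint URL, the API key, and the model name. The following is a minimal sketch rather than an official example: it assumes an OpenAI key exported in the OPENAI_API_KEY environment variable, and the endpoint and model name ("gpt-4o-mini") are illustrative.

use openai_interface::chat::request::{Message, RequestBody};
use openai_interface::chat::response::no_streaming::ChatCompletion;
use openai_interface::rest::post::PostNoStream;

const OPENAI_CHAT_URL: &str = "https://api.openai.com/v1/chat/completions";
const OPENAI_MODEL: &str = "gpt-4o-mini";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Read the key from the environment instead of embedding a key file.
    let api_key = std::env::var("OPENAI_API_KEY")?;

    let request = RequestBody {
        messages: vec![Message::User {
            content: "Hello, how are you?".to_string(),
            name: None,
        }],
        model: OPENAI_MODEL.to_string(),
        stream: false,
        ..Default::default()
    };

    let chat_completion: ChatCompletion = request
        .get_response(OPENAI_CHAT_URL, &api_key)
        .await?;
    println!("{:?}", chat_completion.choices[0].message.content);
    Ok(())
}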

§Streaming Chat Completion

This example demonstrates how to handle streaming responses from the API. As with the non-streaming example, all API parameters can be adjusted directly through the request struct.

use openai_interface::chat::response::streaming::{CompletionContent, ChatCompletionChunk};
use openai_interface::chat::request::{Message, RequestBody};
use openai_interface::rest::post::PostStream;
use futures_util::StreamExt;

use std::sync::LazyLock;

// You need to provide your own DeepSeek API key at /keys/deepseek_domestic_key
// `static` (not `const`): a const LazyLock would be re-created at every use.
static DEEPSEEK_API_KEY: LazyLock<&str> =
    LazyLock::new(|| include_str!("../keys/deepseek_domestic_key").trim());
const DEEPSEEK_CHAT_URL: &str = "https://api.deepseek.com/chat/completions";
const DEEPSEEK_MODEL: &str = "deepseek-chat";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let request = RequestBody {
        messages: vec![
            Message::System {
                content: "You are a helpful assistant.".to_string(),
                name: None,
            },
            Message::User {
                content: "Who are you?".to_string(),
                name: None,
            },
        ],
        model: DEEPSEEK_MODEL.to_string(),
        stream: true,
        ..Default::default()
    };

    // Send the request
    let mut response_stream = request
        .get_stream_response(DEEPSEEK_CHAT_URL, *DEEPSEEK_API_KEY)
        .await?;

    let mut message = String::new();

    while let Some(chunk_result) = response_stream.next().await {
        let chunk: ChatCompletionChunk = chunk_result?;
        // A chunk's delta may carry no content (for example when only the
        // finish reason is set), so skip such chunks instead of unwrapping.
        let Some(delta) = chunk.choices[0].delta.content.as_ref() else {
            continue;
        };
        // Regular content and reasoning content are handled the same way here.
        let content = match delta {
            CompletionContent::Content(s) => s,
            CompletionContent::ReasoningContent(s) => s,
        };
        println!("chunk: {}", content);
        message.push_str(content);
    }

    println!("full message: {}", message);
    Ok(())
}
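
§Handling Errors

Both examples above propagate failures with the ? operator. When a failure should be handled rather than propagated (logged, retried, or turned into a fallback), match on the returned Result instead. The concrete error type is defined in the errors module; the minimal sketch below only relies on it implementing std::error::Error, and the helper function ask and its parameters are purely illustrative.

use openai_interface::chat::request::{Message, RequestBody};
use openai_interface::chat::response::no_streaming::ChatCompletion;
use openai_interface::rest::post::PostNoStream;

/// Sends a single user message and reports failures instead of propagating them.
async fn ask(url: &str, api_key: &str, model: &str, prompt: &str) -> Option<String> {
    let request = RequestBody {
        messages: vec![Message::User {
            content: prompt.to_string(),
            name: None,
        }],
        model: model.to_string(),
        stream: false,
        ..Default::default()
    };

    // The error type comes from the `errors` module; here it is only logged
    // via its `Display` implementation.
    let result: Result<ChatCompletion, _> = request.get_response(url, api_key).await;
    match result {
        Ok(completion) => completion.choices[0].message.content.clone(),
        Err(e) => {
            eprintln!("chat completion request failed: {}", e);
            None
        }
    }
}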

§Musl Build

This crate is designed to work with musl libc, making it suitable for lightweight deployments in containerized environments. Longer compile times may be required as OpenSSL needs to be built from source.

To build for musl:

rustup target add x86_64-unknown-linux-musl
cargo build --target x86_64-unknown-linux-musl

Modules§

chat
Given a chat conversation, the model returns a response.
completions
Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. Unlike the chat API, it does not support multiple rounds of conversation. This API is being deprecated in favor of the chat API; a payload comparison sketch appears at the end of this page.
errors
Error types used across the crate.
files
File management module for OpenAI API integration.
rest
REST API client module for the OpenAI interface.
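
For orientation, here is how the legacy completions payload differs from the chat payload. This sketch is independent of this crate’s request types and uses serde_json (not a documented dependency of this crate) purely for illustration; the model names are examples.

use serde_json::json;

fn main() {
    // Legacy completions payload: a single prompt string, no conversation history.
    let completions_body = json!({
        "model": "gpt-3.5-turbo-instruct",
        "prompt": "Once upon a time",
        "max_tokens": 64
    });

    // Chat completions payload: a list of role-tagged messages, which is what
    // allows multiple rounds of conversation.
    let chat_body = json!({
        "model": "deepseek-chat",
        "messages": [
            { "role": "user", "content": "Once upon a time" }
        ]
    });

    println!("{completions_body}");
    println!("{chat_body}");
}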