llm-sdk-rs 0.2.0

llm-sdk for Rust

A Rust library for building applications that interact with different language models through a unified interface.

Installation

cargo add llm-sdk-rs
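If you prefer editing the manifest directly, cargo add is equivalent to declaring the dependency in Cargo.toml (the 0.2 version line matches this release; adjust as needed):

```toml
[dependencies]
llm-sdk-rs = "0.2"
```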

Usage

All models implement the LanguageModel trait:

use llm_sdk::{
    anthropic::{AnthropicModel, AnthropicModelOptions},
    google::{GoogleModel, GoogleModelOptions},
    openai::{OpenAIChatModel, OpenAIChatModelOptions, OpenAIModel, OpenAIModelOptions},
    LanguageModel,
};

pub fn get_model(provider: &str, model_id: &str) -> Box<dyn LanguageModel> {
    match provider {
        "openai" => Box::new(OpenAIModel::new(
            model_id.to_string(),
            OpenAIModelOptions {
                api_key: std::env::var("OPENAI_API_KEY")
                    .expect("OPENAI_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "openai-chat-completion" => Box::new(OpenAIChatModel::new(
            model_id.to_string(),
            OpenAIChatModelOptions {
                api_key: std::env::var("OPENAI_API_KEY")
                    .expect("OPENAI_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "anthropic" => Box::new(AnthropicModel::new(
            model_id.to_string(),
            AnthropicModelOptions {
                api_key: std::env::var("ANTHROPIC_API_KEY")
                    .expect("ANTHROPIC_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        "google" => Box::new(GoogleModel::new(
            model_id.to_string(),
            GoogleModelOptions {
                api_key: std::env::var("GOOGLE_API_KEY")
                    .expect("GOOGLE_API_KEY environment variable must be set"),
                ..Default::default()
            },
        )),
        _ => panic!("Unsupported provider: {provider}"),
    }
}

Below is an example of generating text:

use dotenvy::dotenv;
use llm_sdk::{LanguageModelInput, Message, Part};

mod common;

#[tokio::main]
async fn main() {
    dotenv().ok();

    let model = common::get_model("openai", "gpt-4o");

    let response = model
        .generate(LanguageModelInput {
            messages: vec![
                Message::user(vec![Part::text("Tell me a story.")]),
                Message::assistant(vec![Part::text(
                    "Sure! What kind of story would you like to hear?",
                )]),
                Message::user(vec![Part::text("a fairy tale")]),
            ],
            ..Default::default()
        })
        .await
        .unwrap();

    println!("{response:#?}");
}

Examples

Runnable examples live in the examples folder. Run one with:

cargo run --example generate-text

Migration

To 0.2.0

  • image_data and audio_data have been renamed to just data in ImagePart and AudioPart.
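As a hedged sketch of the rename (the surrounding fields and construction syntax here are illustrative, not the crate's exact definitions):

```rust
// Before 0.2.0 (illustrative fields):
// ImagePart { image_data: bytes, mime_type: "image/png".into() }
// AudioPart { audio_data: bytes, /* ... */ }

// From 0.2.0, the payload field is named `data` in both parts:
// ImagePart { data: bytes, mime_type: "image/png".into() }
// AudioPart { data: bytes, /* ... */ }
```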

License

MIT