
lmkit


One config. Every major AI provider.

中文 | English

A unified Rust client for OpenAI, Anthropic, Google Gemini, Aliyun, Ollama, and Zhipu — built around a single trait and factory pattern. Switch providers by changing one config. Your business logic stays untouched.

Why use lmkit

  • 🔌 Unified interface — ChatProvider, EmbedProvider and friends abstract away provider differences; your code never talks to raw HTTP
  • 🔀 One-line switching — swap ProviderConfig to move from OpenAI to Aliyun or a local Ollama, zero other changes
  • 📦 Compile only what you need — providers and modalities are Cargo features; unused ones add zero dependencies
  • 🌊 Streaming + tool calls — native SSE streaming; ChatChunk carries both the text delta and tool_call_deltas in one unified type
  • 🔍 Precise errors — ProviderDisabled / Unsupported / Api tell you exactly what went wrong and where

Quick Start

Add the dependency

[dependencies]
lmkit = { version = "0.1", features = ["openai", "chat", "embed"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
futures = "0.3" # StreamExt, used in the streaming example below

The defaults already include openai + chat + embed. Mix and match features as needed:

# Aliyun + multi-turn chat + embeddings + reranking
lmkit = { version = "0.1", features = ["aliyun", "chat", "embed", "rerank"] }
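
Feature-gating is what keeps unused providers and modalities out of the binary: code behind `#[cfg(feature = "...")]` is simply not compiled unless the feature is enabled. A self-contained sketch of the mechanism (feature names mirror lmkit's; the functions here are illustrative):

```rust
// Code behind #[cfg(feature = "rerank")] only exists in builds that
// enable that Cargo feature; otherwise it contributes zero code and
// zero dependencies.
#[cfg(feature = "rerank")]
fn rerank(_docs: &[&str]) -> Vec<usize> {
    unimplemented!("only compiled with --features rerank")
}

fn rerank_available() -> bool {
    // cfg! evaluates to a compile-time boolean instead of removing code.
    cfg!(feature = "rerank")
}

fn main() {
    println!("rerank compiled in: {}", rerank_available());
}
```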

Send a message

use lmkit::{create_chat_provider, ChatRequest, Provider, ProviderConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cfg = ProviderConfig::new(
        Provider::OpenAI,
        std::env::var("OPENAI_API_KEY")?,
        "https://api.openai.com/v1",
        "gpt-4o-mini",
    );

    let chat = create_chat_provider(&cfg)?;
    let out = chat
        .complete(&ChatRequest::single_user("Explain Rust in one sentence."))
        .await?;
    println!("{}", out.content.unwrap_or_default());
    Ok(())
}

Stream the response

use futures::StreamExt;
use lmkit::{create_chat_provider, ChatRequest, Provider, ProviderConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cfg = ProviderConfig::new(
        Provider::OpenAI,
        std::env::var("OPENAI_API_KEY")?,
        "https://api.openai.com/v1",
        "gpt-4o-mini",
    );

    let chat = create_chat_provider(&cfg)?;
    let mut stream = chat
        .complete_stream(&ChatRequest::single_user("Tell me a joke."))
        .await?;

    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        if let Some(text) = chunk.delta {
            print!("{text}");
        }
    }
    println!();
    Ok(())
}
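
Each chunk carries an optional text delta and any tool-call deltas in one type, so accumulating a full response is a simple fold. A self-contained sketch — the ChatChunk shape here is assumed from the description above, not lmkit's exact definition:

```rust
// Accumulating streamed chunks into a final message. The struct shape
// is assumed for illustration (text delta + tool-call deltas in one
// type); lmkit's real ChatChunk may differ.
#[derive(Default)]
struct ChatChunk {
    delta: Option<String>,
    tool_call_deltas: Vec<String>, // simplified: raw fragments
}

fn accumulate(chunks: impl IntoIterator<Item = ChatChunk>) -> (String, Vec<String>) {
    let mut text = String::new();
    let mut tools = Vec::new();
    for c in chunks {
        if let Some(d) = c.delta {
            text.push_str(&d);
        }
        tools.extend(c.tool_call_deltas);
    }
    (text, tools)
}

fn main() {
    let chunks = vec![
        ChatChunk { delta: Some("Hel".into()), ..Default::default() },
        ChatChunk { delta: Some("lo".into()), ..Default::default() },
    ];
    let (text, _tools) = accumulate(chunks);
    println!("{text}");
}
```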

Switch providers

Change Provider::OpenAI to your target, update base_url and the API key — everything else stays the same:

// Aliyun Qwen
let cfg = ProviderConfig::new(
    Provider::Aliyun,
    std::env::var("DASHSCOPE_API_KEY")?,
    "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "qwen-turbo",
);

// Local Ollama (no key required)
let cfg = ProviderConfig::new(
    Provider::Ollama,
    String::new(),
    "http://127.0.0.1:11434/v1",
    "llama3",
);
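
Since only the config differs, provider selection can be driven by an environment variable or CLI flag. A hypothetical helper (the function, env var name, and defaults are illustrative, not part of lmkit):

```rust
// Hypothetical: map a provider name to (base_url, default_model), so
// the rest of the program never branches on the provider. Endpoints
// and model names are taken from the snippets above.
fn provider_defaults(name: &str) -> Option<(&'static str, &'static str)> {
    match name {
        "openai" => Some(("https://api.openai.com/v1", "gpt-4o-mini")),
        "aliyun" => Some(("https://dashscope.aliyuncs.com/compatible-mode/v1", "qwen-turbo")),
        "ollama" => Some(("http://127.0.0.1:11434/v1", "llama3")),
        _ => None,
    }
}

fn main() {
    // LMKIT_PROVIDER is an illustrative env var, not read by lmkit itself.
    let name = std::env::var("LMKIT_PROVIDER").unwrap_or_else(|_| "openai".into());
    if let Some((base_url, model)) = provider_defaults(&name) {
        println!("{name}: {base_url} / {model}");
    }
}
```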

Provider & Capability Matrix

| Provider         | Chat | Embed | Rerank | Image |
|------------------|------|-------|--------|-------|
| OpenAI           | ✅   | ✅    | —      | ✅    |
| Anthropic        | ✅   | —     | —      | —     |
| Google Gemini    | ✅   | ✅    | —      | —     |
| Aliyun DashScope | ✅   | ✅    | ✅     | ✅    |
| Ollama           | ✅   | ✅    | —      | —     |
| Zhipu            | ✅   | ✅    | ✅     | —     |

The primary chat API is complete (non-streaming) and complete_stream (SSE). chat / chat_stream are single-turn convenience wrappers.
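
The single-turn wrappers can be pictured as default trait methods that build a request and delegate to complete. A miniature, synchronous sketch — lmkit's real trait is async and its signatures differ:

```rust
// Miniature of the "convenience wrapper over complete" pattern.
// lmkit's real ChatProvider is async; this sync sketch only shows how
// chat() can be a default method delegating to complete().
struct ChatRequest {
    messages: Vec<String>,
}

impl ChatRequest {
    fn single_user(text: &str) -> Self {
        Self { messages: vec![text.to_string()] }
    }
}

trait ChatProvider {
    fn complete(&self, req: &ChatRequest) -> String;

    // Single-turn convenience: build the request, then delegate.
    fn chat(&self, text: &str) -> String {
        self.complete(&ChatRequest::single_user(text))
    }
}

struct Echo;
impl ChatProvider for Echo {
    fn complete(&self, req: &ChatRequest) -> String {
        req.messages.join(" ")
    }
}

fn main() {
    println!("{}", Echo.chat("hello"));
}
```

Implementors only provide complete; every provider gets chat for free, which is why the wrappers behave identically across providers.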

Documentation

  • 📖 Usage Guide — getting started, features, provider config, error handling
  • 🔧 API Reference — Rust traits, factory functions, type definitions
  • 🌐 HTTP Endpoints — per-provider request / response shapes
  • 🏗️ Design Guidelines — architecture and extension principles
  • 🤝 Contributing — how to add providers or modalities

License

MIT