Crate ai_types

Β§ai-types

Write AI applications that work with any provider πŸš€

This crate provides unified trait abstractions for AI models, letting you write code once and switch between providers (OpenAI, Anthropic, local models, etc.) without changing your application logic.

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   Your App      │───▢│    ai-types      │◀───│   Providers     β”‚
β”‚                 β”‚    β”‚   (this crate)   β”‚    β”‚                 β”‚
β”‚ - Chat bots     β”‚    β”‚                  β”‚    β”‚ - openai        β”‚
β”‚ - Search        β”‚    β”‚ - LanguageModel  β”‚    β”‚ - anthropic     β”‚
β”‚ - Content gen   β”‚    β”‚ - EmbeddingModel β”‚    β”‚ - llama.cpp     β”‚
β”‚ - Voice apps    β”‚    β”‚ - ImageGenerator β”‚    β”‚ - whisper       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Β§Supported AI Capabilities

| Capability         | Trait            | Description                                        |
|--------------------|------------------|----------------------------------------------------|
| Language Models    | LanguageModel    | Text generation, conversations, structured output  |
| Text Streaming     | TextStream       | Unified interface for streaming text responses     |
| Embeddings         | EmbeddingModel   | Convert text to vectors for semantic search        |
| Image Generation   | ImageGenerator   | Create images with progressive quality improvement |
| Text-to-Speech     | AudioGenerator   | Generate speech audio from text                    |
| Speech-to-Text     | AudioTranscriber | Transcribe audio to text                           |
| Content Moderation | Moderation       | Detect policy violations with confidence scores    |

Β§Examples

Β§Basic Chat Bot

use ai_types::{LanguageModel, llm::{Message, Request}};

async fn chat_example(model: impl LanguageModel) -> ai_types::Result {
    let messages = [
        Message::system("You are a helpful assistant"),
        Message::user("What's the capital of France?")
    ];

    let request = Request::new(messages);

    // Await the complete response from the model.
    Ok(model.respond(request).await?)
}
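
The same request can also be consumed incrementally. The sketch below is a minimal illustration, assuming the value returned by respond can be iterated as a futures_lite stream whose items are Result-wrapped text chunks; see the TextStream trait for the exact interface, and treat the stream_example name and item type as assumptions.

use ai_types::{LanguageModel, llm::{Message, Request}};
use futures_lite::StreamExt;

// Minimal streaming sketch. Assumption: the response stream yields
// `Result`-wrapped text chunks; the actual `TextStream` item type may differ.
async fn stream_example(model: impl LanguageModel) -> ai_types::Result<()> {
    let request = Request::new([Message::user("Tell me a short story")]);
    let mut stream = model.respond(request);

    // Print each chunk as soon as it arrives instead of waiting for the full reply.
    while let Some(chunk) = stream.next().await {
        print!("{}", chunk?);
    }

    Ok(())
}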

Β§Structured Output with Tools

use ai_types::{LanguageModel, llm::{Message, Request, Tool}};
use serde::{Deserialize, Serialize};
use schemars::JsonSchema;

#[derive(JsonSchema, Deserialize, Serialize)]
struct WeatherQuery {
    location: String,
    units: Option<String>,
}

struct WeatherTool;

impl Tool for WeatherTool {
    const NAME: &str = "get_weather";
    const DESCRIPTION: &str = "Get current weather for a location";
    type Arguments = WeatherQuery;
     
    async fn call(&mut self, args: Self::Arguments) -> ai_types::Result {
        Ok(format!("Weather in {}: 22Β°C, sunny", args.location))
    }
}

async fn weather_bot(model: impl LanguageModel) -> ai_types::Result {
    let request = Request::new(vec![
        Message::user("What's the weather like in Tokyo?")
    ]).with_tool(WeatherTool);
     
    // Model can now call the weather tool automatically
    let response: String = model.generate(request).await?;
    Ok(response)
}

See llm::tool for more details on using tools with language models.

Β§Semantic Search with Embeddings

use ai_types::EmbeddingModel;

async fn find_similar_docs(
    model: impl EmbeddingModel,
    query: &str,
    documents: &[&str]
) -> ai_types::Result<Vec<f32>> {
    // Convert query to vector
    let query_embedding = model.embed(query).await?;
     
    // In a real app, you'd compare with document embeddings
    // and find the most similar ones using cosine similarity
     
    Ok(query_embedding)
}
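
The comparison step itself needs no crate-specific API. The sketch below is plain Rust: cosine_similarity and rank_documents are illustrative names, the document embeddings are assumed to have been computed and stored ahead of time, and the only crate call is the same embed method used above.

use ai_types::EmbeddingModel;

// Plain cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 { 0.0 } else { dot / (norm_a * norm_b) }
}

// Illustrative ranking helper: embeds the query once and scores it against
// precomputed document embeddings, returning (index, score) pairs.
async fn rank_documents(
    model: impl EmbeddingModel,
    query: &str,
    document_embeddings: &[Vec<f32>],
) -> ai_types::Result<Vec<(usize, f32)>> {
    let query_embedding = model.embed(query).await?;

    let mut scored: Vec<(usize, f32)> = document_embeddings
        .iter()
        .enumerate()
        .map(|(index, doc)| (index, cosine_similarity(&query_embedding, doc)))
        .collect();

    // Highest similarity first.
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    Ok(scored)
}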

Β§Progressive Image Generation

use ai_types::{ImageGenerator, image::{Prompt, Size}};
use futures_lite::StreamExt;

async fn generate_image(generator: impl ImageGenerator) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let prompt = Prompt::new("A beautiful sunset over mountains");
    let size = Size::square(1024);
     
    let mut image_stream = generator.create(prompt, size);
    let mut final_image = Vec::new();
     
    // Each iteration gives us a complete image with progressively better quality
    while let Some(image_result) = image_stream.next().await {
        let current_image = image_result?;
        final_image = current_image; // Keep the latest (highest quality) version
         
        // Optional: Display preview of current quality level
        println!("Received image update, {} bytes", final_image.len());
    }
     
    Ok(final_image) // Return the final highest-quality image
}
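
A typical follow-up is simply persisting the final buffer. The snippet below is an illustrative sketch that reuses generate_image from above; it assumes the returned bytes form a complete encoded image, and the file name and format are placeholders.

use ai_types::ImageGenerator;

// Illustrative usage of `generate_image`: write the final bytes to disk.
// Assumption: the provider returns a fully encoded image (format may vary).
async fn save_sunset(generator: impl ImageGenerator) -> Result<(), Box<dyn std::error::Error>> {
    let image_bytes = generate_image(generator).await?;
    std::fs::write("sunset.png", image_bytes)?;
    Ok(())
}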

ModulesΒ§

audio
Audio generation and transcription.
embedding
Text embeddings.
image
Text-to-image generation.
llm
Language models and conversation management.
moderation
Content moderation utilities.

StructsΒ§

Error
The Error type, a wrapper around a dynamic error type.

TraitsΒ§

AudioGenerator
Generates audio from text prompts.
AudioTranscriber
Transcribes audio to text.
EmbeddingModel
Converts text to vector representations.
ImageGenerator
Trait for generating and editing images from prompts and masks.
LanguageModel
Language models for text generation and conversation.
Moderation
Trait for content moderation services.
TextStream
A trait for streaming text responses from language models.

Type AliasesΒ§

Result
Result type used throughout the crate.

Attribute MacrosΒ§

tool
Converts an async function into an AI tool that can be called by language models.