§Language Models and Conversation Management
This module provides everything you need to work with language models in a provider-agnostic way. Build chat applications, generate structured output, and integrate tools without being tied to any specific AI service.
§Core Components
- LanguageModel - The main trait for text generation and conversation
- TextStream - Unified streaming interface for text responses with dual Stream/Future support
- Request - Encapsulates messages, tools, and parameters for model calls
- Message - Represents individual messages in a conversation
- Tool - Function calling interface for extending model capabilities
§Quick Start
§Basic Conversation
use ai_types::llm::{LanguageModel, Request, Message};
use futures_lite::StreamExt;

async fn chat_with_model(model: impl LanguageModel) -> Result<String, Box<dyn std::error::Error>> {
    // Create a simple conversation
    let request = Request::oneshot(
        "You are a helpful assistant",
        "What's the capital of Japan?"
    );

    // Stream the response
    let mut response = model.respond(request);
    let mut full_text = String::new();

    while let Some(chunk) = response.next().await {
        full_text.push_str(&chunk?);
    }

    Ok(full_text)
}
§Multi-turn Conversation
use ai_types::llm::{Request, Message};

let messages = [
    Message::system("You are a helpful coding assistant"),
    Message::user("How do I create a vector in Rust?"),
    Message::assistant("You can create a vector using `Vec::new()` or the `vec!` macro..."),
    Message::user("Can you show me an example?"),
];

let request = Request::new(messages);
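Once built, the request is sent like any other. A minimal sketch (the hypothetical `continue_conversation` helper just reuses `respond` and the IntoFuture behavior covered under Advanced Features):

use ai_types::llm::{LanguageModel, Request, Message};

async fn continue_conversation(model: impl LanguageModel) -> ai_types::Result {
    let messages = [
        Message::system("You are a helpful coding assistant"),
        Message::user("Can you show me an example?"),
    ];
    let request = Request::new(messages);

    // `respond` returns a TextStream, which also implements IntoFuture,
    // so the complete reply can be awaited in one step.
    let reply = model.respond(request).await?;
    Ok(reply)
}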
§Structured Output Generation
use ai_types::llm::{LanguageModel, Request, Message};
use serde::{Deserialize, Serialize};
use schemars::JsonSchema;

#[derive(JsonSchema, Deserialize, Serialize)]
struct WeatherResponse {
    temperature: f32,
    condition: String,
    humidity: i32,
}

async fn get_weather_data(model: impl LanguageModel) -> ai_types::Result<WeatherResponse> {
    let request = Request::oneshot(
        "Extract weather information from the following text",
        "It's 22°C and sunny with 65% humidity today"
    );

    model.generate::<WeatherResponse>(request).await
}
§Function Calling with Tools
use ai_types::llm::{Request, Message, Tool};
use schemars::JsonSchema;
use serde::Deserialize;

#[derive(JsonSchema, Deserialize)]
struct CalculatorArgs {
    operation: String, // "add", "subtract", "multiply", "divide"
    x: f64,
    y: f64,
}

struct Calculator;

impl Tool for Calculator {
    const NAME: &str = "calculator";
    const DESCRIPTION: &str = "Performs basic arithmetic operations";

    type Arguments = CalculatorArgs;

    async fn call(&mut self, args: Self::Arguments) -> ai_types::Result {
        let result = match args.operation.as_str() {
            "add" => args.x + args.y,
            "subtract" => args.x - args.y,
            "multiply" => args.x * args.y,
            "divide" => args.x / args.y,
            _ => return Err(anyhow::anyhow!("Unknown operation")),
        };
        Ok(result.to_string())
    }
}

// Usage
let request = Request::new([
    Message::user("What's 15 multiplied by 23?")
]).with_tool(Calculator);
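Sending a tool-equipped request is no different from a plain one. A sketch, assuming the model drives any tool round-trips internally and the final text arrives through the usual stream:

use ai_types::llm::LanguageModel;

async fn ask_with_calculator(model: impl LanguageModel) -> ai_types::Result {
    let request = Request::new([Message::user("What's 15 multiplied by 23?")])
        .with_tool(Calculator);

    // Await the TextStream directly (via IntoFuture) to get the final answer.
    let answer = model.respond(request).await?;
    Ok(answer)
}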
§Model Configuration
use ai_types::llm::{Request, Message, model::Parameters};

let request = Request::new([
    Message::user("Write a creative story")
]).with_parameters(
    Parameters::default()
        .temperature(0.8)       // More creative
        .top_p(0.9)             // Nucleus sampling
        .frequency_penalty(0.5) // Reduce repetition
);
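Parameter builders compose like any other value, so presets can be factored out and reused. A small sketch using only the setters shown above:

use ai_types::llm::{Request, Message, model::Parameters};

// A more deterministic preset for factual answers.
fn precise() -> Parameters {
    Parameters::default()
        .temperature(0.2)       // Stay close to the most likely tokens
        .top_p(1.0)             // Effectively no nucleus truncation
        .frequency_penalty(0.0) // No repetition penalty
}

let request = Request::new([Message::user("List the planets in order")])
    .with_parameters(precise());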
§Advanced Features
§Working with Text Streams
The TextStream trait provides a unified interface for handling streaming text responses. It implements both Stream<Item = Result<String, Error>> for chunk-by-chunk processing and IntoFuture<Output = Result<String, Error>> for collecting complete responses.
use ai_types::llm::{LanguageModel, TextStream, Request, Message};
use futures_lite::StreamExt;

// Process text as it streams in (useful for real-time display)
async fn stream_chat_response(model: impl LanguageModel) -> ai_types::Result {
    let request = Request::new([Message::user("Tell me a story about robots")]);
    let mut stream = model.respond(request);
    let mut complete_story = String::new();

    while let Some(chunk) = stream.next().await {
        let text = chunk?;
        print!("{}", text); // Display each chunk as it arrives
        complete_story.push_str(&text);
    }

    Ok(complete_story)
}

// Collect the complete response using IntoFuture (simpler for batch processing)
async fn get_complete_response(model: impl LanguageModel) -> ai_types::Result {
    let request = Request::new([Message::user("Explain machine learning")]);
    let stream = model.respond(request);

    // TextStream implements IntoFuture, so you can await it directly
    let explanation = stream.await?;
    Ok(explanation)
}

// Generic function that works with any TextStream implementation.
// `Unpin` is required so `StreamExt::next` can be called on the stream by value.
async fn process_any_stream<S: TextStream + Unpin>(mut stream: S) -> Result<String, S::Error> {
    // Either iterate through chunks...
    let mut result = String::new();
    while let Some(chunk) = stream.next().await {
        result.push_str(&chunk?);
    }
    Ok(result)
    // ...or collect everything at once with `stream.await`
}

// Convert any Stream<Item = Result<String, E>> into a TextStream
use futures_lite::stream;

async fn custom_text_stream() {
    let chunks = vec!["Hello, ", "streaming ", "world!"];
    let chunk_stream = stream::iter(chunks).map(|s| Ok::<String, std::io::Error>(s.to_string()));
    let text_stream = ai_types::llm::stream::text_stream(chunk_stream);

    let complete_text = text_stream.await.unwrap();
    assert_eq!(complete_text, "Hello, streaming world!");
}
§Text Summarization
use ai_types::llm::LanguageModel;
use futures_lite::StreamExt;

async fn summarize_text(model: impl LanguageModel, text: &str) -> Result<String, Box<dyn std::error::Error>> {
    let mut summary_stream = model.summarize(text);
    let mut summary = String::new();

    while let Some(chunk) = summary_stream.next().await {
        summary.push_str(&chunk?);
    }

    Ok(summary)
}
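Because TextStream implements IntoFuture, the same function collapses to a single await. A sketch, assuming `summarize` returns the module's TextStream and the error type converts as in the examples above:

use ai_types::llm::LanguageModel;

async fn summarize_text_simple(model: impl LanguageModel, text: &str) -> ai_types::Result {
    // Awaiting the stream collects the complete summary in one step.
    let summary = model.summarize(text).await?;
    Ok(summary)
}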
§Text Categorization
use ai_types::llm::LanguageModel;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(JsonSchema, Deserialize, Serialize)]
enum DocumentCategory {
    Technical,
    Marketing,
    Legal,
    Support,
    Internal,
}

#[derive(JsonSchema, Deserialize, Serialize)]
struct ClassificationResult {
    category: DocumentCategory,
    confidence: f32,
    reasoning: String,
}

async fn categorize_document(model: impl LanguageModel, text: &str) -> ai_types::Result<ClassificationResult> {
    model.categorize::<ClassificationResult>(text).await
}
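A caller can branch on the returned enum. The `route_document` helper below is purely illustrative and builds on the types defined above:

async fn route_document(model: impl LanguageModel, text: &str) -> ai_types::Result {
    let result = categorize_document(model, text).await?;

    // Route to a queue based on the model's classification.
    let queue = match result.category {
        DocumentCategory::Technical => "engineering",
        DocumentCategory::Marketing => "marketing",
        DocumentCategory::Legal => "legal",
        DocumentCategory::Support => "support",
        DocumentCategory::Internal => "internal",
    };

    Ok(format!("{queue} (confidence {:.2})", result.confidence))
}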
§Message Types and Annotations
Messages support rich content including file attachments and URL annotations:
use ai_types::llm::{Message, UrlAnnotation, Annotation};
use url::Url;

let message = Message::user("Check this documentation")
    .with_attachment("file:///path/to/doc.pdf")
    .with_annotation(Annotation::url(
        "https://docs.rs/ai-types",
        "AI Types Documentation",
        "Rust crate for AI model abstractions",
        0,
        25,
    ));
Re-exports§
pub use stream::TextStream;
pub use message::Annotation;
pub use message::Message;
pub use message::Role;
pub use message::UrlAnnotation;
pub use tool::Tool;
Modules§
- message - Message types for AI language model conversations.
- model - AI language model configuration and profiling types.
- stream - Text streaming utilities and the TextStream trait.
- tool - Tool system for function calling.
Structs§
- Request - A request to a language model.
Traits§
- LanguageModel - Language models for text generation and conversation.
- LanguageModelProvider - Trait for AI service providers that can list and provide language models.