§AI SDK Core
High-level, ergonomic APIs for building applications with large language models.
This crate provides production-ready abstractions over the provider specification layer, offering builder-based APIs, automatic tool execution, structured output generation, and comprehensive error handling.
§Core Features
- Text Generation: generate_text() and stream_text() for chat completion
- Tool Execution: Automatic multi-step tool calling with custom functions
- Embeddings: embed() and embed_many() for semantic vector generation
- Structured Output: generate_object() for schema-validated JSON
- Middleware: Extensible hooks for logging, caching, and custom behavior
- Multi-Provider: Registry system for managing multiple provider configurations
§Example: Text Generation
Generate text using a simple builder pattern with provider-agnostic configuration:
```rust
use ai_sdk_core::generate_text;
use ai_sdk_openai::openai;

let result = generate_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("Explain the fundamentals of quantum computing")
    .temperature(0.7)
    .max_tokens(500)
    .execute()
    .await?;

println!("Response: {}", result.text());
println!("Tokens used: {}", result.usage.total_tokens);
```
§Example: Tool Calling
Implement custom tools that the model can call during generation. The framework handles the execution loop automatically:
```rust
use ai_sdk_core::{generate_text, Tool, ToolContext};
use ai_sdk_openai::openai;
use async_trait::async_trait;
use std::sync::Arc;

struct WeatherTool;

#[async_trait]
impl Tool for WeatherTool {
    fn name(&self) -> &str { "get_weather" }

    fn description(&self) -> &str {
        "Retrieves current weather conditions for a specified location"
    }

    fn input_schema(&self) -> serde_json::Value {
        serde_json::json!({
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name or coordinates"
                }
            },
            "required": ["location"]
        })
    }

    async fn execute(&self, input: serde_json::Value, _ctx: &ToolContext)
        -> Result<serde_json::Value, ai_sdk_core::ToolError>
    {
        let location = input["location"].as_str().unwrap_or("unknown");
        Ok(serde_json::json!({
            "location": location,
            "temperature": 72,
            "conditions": "sunny"
        }))
    }
}

let result = generate_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("What's the weather like in Tokyo?")
    .tools(vec![Arc::new(WeatherTool)])
    .max_steps(5)
    .execute()
    .await?;
```
§Example: Streaming
Process responses incrementally as they arrive for real-time user feedback:
```rust
use ai_sdk_core::{stream_text, TextStreamPart};
use ai_sdk_openai::openai;
use tokio_stream::StreamExt;

let result = stream_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("Write a creative short story about time travel")
    .temperature(0.9)
    .execute()
    .await?;

let mut stream = result.into_stream();
while let Some(part) = stream.next().await {
    match part? {
        TextStreamPart::TextDelta(delta) => print!("{}", delta),
        TextStreamPart::FinishReason(reason) => {
            println!("\nFinished: {:?}", reason);
        }
        _ => {}
    }
}
```
Re-exports§
pub use error::EmbedError;
pub use error::Error;
pub use error::GenerateError;
pub use error::Result;
pub use error::ToolError;
Modules§
- error - Error definitions for the crate.
- generate_object - Generate structured objects from language models, with schema validation.
- middleware - Middleware system for composable language model behavior customization.
- registry - Provider registry system for multi-provider management.
- util - Utility functions for media type detection, file download, and base64 encoding.
Macros§
- impl_builder_core - Macro that generates the core builder pattern implementation.
Structs§
- CallOptions - Configuration options for language model generation requests.
- EmbedBuilder - Builder for single value embedding.
- EmbedManyBuilder - Builder for embedding multiple values.
- EmbedManyResult - Result of embedding multiple values.
- EmbedResult - Result of embedding a single value.
- EmbeddingUsage - Token usage information for embeddings.
- GenerateTextBuilder - Builder for text generation.
- GenerateTextResult - Result of a text generation call.
- RetryPolicy - Retry policy for API calls.
- StepResult - Result of a single step in the generation process.
- StreamTextBuilder - Builder for streaming text generation.
- StreamTextResult - Result of a streaming text generation.
- ToolCallPart - A request from the language model to invoke an external tool or function.
- ToolContext - Context information provided to tools during execution.
- ToolExecutor - Manages the execution of tools invoked by language models.
- ToolResultPart - The result of executing a tool or function in response to a model's tool call.
- Usage - Usage information for a language model call.
Enums§
- Content - Represents a single content element in a language model's response or message.
- FinishReason - Reason why a language model finished generating a response.
- JsonValue - A JSON value can be a string, number, boolean, object, array, or null. JSON values can be serialized and deserialized by serde_json.
- Message - A message in a conversation with a language model.
- TextStreamPart - A part of the text stream.
- ToolOutput - Tool execution output: either a single value or a stream of values.
Traits§
- EmbeddingModel - The core trait for embedding model implementations following the v3 specification.
- LanguageModel - Core trait for language model providers.
- StopCondition - Trait for determining when to stop the tool execution loop.
- Tool - Trait that tools must implement to be available for language model invocation.
Functions§
- embed - Entry point function for embedding a single value.
- embed_many - Entry point function for embedding multiple values.
- generate_text - Generates text using a language model.
- stop_after_steps - Creates a stop condition that stops after a maximum number of steps.
- stop_on_finish - Creates a stop condition that stops when the model returns a non-ToolCalls finish reason.
- stream_text - Streams text from a language model.