Crate ai_sdk_core


§AI SDK Core

High-level, ergonomic APIs for building applications with large language models.

This crate provides production-ready abstractions over the provider specification layer, offering builder-based APIs, automatic tool execution, structured output generation, and comprehensive error handling.

§Core Features

  • Text Generation: generate_text() and stream_text() for chat completion
  • Tool Execution: Automatic multi-step tool calling with custom functions
  • Embeddings: embed() and embed_many() for semantic vector generation
  • Structured Output: generate_object() for schema-validated JSON
  • Middleware: Extensible hooks for logging, caching, and custom behavior
  • Multi-Provider: Registry system for managing multiple provider configurations

§Example: Text Generation

Generate text using a simple builder pattern with provider-agnostic configuration:

use ai_sdk_core::generate_text;
use ai_sdk_openai::openai;

let api_key = std::env::var("OPENAI_API_KEY")?;
let result = generate_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("Explain the fundamentals of quantum computing")
    .temperature(0.7)
    .max_tokens(500)
    .execute()
    .await?;

println!("Response: {}", result.text());
println!("Tokens used: {}", result.usage.total_tokens);

§Example: Tool Calling

Implement custom tools that the model can call during generation. The framework handles the execution loop automatically:

use ai_sdk_core::{generate_text, Tool, ToolContext};
use ai_sdk_openai::openai;
use async_trait::async_trait;
use std::sync::Arc;

struct WeatherTool;

#[async_trait]
impl Tool for WeatherTool {
    fn name(&self) -> &str { "get_weather" }

    fn description(&self) -> &str {
        "Retrieves current weather conditions for a specified location"
    }

    fn input_schema(&self) -> serde_json::Value {
        serde_json::json!({
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name or coordinates"
                }
            },
            "required": ["location"]
        })
    }

    async fn execute(&self, input: serde_json::Value, _ctx: &ToolContext)
        -> Result<serde_json::Value, ai_sdk_core::ToolError> {
        let location = input["location"].as_str().unwrap_or("unknown");
        Ok(serde_json::json!({
            "location": location,
            "temperature": 72,
            "conditions": "sunny"
        }))
    }
}

let api_key = std::env::var("OPENAI_API_KEY")?;
let result = generate_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("What's the weather like in Tokyo?")
    .tools(vec![Arc::new(WeatherTool)])
    .max_steps(5)
    .execute()
    .await?;

§Example: Streaming

Process responses incrementally as they arrive for real-time user feedback:

use ai_sdk_core::{stream_text, TextStreamPart};
use ai_sdk_openai::openai;
use tokio_stream::StreamExt;

let api_key = std::env::var("OPENAI_API_KEY")?;
let result = stream_text()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("Write a creative short story about time travel")
    .temperature(0.9)
    .execute()
    .await?;

let mut stream = result.into_stream();
while let Some(part) = stream.next().await {
    match part? {
        TextStreamPart::TextDelta(delta) => print!("{}", delta),
        TextStreamPart::FinishReason(reason) => {
            println!("\nFinished: {:?}", reason);
        }
        _ => {}
    }
}
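
§Example: Structured Output

The generate_object module (listed below) produces schema-validated JSON. Its exact builder surface is not shown on this page; the sketch below assumes it mirrors generate_text(), with a hypothetical typed result field (result.object) and a hypothetical WeatherReport target type deserialized via serde:

use ai_sdk_core::generate_object::generate_object;
use ai_sdk_openai::openai;
use serde::Deserialize;

// Hypothetical target type; the schema is derived from its fields.
#[derive(Deserialize)]
struct WeatherReport {
    location: String,
    temperature: f64,
    conditions: String,
}

let api_key = std::env::var("OPENAI_API_KEY")?;
let result = generate_object::<WeatherReport>()
    .model(openai("gpt-4").api_key(api_key))
    .prompt("Report the current weather in Tokyo")
    .execute()
    .await?;

// `result.object` is assumed to hold the deserialized value.
println!("{}: {}", result.object.location, result.object.conditions);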

Re-exports§

pub use error::EmbedError;
pub use error::Error;
pub use error::GenerateError;
pub use error::Result;
pub use error::ToolError;

Modules§

error
Error definitions for the crate.
generate_object
Generate structured objects with schema validation from language models.
middleware
Middleware system for composable customization of language model behavior.
registry
Provider registry system for multi-provider management.
util
Utility functions for media type detection, file download, and base64 encoding.

Macros§

impl_builder_core
Macro that generates the core builder pattern implementation.

Structs§

CallOptions
Configuration options for language model generation requests.
EmbedBuilder
Builder for single-value embedding.
EmbedManyBuilder
Builder for embedding multiple values.
EmbedManyResult
Result of embedding multiple values.
EmbedResult
Result of embedding a single value.
EmbeddingUsage
Token usage information for embeddings.
GenerateTextBuilder
Builder for text generation.
GenerateTextResult
Result of a text generation call.
RetryPolicy
Retry policy for API calls.
StepResult
Result of a single step in the generation process.
StreamTextBuilder
Builder for streaming text generation.
StreamTextResult
Result of a streaming text generation.
ToolCallPart
A request from the language model to invoke an external tool or function.
ToolContext
Context information provided to tools during execution.
ToolExecutor
Manages the execution of tools invoked by language models.
ToolResultPart
The result of executing a tool or function in response to a model’s tool call.
Usage
Usage information for a language model call.

Enums§

Content
Represents a single content element in a language model’s response or message.
FinishReason
Reason why a language model finished generating a response.
JsonValue
A JSON value can be a string, number, boolean, object, array, or null. JSON values can be serialized and deserialized by serde_json.
Message
A message in a conversation with a language model.
TextStreamPart
A part of the text stream.
ToolOutput
Tool execution output: either a single value or a stream of values.

Traits§

EmbeddingModel
The core trait for embedding model implementations following the v3 specification.
LanguageModel
Core trait for language model providers.
StopCondition
Trait for determining when to stop the tool execution loop.
Tool
Trait that tools must implement to be available for language model invocation.

Functions§

embed
Entry point for embedding a single value.
embed_many
Entry point for embedding multiple values.
generate_text
Generates text using a language model.
stop_after_steps
Creates a stop condition that stops after a maximum number of steps.
stop_on_finish
Creates a stop condition that stops when the model returns a non-ToolCalls finish reason.
stream_text
Streams text from a language model.