
Crate llm_stack


§llm-stack

Provider-agnostic types and traits for interacting with large language models.

This crate defines the shared vocabulary that every LLM provider implementation speaks: messages, responses, tool calls, streaming events, usage tracking, and errors. It intentionally contains zero provider-specific code — concrete providers live in sibling crates and implement Provider (or its object-safe counterpart DynProvider).

§Provider Crates

Official provider implementations:

| Crate | Provider | Features |
|---|---|---|
| llm-stack-anthropic | Claude (Anthropic) | Streaming, tools, vision, caching |
| llm-stack-openai | GPT (OpenAI) | Streaming, tools, structured output |
| llm-stack-ollama | Ollama (local) | Streaming, tools |

§Architecture

 ┌─────────────────────┐ ┌───────────────────┐ ┌───────────────────┐
 │ llm-stack-anthropic │ │  llm-stack-openai │ │  llm-stack-ollama │
 └──────────┬──────────┘ └─────────┬─────────┘ └─────────┬─────────┘
            │                      │                     │
            └───────────┬──────────┴──────────┬──────────┘
                        │                     │
                        ▼                     ▼
             ┌─────────────────────────────────────┐
             │             llm-stack               │  ← you are here
             │  (Provider trait, ChatParams, etc.) │
             └─────────────────────────────────────┘
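The split in the diagram means application code is written once against the core trait and pointed at any provider crate. A minimal, self-contained sketch of the pattern follows; note that `Provider` here is a simplified stand-in (the real llm-stack trait is async and works with `ChatParams`/`ChatResponse`, not plain strings), and the two stub provider types are illustrative assumptions, not the sibling crates' actual types.

```rust
// Sketch of the provider-agnostic pattern the diagram describes.
// The real llm-stack `Provider` trait is async and richer; this
// string-in/string-out version only illustrates the dispatch shape.
trait Provider {
    fn generate(&self, prompt: &str) -> String;
}

// Stand-ins for two sibling provider crates.
struct AnthropicLike;
struct OllamaLike;

impl Provider for AnthropicLike {
    fn generate(&self, prompt: &str) -> String {
        format!("anthropic: {prompt}")
    }
}

impl Provider for OllamaLike {
    fn generate(&self, prompt: &str) -> String {
        format!("ollama: {prompt}")
    }
}

// Application code is generic over any provider (static dispatch)...
fn summarize<P: Provider>(provider: &P, text: &str) -> String {
    provider.generate(text)
}

// ...or takes a trait object when the provider is chosen at runtime,
// which is the role DynProvider plays in the real crate.
fn summarize_dyn(provider: &dyn Provider, text: &str) -> String {
    provider.generate(text)
}

fn main() {
    assert_eq!(summarize(&AnthropicLike, "hi"), "anthropic: hi");
    let boxed: Box<dyn Provider> = Box::new(OllamaLike);
    assert_eq!(summarize_dyn(boxed.as_ref(), "hi"), "ollama: hi");
}
```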

§Quick start

use llm_stack::{ChatMessage, ChatParams, Provider};

let params = ChatParams {
    messages: vec![ChatMessage::user("Explain ownership in Rust")],
    max_tokens: Some(1024),
    ..Default::default()
};

// `provider` is any value implementing `Provider`, constructed from one
// of the provider crates; `generate` is async, so this runs in an async
// context and `?` propagates the crate's unified `LlmError`.
let response = provider.generate(&params).await?;
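For streamed responses, the crate exposes `ChatStream` and `StreamEvent` (see the stream module). The following self-contained sketch shows the consumption pattern only: the stub `StreamEvent` enum, its variant names, and the synchronous iterator are assumptions standing in for the crate's actual async stream of events.

```rust
// Sketch of consuming a streamed response. The real StreamEvent variants
// live in llm_stack::stream and arrive over an async ChatStream; this
// stub enum and synchronous loop are simplifications for illustration.
enum StreamEvent {
    TextDelta(String), // assumed variant name for an incremental text chunk
    Done,              // assumed terminal event
}

// Accumulate text deltas until the stream signals completion.
fn collect_text(events: impl IntoIterator<Item = StreamEvent>) -> String {
    let mut out = String::new();
    for event in events {
        match event {
            StreamEvent::TextDelta(chunk) => out.push_str(&chunk),
            StreamEvent::Done => break,
        }
    }
    out
}

fn main() {
    let events = vec![
        StreamEvent::TextDelta("Owner".into()),
        StreamEvent::TextDelta("ship".into()),
        StreamEvent::Done,
    ];
    assert_eq!(collect_text(events), "Ownership");
}
```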

§Modules

| Module | Purpose |
|---|---|
| chat | Messages, content blocks, tool calls, and responses |
| context | Token-budgeted conversation history management |
| error | Unified LlmError across all providers |
| intercept | Unified interceptor system for LLM calls and tool executions |
| provider | The Provider trait and request parameters |
| registry | Dynamic provider instantiation from configuration |
| stream | Server-sent event types and the ChatStream alias |
| structured | Typed LLM responses with schema validation (feature-gated) |
| tool | Tool execution engine with registry and approval hooks |
| usage | Token counts and cost tracking |
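The tool module's registry-plus-dispatch design can be sketched in miniature. The real `ToolRegistry`/`ToolHandler` take structured JSON arguments and support approval hooks; the string-based `Registry` below, including its type and method names, is an illustrative assumption showing only the name-to-handler lookup pattern.

```rust
use std::collections::HashMap;

// Sketch of the registry pattern behind the `tool` module: handlers are
// registered by name and dispatched when the model emits a tool call.
// This string-in/string-out version is a simplification of the real
// ToolRegistry, which handles JSON arguments and approval hooks.
type Handler = Box<dyn Fn(&str) -> String>;

#[derive(Default)]
struct Registry {
    tools: HashMap<String, Handler>,
}

impl Registry {
    fn register(&mut self, name: &str, handler: Handler) {
        self.tools.insert(name.to_string(), handler);
    }

    // Returns None when the model asks for a tool that was never registered.
    fn dispatch(&self, name: &str, args: &str) -> Option<String> {
        self.tools.get(name).map(|handler| handler(args))
    }
}

fn main() {
    let mut registry = Registry::default();
    registry.register("upper", Box::new(|s| s.to_uppercase()));
    assert_eq!(registry.dispatch("upper", "hi"), Some("HI".to_string()));
    assert_eq!(registry.dispatch("missing", "hi"), None);
}
```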

Re-exports§

pub use chat::ChatMessage;
pub use chat::ChatResponse;
pub use chat::ContentBlock;
pub use chat::ToolCall;
pub use chat::ToolResult;
pub use error::LlmError;
pub use provider::ChatParams;
pub use provider::DynProvider;
pub use provider::JsonSchema;
pub use provider::Provider;
pub use provider::ToolChoice;
pub use provider::ToolDefinition;
pub use registry::ProviderRegistry;
pub use stream::ChatStream;
pub use stream::StreamEvent;
pub use tool::ToolHandler;
pub use tool::ToolLoopConfig;
pub use tool::ToolRegistry;
pub use usage::Usage;

Modules§

chat
Conversation primitives: messages, content blocks, and responses.
context
Context window management with token budgeting.
error
Unified error type for all LLM operations.
intercept
Unified interceptor system for LLM calls and tool executions.
mcp
MCP (Model Context Protocol) integration for ToolRegistry.
mock
Mock provider for testing.
provider
Provider trait and request types.
registry
Dynamic provider registry for configuration-driven provider instantiation.
stream
Streaming response types.
structured
Structured output — typed LLM responses with schema validation.
test_helpers
Pre-built helpers for testing code that uses llm-stack types.
tool
Tool execution engine.
usage
Token usage and cost tracking.