Enhanced LLM Integration System
This module provides a comprehensive LLM integration system that builds on the existing SamplingHandler foundation and adds:
- Provider abstraction: Generic LLMProvider trait for multi-provider support (see the sketch after this list)
- Token management: Token counting and context window management
- Session management: Conversation tracking with history and metadata
- Streaming support: Infrastructure for streaming responses
- Smart routing: Intelligent provider selection based on request type
- Registry system: Centralized management of multiple LLM providers
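As a rough sketch of the provider abstraction, a custom backend can implement the LLMProvider trait and be registered alongside the built-in providers. The `async_trait` attribute, the `complete` method, and its signature are assumptions for illustration; the authoritative trait definition lives in the core module.

```rust
use async_trait::async_trait;
use turbomcp_client::llm::{LLMProvider, LLMRequest, LLMResponse, LLMResult};

/// A hypothetical provider backed by a self-hosted endpoint.
struct SelfHostedProvider {
    base_url: String,
}

#[async_trait]
impl LLMProvider for SelfHostedProvider {
    // NOTE: `complete` and its signature are assumed for illustration;
    // consult `core::LLMProvider` for the trait's actual methods.
    async fn complete(&self, request: LLMRequest) -> LLMResult<LLMResponse> {
        // Translate `request` into an HTTP call against `self.base_url`
        // and map the reply back into an `LLMResponse`.
        let _ = (&self.base_url, &request);
        todo!("forward to the self-hosted endpoint")
    }
}
```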
§Architecture
```text
LLMRegistry
├── LLMProvider (OpenAI, Anthropic, Custom)
├── SessionManager
│   ├── ConversationSession
│   └── ContextStrategy
├── TokenCounter
└── RequestRouter
```
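To make the tree concrete, the sketch below drives a SessionManager and a TokenCounter side by side. Every constructor and method name in it (`new`, `create_session`, `count`) is an assumption for illustration; the real API lives in the session and tokens modules.

```rust
use turbomcp_client::llm::{SessionConfig, SessionManager, TokenCounter};

// Hypothetical composition of the components in the tree above; the
// method names here are assumptions, not the crate's verified API.
fn sketch() {
    let manager = SessionManager::new(SessionConfig::default());
    let _session = manager.create_session("demo"); // history + metadata live here

    let counter = TokenCounter::new();
    let estimate = counter.count("Hello, world!"); // budget check before sending
    println!("~{estimate} tokens");
}
```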
§Usage
```rust
use std::sync::Arc;
use turbomcp_client::llm::{LLMProviderConfig, LLMRegistry, OpenAIProvider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut registry = LLMRegistry::new();

    // Register a provider
    let openai = Arc::new(OpenAIProvider::new(LLMProviderConfig {
        api_key: std::env::var("OPENAI_API_KEY")?,
        model: "gpt-4".to_string(),
        ..Default::default()
    })?);
    registry.register_provider("openai", openai).await?;

    // Set it as the default provider
    registry.set_default_provider("openai")?;

    // List available providers
    let providers = registry.list_providers();
    println!("Available providers: {:?}", providers);

    Ok(())
}
```
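Once a default provider is set, requests can be dispatched through the registry, which is also where the smart-routing layer hooks in. `LLMRequest::default()` and the `complete` method below are assumed names for illustration; see the registry and routing modules for the actual entry points.

```rust
use turbomcp_client::llm::{LLMRegistry, LLMRequest, LLMResponse};

// Hypothetical dispatch through the registry's default (or routed) provider.
async fn ask(registry: &LLMRegistry) -> Result<LLMResponse, Box<dyn std::error::Error>> {
    let request = LLMRequest::default(); // fill in prompt/messages per `core::LLMRequest`
    let response = registry.complete(request).await?; // provider chosen per RoutingStrategy
    Ok(response)
}
```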
§Re-exports
pub use core::LLMCapabilities;
pub use core::LLMError;
pub use core::LLMProvider;
pub use core::LLMProviderConfig;
pub use core::LLMRequest;
pub use core::LLMResponse;
pub use core::LLMResult;
pub use core::ModelInfo;
pub use providers::AnthropicProvider;
pub use providers::OllamaProvider;
pub use providers::OpenAIProvider;
pub use session::ContextStrategy;
pub use session::ConversationSession;
pub use session::SessionConfig;
pub use session::SessionManager;
pub use session::SessionMetadata;
pub use tokens::ContextWindow;
pub use tokens::TokenCounter;
pub use tokens::TokenUsage;
pub use registry::LLMRegistry;
pub use registry::ProviderInfo;
pub use registry::RegistryConfig;
pub use streaming::StreamChunk;
pub use streaming::StreamingHandler;
pub use streaming::StreamingResponse;
pub use routing::RequestRouter;
pub use routing::RouteRule;
pub use routing::RoutingStrategy;
§Modules
- core: Core LLM abstractions and types
- providers: LLM provider implementations
- registry: LLM Registry for managing multiple providers
- routing: Request routing and provider selection
- session: Session and conversation management
- streaming: Streaming response support infrastructure
- tokens: Token counting and context management utilities