Client-side sampling support for handling server-initiated requests
This module provides production-grade LLM backend integration for processing sampling requests from servers, enabling bidirectional LLM interactions in the MCP protocol.
§Features
- Multi-provider support: OpenAI, Anthropic, and an extensible architecture for additional backends
- MCP protocol compliance: Full CreateMessageRequest → CreateMessageResult flow
- Production-grade error handling: Comprehensive error types and recovery
- Conversation context: Proper message history management
- Configuration: Flexible backend selection and parameter tuning
- Async-first: Send + Sync throughout with proper async patterns
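Because handlers are Send + Sync, one instance can serve sampling requests from several concurrent tasks. A minimal sketch of sharing a handler, assuming a tokio runtime (the crate does not mandate one) and a config built as in the example below:

use std::sync::Arc;
use turbomcp_client::sampling::ProductionSamplingHandler;

let handler = Arc::new(ProductionSamplingHandler::new(config)?);
for _ in 0..4 {
    // Cloning the Arc is cheap; every task shares the same Send + Sync handler.
    let handler = Arc::clone(&handler);
    tokio::spawn(async move {
        // ... answer server-initiated sampling requests through `handler` ...
        let _ = handler;
    });
}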
§Example
use turbomcp_client::sampling::{LLMBackendConfig, LLMProvider, ProductionSamplingHandler};

let config = LLMBackendConfig {
    provider: LLMProvider::OpenAI {
        api_key: std::env::var("OPENAI_API_KEY").unwrap(),
        base_url: None,
        organization: None,
    },
    default_model: Some("gpt-4".to_string()),
    timeout_seconds: 30,
    max_retries: 3,
};

let handler = ProductionSamplingHandler::new(config)?;
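Construction can fail, as the ? above implies. A sketch of handling that error explicitly, assuming only that new returns a Result whose error type is the module's LLMBackendError (which, like most Rust error enums, is assumed to implement Display):

use turbomcp_client::sampling::ProductionSamplingHandler;

match ProductionSamplingHandler::new(config) {
    Ok(handler) => {
        // Register the handler with the client so it can answer sampling requests.
        let _ = handler;
    }
    Err(err) => {
        // The concrete LLMBackendError variants are documented on its own page;
        // only the error's Display output is used here.
        eprintln!("failed to initialise the sampling backend: {err}");
    }
}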
§Structs
- DefaultSamplingHandler - Default sampling handler - provides echo functionality for testing/development
- LLMBackendConfig - Comprehensive backend configuration
- MockLLMHandler - Mock LLM handler for testing
- ProductionSamplingHandler - Production-grade sampling handler with real LLM backend integration
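For local development or unit tests, the echo-style DefaultSamplingHandler or the MockLLMHandler can stand in for a real backend. A rough sketch, assuming both expose a no-argument constructor (the actual constructors are documented on each struct's page):

use turbomcp_client::sampling::{DefaultSamplingHandler, MockLLMHandler};

// Constructors assumed for illustration; no API key or network access is needed.
let echo_handler = DefaultSamplingHandler::new(); // echoes requests back
let mock_handler = MockLLMHandler::new();         // mock backend for tests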
§Enums
- LLMBackendError - Comprehensive error types for LLM backend operations
- LLMProvider - Supported LLM providers
§Traits
- SamplingHandler - Handler for server-initiated sampling requests
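SamplingHandler is the extension point behind the extensible architecture noted above: implement it to plug in a backend the crate does not ship. The exact method name, signature, and request/result types live on the trait's page; the shape below is an assumption for illustration only:

use async_trait::async_trait; // async-trait crate assumed; the real trait may differ
use turbomcp_client::sampling::{LLMBackendError, SamplingHandler};
// CreateMessageRequest/CreateMessageResult are MCP protocol types; their import path is assumed.

struct MyCustomBackend;

// Assumed shape, not the crate's confirmed API: the real SamplingHandler trait
// defines the actual method and the CreateMessageRequest/CreateMessageResult types.
#[async_trait]
impl SamplingHandler for MyCustomBackend {
    async fn handle_create_message(
        &self,
        request: CreateMessageRequest,
    ) -> Result<CreateMessageResult, LLMBackendError> {
        // Forward the request to a custom LLM and map its reply into a CreateMessageResult.
        todo!("call the custom backend")
    }
}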