Module sampling

Client-side sampling support for handling server-initiated requests

This module provides production-grade LLM backend integration for processing sampling requests from servers, enabling bidirectional LLM interactions in the MCP protocol.

§Features

  • Multi-provider support: OpenAI and Anthropic, with an extensible architecture for additional providers
  • MCP protocol compliance: Full CreateMessageRequest → CreateMessageResult flow
  • Production-grade error handling: Comprehensive error types and recovery
  • Conversation context: Proper message history management
  • Configuration: Flexible backend selection and parameter tuning
  • Async-first: Send + Sync throughout with proper async patterns

§Example

use turbomcp_client::sampling::{LLMBackendConfig, LLMProvider, ProductionSamplingHandler};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Configure an OpenAI-backed handler; the API key is read from the environment.
    let config = LLMBackendConfig {
        provider: LLMProvider::OpenAI {
            api_key: std::env::var("OPENAI_API_KEY")?,
            base_url: None,
            organization: None,
        },
        default_model: Some("gpt-4".to_string()),
        timeout_seconds: 30,
        max_retries: 3,
    };

    let handler = ProductionSamplingHandler::new(config)?;
    // `handler` is now ready to serve server-initiated sampling requests.
    Ok(())
}
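
Construction is fallible, so callers that cannot simply propagate the error may prefer to match on the result. The sketch below assumes only what the example above already shows: ProductionSamplingHandler::new returns a Result, and its error type (LLMBackendError) can be printed; how to recover is left to the application.

use turbomcp_client::sampling::{LLMBackendConfig, ProductionSamplingHandler};

fn try_build(config: LLMBackendConfig) -> Option<ProductionSamplingHandler> {
    match ProductionSamplingHandler::new(config) {
        Ok(handler) => Some(handler),
        Err(err) => {
            // Recovery is application-specific: retry, fall back to a mock
            // handler, or surface the error to the operator.
            eprintln!("sampling backend unavailable: {err}");
            None
        }
    }
}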

Structs§

DefaultSamplingHandler
Default sampling handler; provides echo functionality for testing and development
LLMBackendConfig
Comprehensive backend configuration
MockLLMHandler
Mock LLM handler for testing
ProductionSamplingHandler
Production-grade sampling handler with real LLM backend integration

Enums§

LLMBackendError
Comprehensive error types for LLM backend operations
LLMProvider
Supported LLM providers

Traits§

SamplingHandler
Handler for server-initiated sampling requests
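
Custom behaviour beyond the built-in handlers goes through the SamplingHandler trait. The sketch below is only a guess at its shape: it assumes an async_trait-style trait with a single async method mapping a CreateMessageRequest to a CreateMessageResult (the flow named in the feature list), and the import path for those message types is likewise an assumption. Take the real method name, signature, and error type from the trait's own documentation.

use async_trait::async_trait;
use turbomcp_client::sampling::{LLMBackendError, SamplingHandler};
// ASSUMPTION: the path and names of the MCP message types may differ.
use turbomcp_protocol::sampling::{CreateMessageRequest, CreateMessageResult};

/// Answers every sampling request with a fixed completion; a stand-in for a
/// real LLM backend, similar in spirit to MockLLMHandler.
struct CannedHandler {
    reply: String,
}

#[async_trait]
impl SamplingHandler for CannedHandler {
    // ASSUMPTION: the method name and signature here are illustrative only.
    async fn handle_create_message(
        &self,
        _request: CreateMessageRequest,
    ) -> Result<CreateMessageResult, LLMBackendError> {
        // Build a CreateMessageResult carrying `self.reply`; its exact fields
        // depend on the protocol types, so this is left as a placeholder.
        todo!("construct a CreateMessageResult from self.reply")
    }
}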