Module llm

LLM provider abstraction for the ThinkTool Protocol Engine

Supports 18+ providers through a unified OpenAI-compatible interface:

  • Anthropic, OpenAI, OpenRouter (the original three providers)
  • Google Gemini, Google Vertex AI
  • xAI (Grok), Groq, Mistral, DeepSeek
  • Together AI, Fireworks AI, Alibaba Qwen
  • Cloudflare AI, AWS Bedrock, Azure OpenAI
  • Perplexity, Cohere, Cerebras

§Architecture

Most modern LLM providers expose OpenAI-compatible /chat/completions endpoints, enabling a unified client with provider-specific configuration. This module leverages that standardization while supporting provider-specific features.
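For example, the same request body works across providers; only the endpoint and credentials change. The sketch below builds one chat-completions payload with serde_json and lists the public OpenAI-compatible routes for three of the supported providers. It illustrates the standardization this module relies on; it is not this module's internal code.

```rust
use serde_json::json;

fn main() {
    // The request body is identical across OpenAI-compatible providers.
    let body = json!({
        "model": "gpt-4o-mini",
        "messages": [{ "role": "user", "content": "Hello" }],
        "max_tokens": 256
    });

    // Only the endpoint and credentials change per provider.
    let endpoints = [
        ("OpenAI", "https://api.openai.com/v1/chat/completions"),
        ("Groq", "https://api.groq.com/openai/v1/chat/completions"),
        ("Mistral", "https://api.mistral.ai/v1/chat/completions"),
    ];

    for (name, url) in endpoints {
        println!("{name}: POST {url}\n{body}");
    }
}
```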

§Aggregation Layers

For maximum flexibility, consider using aggregation layers (see the sketch after this list):

  • OpenRouter: 300+ models, automatic fallbacks, BYOK (bring-your-own-key) support
  • Cloudflare AI Gateway: Unified endpoint, 350+ models, analytics
  • LiteLLM: Python proxy for 100+ providers (external dependency)
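As a sketch of the aggregation approach, the request below targets OpenRouter's single endpoint. The vendor-prefixed model slugs and the `models` fallback field follow OpenRouter's documented request shape, though the specific models chosen here are illustrative.

```rust
use serde_json::json;

fn main() {
    // One endpoint, many upstream providers; models use vendor-prefixed slugs.
    let url = "https://openrouter.ai/api/v1/chat/completions";

    let body = json!({
        "model": "anthropic/claude-3.5-sonnet",
        // OpenRouter can fall back to other models if the primary is unavailable.
        "models": ["anthropic/claude-3.5-sonnet", "openai/gpt-4o-mini"],
        "messages": [{ "role": "user", "content": "Hello" }]
    });

    println!("POST {url}\n{body}");
}
```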

Structs§

LlmConfig
Configuration for LLM providers
LlmRequest
Request to LLM
LlmResponse
Response from LLM
LlmUsage
Token usage from LLM call
ProviderExtra
Provider-specific extra configuration
ProviderInfo
Provider info for documentation/UI
UnifiedLlmClient
Unified LLM client supporting 18+ providers
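
A hypothetical end-to-end call tying these structs together might look like the following. The crate path, field names, `Default` impls, and the async `complete` method are all assumptions made for illustration; consult the individual struct docs for the real signatures.

```rust
// Hypothetical usage; crate path, fields, and methods are assumed, not verified.
use thinktool::llm::{LlmConfig, LlmProvider, LlmRequest, UnifiedLlmClient};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed: LlmConfig selects the provider and carries credentials.
    let config = LlmConfig {
        provider: LlmProvider::OpenAI,
        api_key: std::env::var("OPENAI_API_KEY")?,
        ..Default::default()
    };
    let client = UnifiedLlmClient::new(config);

    // Assumed: LlmRequest mirrors the chat-completions shape.
    let request = LlmRequest {
        model: "gpt-4o-mini".into(),
        prompt: "Summarize the ThinkTool protocol in one sentence.".into(),
        ..Default::default()
    };

    // Assumed: complete() returns an LlmResponse carrying text and LlmUsage.
    let response = client.complete(request).await?;
    println!("{} ({} tokens)", response.text, response.usage.total_tokens);
    Ok(())
}
```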

Enums§

FinishReason
Why generation stopped
LlmProvider
Supported LLM providers (OpenAI-compatible where applicable)
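
Since the variants of FinishReason aren't listed on this page, the sketch below assumes the common finish-reason values from OpenAI-compatible APIs (stop, length, content filter) and shows one way a caller might react to them.

```rust
// Hypothetical variants; the enum's real members may differ.
enum FinishReason {
    Stop,          // model finished naturally
    Length,        // output hit the token limit
    ContentFilter, // provider blocked the output
}

fn should_retry_with_larger_budget(reason: &FinishReason) -> bool {
    match reason {
        FinishReason::Stop => false,          // nothing to do
        FinishReason::Length => true,         // retry with a higher max_tokens
        FinishReason::ContentFilter => false, // retrying won't help
    }
}

fn main() {
    assert!(should_retry_with_larger_budget(&FinishReason::Length));
    assert!(!should_retry_with_larger_budget(&FinishReason::Stop));
    println!("finish-reason handling ok");
}
```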

Traits§

LlmClient
Trait for LLM client implementations
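
The trait's methods aren't shown on this page, so this sketch assumes a single async completion method and demonstrates how a mock implementation could stand in for a real provider during tests (using the async-trait crate for the async method).

```rust
// The real trait's shape is assumed here: one async completion method.
use async_trait::async_trait;

#[async_trait]
trait LlmClient {
    async fn complete(&self, prompt: &str) -> Result<String, String>;
}

/// A canned client for tests that must not hit a real provider.
struct MockClient;

#[async_trait]
impl LlmClient for MockClient {
    async fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("echo: {prompt}"))
    }
}

#[tokio::main]
async fn main() {
    let client = MockClient;
    println!("{}", client.complete("ping").await.unwrap());
}
```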

Functions§

create_available_client
Create a client for the first available provider
discover_available_providers
Discover available providers based on environment variables
get_provider_info
Get info for all providers
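
A self-contained sketch of what environment-based discovery might look like; the environment variable names are common provider conventions assumed here, not confirmed from this module's source.

```rust
use std::env;

// A sketch of env-based discovery (not the module's actual code).
fn discover() -> Vec<&'static str> {
    [
        ("ANTHROPIC_API_KEY", "anthropic"),
        ("OPENAI_API_KEY", "openai"),
        ("OPENROUTER_API_KEY", "openrouter"),
        ("GROQ_API_KEY", "groq"),
    ]
    .into_iter()
    .filter(|(var, _)| env::var(var).is_ok())
    .map(|(_, name)| name)
    .collect()
}

fn main() {
    // First hit wins, mirroring create_available_client's described behavior.
    match discover().first() {
        Some(name) => println!("using provider: {name}"),
        None => eprintln!("no provider credentials found"),
    }
}
```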