LLM Provider abstraction for ThinkTool Protocol Engine
Supports 18+ providers through a unified OpenAI-compatible interface:
- Anthropic, OpenAI, OpenRouter (original)
- Google Gemini, Google Vertex AI
- xAI (Grok), Groq, Mistral, DeepSeek
- Together AI, Fireworks AI, Alibaba Qwen
- Cloudflare AI, AWS Bedrock, Azure OpenAI
- Perplexity, Cohere, Cerebras
§Architecture
Most modern LLM providers expose OpenAI-compatible /chat/completions endpoints,
enabling a unified client with provider-specific configuration. This module
leverages that standardization while supporting provider-specific features.
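A minimal sketch of that unification, using only std; the enum below is an illustrative stand-in for this module's `LlmProvider` (which has 18+ variants and richer configuration), and the base URLs reflect each vendor's published OpenAI-compatible endpoint:

```rust
// Hypothetical sketch; not the module's actual `LlmProvider`.
#[derive(Debug, Clone, Copy)]
enum Provider {
    OpenAi,
    Groq,
    Mistral,
    DeepSeek,
}

impl Provider {
    /// Vendor base URL for the OpenAI-compatible API surface.
    fn base_url(self) -> &'static str {
        match self {
            Provider::OpenAi => "https://api.openai.com/v1",
            Provider::Groq => "https://api.groq.com/openai/v1",
            Provider::Mistral => "https://api.mistral.ai/v1",
            Provider::DeepSeek => "https://api.deepseek.com/v1",
        }
    }

    /// Every provider above serves the same `/chat/completions` route,
    /// so a single client only needs the base URL swapped.
    fn chat_completions_url(self) -> String {
        format!("{}/chat/completions", self.base_url())
    }
}

fn main() {
    for p in [Provider::OpenAi, Provider::Groq, Provider::Mistral, Provider::DeepSeek] {
        println!("{p:?} -> {}", p.chat_completions_url());
    }
}
```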
§Aggregation Layers
For maximum flexibility, consider routing through an aggregation layer (see the sketch after this list):
- OpenRouter: 300+ models, automatic fallbacks, BYOK support
- Cloudflare AI Gateway: Unified endpoint, 350+ models, analytics
- LiteLLM: Python proxy for 100+ providers (external dependency)
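As a concrete illustration, routing through OpenRouter needs only a different base URL and a namespaced model ID. The snippet below is a sketch that assembles the request pieces without sending them; `OPENROUTER_API_KEY` is the conventional (assumed) environment variable:

```rust
// Sketch only: builds the pieces of an OpenRouter request without
// sending it (no HTTP client dependency).
fn main() {
    // OpenRouter exposes the same OpenAI-compatible route.
    let url = "https://openrouter.ai/api/v1/chat/completions";
    // Model IDs on OpenRouter are namespaced by upstream provider.
    let body = r#"{"model": "anthropic/claude-3.5-sonnet",
                   "messages": [{"role": "user", "content": "Hello"}]}"#;
    // Standard bearer auth with the assumed env var.
    let key = std::env::var("OPENROUTER_API_KEY").unwrap_or_default();
    println!("POST {url}\nAuthorization: Bearer {key}\n{body}");
}
```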
Structs§
- LlmConfig - Configuration for LLM providers
- LlmRequest - Request to an LLM
- LlmResponse - Response from an LLM
- LlmUsage - Token usage from an LLM call
- ProviderExtra - Provider-specific extra configuration
- ProviderInfo - Provider info for documentation/UI
- UnifiedLlmClient - Unified LLM client supporting 18+ providers
Enums§
- FinishReason - Why generation stopped
- LlmProvider - Supported LLM providers (OpenAI-compatible where applicable)
Traits§
- LlmClient - Trait for LLM client implementations (sketched below)
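This page does not show the trait's signature; a plausible, entirely hypothetical shape (all method and type names here are assumptions, not the module's real definitions) might be:

```rust
// Hypothetical stand-ins for the module's request/response structs.
struct LlmRequest {
    model: String,
    prompt: String,
}

struct LlmResponse {
    content: String,
}

#[derive(Debug)]
struct LlmError(String);

// Assumed shape: one async completion method per client.
// Native `async fn` in traits is stable since Rust 1.75.
trait LlmClient {
    /// Send one request and await the provider's completion.
    async fn complete(&self, request: LlmRequest) -> Result<LlmResponse, LlmError>;
}
```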
Functions§
- create_available_client - Create a client for the first available provider
- discover_available_providers - Discover available providers based on environment variables (see the sketch below)
- get_provider_info - Get info for all providers
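A sketch of what environment-variable-based discovery could look like; the exact variable names the real `discover_available_providers` checks are assumptions:

```rust
// Hypothetical discovery: a provider counts as available when its
// (assumed) API-key variable is set and non-empty.
fn discover_available_providers() -> Vec<&'static str> {
    let checks = [
        ("anthropic", "ANTHROPIC_API_KEY"),
        ("openai", "OPENAI_API_KEY"),
        ("openrouter", "OPENROUTER_API_KEY"),
        ("groq", "GROQ_API_KEY"),
    ];
    checks
        .iter()
        .filter(|(_, var)| std::env::var(var).is_ok_and(|v| !v.is_empty()))
        .map(|(name, _)| *name)
        .collect()
}

fn main() {
    // `create_available_client` presumably returns a client for the
    // first entry in a list like this.
    println!("available: {:?}", discover_available_providers());
}
```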