LLM Proxy - Unified interface for multiple LLM providers
This module provides a unified proxy for calling various LLMs, including OpenAI, Anthropic, Google Gemini, and local Candle-based models.
“Why talk to one AI when you can talk to them all?” - The Cheet
Re-exports
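To make the unified interface concrete, here is a minimal, self-contained sketch of the kind of conversation types this module exposes. The names mirror the `LlmRole`, `LlmMessage`, and `LlmRequest` items listed on this page, but the field layouts, variants, and the model string are assumptions for illustration, not the crate's actual definitions.

```rust
// Hypothetical minimal mirror of LlmRole / LlmMessage / LlmRequest;
// field names and variants are assumptions, not the crate's real layout.
#[derive(Debug, Clone, PartialEq)]
enum LlmRole {
    System,
    User,
    Assistant,
}

#[derive(Debug, Clone)]
struct LlmMessage {
    role: LlmRole,
    content: String,
}

#[derive(Debug)]
struct LlmRequest {
    model: String,
    messages: Vec<LlmMessage>,
}

fn main() {
    // Build a provider-agnostic request; the proxy would route it to
    // whichever backend (OpenAI, Anthropic, Gemini, Candle, ...) is configured.
    let request = LlmRequest {
        model: "example-model".to_string(),
        messages: vec![
            LlmMessage { role: LlmRole::System, content: "You are helpful.".into() },
            LlmMessage { role: LlmRole::User, content: "Hello!".into() },
        ],
    };
    assert_eq!(request.messages.len(), 2);
    println!("{} messages for {}", request.messages.len(), request.model);
}
```

Because the request carries no provider-specific fields, the same value can be handed to any backend behind the proxy.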
pub use claude as anthropic;
Modules
- candle
- Candle Local LLM Provider Implementation
- claude
- Claude API Client - Comprehensive Anthropic Claude integration via raw reqwest
- gemini
- Google Gemini Provider Implementation
- grok
- Grok Provider Implementation (X.AI)
- memory
- Memory Proxy - Scoped conversation history for LLMs
- oauth
- OAuth 2.0 + PKCE framework for proxy providers.
- ollama
- Ollama & LM Studio Provider - Local LLM Auto-Detection
- openai
- OpenAI Provider Implementation
- openai_compat
- OpenAI-Compatible Request/Response Types
- openrouter
- OpenRouter Provider Implementation
- server
- OpenAI-Compatible Proxy Server
- token_store
- Secure token storage for proxy OAuth providers.
- zai
- Z.AI Provider Implementation (Zhipu / GLM family)
Structs
- LlmMessage
- A single message in a conversation
- LlmProxy
- Factory for creating LLM providers
- LlmRequest
- Request structure for LLM completion
- LlmResponse
- Response structure from LLM completion
- LlmUsage
- Token usage statistics
Enums
- LlmRole
- Roles in a conversation
Traits
- LlmProvider
- Common interface for all LLM providers
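The shape of the `LlmProvider` trait can be sketched as follows. The real trait is presumably async and works with the full `LlmRequest`/`LlmResponse` structures; this synchronous mock only illustrates how a common trait lets callers treat every backend uniformly via trait objects. Every name and signature here beyond `LlmProvider` and `LlmResponse` is hypothetical.

```rust
// Hypothetical, simplified stand-in for the crate's LlmProvider trait and
// LlmResponse struct; the real API (async, richer fields) is assumed to differ.
struct LlmResponse {
    content: String,
    model: String,
}

trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> LlmResponse;
}

// A mock backend standing in for OpenAI, Anthropic, Ollama, etc.
struct MockProvider;

impl LlmProvider for MockProvider {
    fn name(&self) -> &str {
        "mock"
    }
    fn complete(&self, prompt: &str) -> LlmResponse {
        LlmResponse {
            content: format!("echo: {prompt}"),
            model: "mock-1".to_string(),
        }
    }
}

fn main() {
    // Callers hold boxed trait objects and never care which backend is inside.
    let providers: Vec<Box<dyn LlmProvider>> = vec![Box::new(MockProvider)];
    for provider in &providers {
        let resp = provider.complete("hi");
        assert_eq!(resp.content, "echo: hi");
        println!("{} ({}) -> {}", provider.name(), resp.model, resp.content);
    }
}
```

The `Box<dyn LlmProvider>` pattern is what makes a factory like `LlmProxy` possible: it can return any provider implementation behind one object-safe interface.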