Crate zeph_llm

LLM provider abstraction and backend implementations.

Re-exports

pub use classifier::metrics::ClassifierMetrics;
pub use classifier::metrics::ClassifierMetricsSnapshot;
pub use classifier::metrics::TaskMetricsSnapshot;
pub use claude::ThinkingConfig;
pub use claude::ThinkingEffort;
pub use error::LlmError;
pub use extractor::Extractor;
pub use gemini::ThinkingLevel as GeminiThinkingLevel;
pub use provider::ChatStream;
pub use provider::LlmProvider;
pub use provider::StreamChunk;
pub use provider::ThinkingBlock;
pub use stt::SpeechToText;
pub use stt::Transcription;

Modules

any
classifier
ML-backed classifier infrastructure (feature `classifiers`).
claude
Claude (Anthropic) LLM provider implementation.
compatible
ema
Per-provider EMA tracker for latency-aware `router::RouterProvider` ordering.
error
Error types returned by providers (`LlmError`).
extractor
gemini
Gemini (Google) LLM provider implementation.
http
Shared HTTP client construction for consistent timeout and TLS configuration.
mock
Test-only mock LLM provider.
model_cache
Disk-backed cache for remote model listings with 24-hour TTL.
ollama
Ollama LLM provider implementation.
openai
OpenAI LLM provider implementation.
provider
Core `LlmProvider` trait and streaming types (`ChatStream`, `StreamChunk`, `ThinkingBlock`).
router
Provider router: EMA-based, Thompson Sampling, Cascade, and PILOT Bandit strategies.
stt
Speech-to-text abstraction (`SpeechToText`, `Transcription`).
whisper
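To illustrate the idea behind the `ema` module's latency tracking, here is a minimal, self-contained sketch of an exponential moving average. All names (`LatencyEma`, `record`, `current`) are hypothetical and not the crate's actual API; the real tracker presumably records per-provider samples to inform router ordering.

```rust
// Illustrative EMA latency tracker; names are hypothetical, not zeph_llm's API.
struct LatencyEma {
    alpha: f64,          // smoothing factor: weight given to the newest sample
    value: Option<f64>,  // None until the first sample arrives
}

impl LatencyEma {
    fn new(alpha: f64) -> Self {
        Self { alpha, value: None }
    }

    // Blend a new latency sample (milliseconds) into the running average.
    fn record(&mut self, sample_ms: f64) {
        self.value = Some(match self.value {
            None => sample_ms, // first sample seeds the average
            Some(prev) => self.alpha * sample_ms + (1.0 - self.alpha) * prev,
        });
    }

    fn current(&self) -> Option<f64> {
        self.value
    }
}

fn main() {
    let mut ema = LatencyEma::new(0.5);
    ema.record(100.0);
    ema.record(200.0); // 0.5 * 200 + 0.5 * 100 = 150
    assert_eq!(ema.current(), Some(150.0));
}
```

A low `alpha` smooths out spikes; a high `alpha` reacts quickly to a provider slowing down, which is the usual trade-off when ordering providers by recent latency.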
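The `http` module's stated purpose, consistent timeouts and TLS across backends, can be sketched as a single shared settings value. The struct and field names below are hypothetical, and the real module presumably feeds these values into an HTTP crate's client builder.

```rust
use std::time::Duration;

// Illustrative shared HTTP settings; names and defaults are hypothetical.
#[derive(Debug, Clone, Copy)]
struct HttpSettings {
    connect_timeout: Duration,
    request_timeout: Duration,
}

impl Default for HttpSettings {
    fn default() -> Self {
        Self {
            connect_timeout: Duration::from_secs(10),
            request_timeout: Duration::from_secs(120),
        }
    }
}

fn main() {
    // Every backend constructs its client from the same settings value,
    // so no provider silently drifts to a different timeout policy.
    let shared = HttpSettings::default();
    assert_eq!(shared.connect_timeout, Duration::from_secs(10));
    assert_eq!(shared.request_timeout, Duration::from_secs(120));
}
```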
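The `model_cache` module's 24-hour TTL rule can be sketched with the standard library alone. The `CachedListing` type and `is_fresh` method below are illustrative, not the crate's actual API; only the 24-hour TTL comes from the module description.

```rust
use std::time::{Duration, SystemTime};

// 24-hour TTL, per the module description; everything else is illustrative.
const TTL: Duration = Duration::from_secs(24 * 60 * 60);

struct CachedListing {
    fetched_at: SystemTime,
    models: Vec<String>,
}

impl CachedListing {
    // A listing is fresh while it is younger than the TTL; a clock that
    // reads earlier than `fetched_at` is treated as stale to force a refetch.
    fn is_fresh(&self, now: SystemTime) -> bool {
        now.duration_since(self.fetched_at)
            .map(|age| age < TTL)
            .unwrap_or(false)
    }
}

fn main() {
    let now = SystemTime::now();
    let fresh = CachedListing { fetched_at: now, models: vec!["m1".into()] };
    let stale = CachedListing {
        fetched_at: now - Duration::from_secs(25 * 60 * 60),
        models: vec![],
    };
    assert!(fresh.is_fresh(now));
    assert!(!stale.is_fresh(now));
    println!("fresh listing holds {} models", fresh.models.len());
}
```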
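Of the routing strategies the `router` module lists, the cascade is the simplest to sketch: try providers in priority order and fall back on error. The trait, types, and function below are hypothetical stand-ins, not the crate's actual `LlmProvider` or router API.

```rust
// Illustrative cascade routing; all names are hypothetical stand-ins.
#[derive(Debug, PartialEq)]
enum RouteError {
    Unavailable,
}

trait Provider {
    fn chat(&self, prompt: &str) -> Result<String, RouteError>;
}

struct Flaky;  // stands in for a provider that is currently down
struct Stable; // stands in for a healthy fallback provider

impl Provider for Flaky {
    fn chat(&self, _prompt: &str) -> Result<String, RouteError> {
        Err(RouteError::Unavailable)
    }
}

impl Provider for Stable {
    fn chat(&self, prompt: &str) -> Result<String, RouteError> {
        Ok(format!("[stable] {prompt}"))
    }
}

// Cascade: the first provider that answers wins; error only if all fail.
fn cascade(providers: &[&dyn Provider], prompt: &str) -> Result<String, RouteError> {
    for p in providers {
        if let Ok(reply) = p.chat(prompt) {
            return Ok(reply);
        }
    }
    Err(RouteError::Unavailable)
}

fn main() {
    let providers: &[&dyn Provider] = &[&Flaky, &Stable];
    assert_eq!(cascade(providers, "hi"), Ok("[stable] hi".to_string()));
}
```

The EMA, Thompson Sampling, and bandit strategies differ only in how they order or pick from this provider slice before the same try-and-fall-back loop runs.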