pub enum Provider {
    OpenAI(OpenAI),
    Claude(Claude),
}
Unified LLM provider enum.
The gateway constructs the appropriate variant based on ApiStandard
from the provider config. The runtime is monomorphized on Provider.
Variants
OpenAI(OpenAI)
OpenAI-compatible API (covers OpenAI, DeepSeek, Grok, Qwen, Kimi, Ollama).
Claude(Claude)
Anthropic Messages API.
Implementations
impl Provider
pub fn context_length(&self, _model: &str) -> Option<usize>
Query the context length for a given model ID.
Local providers delegate to mistralrs; remote providers return None
(callers fall back to the static map in wcore::model::default_context_limit).
pub async fn wait_until_ready(&mut self) -> Result<()>
Wait until the provider is ready.
No-op for remote providers. For local providers, blocks until the model finishes loading.
Trait Implementations
impl Model for Provider
fn stream(
    &self,
    request: Request,
) -> impl Stream<Item = Result<StreamChunk>> + Send
Stream a chat completion response.
fn context_limit(&self, model: &str) -> usize
Resolve the context limit for a model name.
fn active_model(&self) -> CompactString
Get the active/default model name.
Auto Trait Implementations
impl Freeze for Provider
impl !RefUnwindSafe for Provider
impl Send for Provider
impl Sync for Provider
impl Unpin for Provider
impl UnsafeUnpin for Provider
impl !UnwindSafe for Provider
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.