pub struct LlmClient { /* private fields */ }
Unified LLM client — routes to Claude API, Grok API, or Ollama based on model name. Supports both blocking and streaming generation.
Implementations

impl LlmClient
pub fn new(model: &str) -> Self
pub fn with_limits(model: &str, context_size: u32, max_predict: u32) -> Self
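A minimal construction sketch. The model-name strings and limit values below are illustrative assumptions; the docs do not state which names route to Claude, Grok, or Ollama:

```rust
// Hypothetical usage: model names and limits are illustrative, not
// taken from the crate's actual routing rules.
let client = LlmClient::new("claude-sonnet-4"); // backend chosen by model name
let local = LlmClient::with_limits(
    "llama3", // assumed to route to Ollama
    8192,     // context_size: prompt window in tokens
    1024,     // max_predict: cap on generated tokens
);
```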
pub async fn generate(
    &self,
    role: &str,
    system: &str,
    user_prompt: &str,
) -> Result<String>
Generate text (blocking — waits for full response).
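A sketch of blocking generation from an async context; the role, system, and user strings are illustrative:

```rust
// Sketch only: awaits the complete response before returning.
let client = LlmClient::new("claude-sonnet-4"); // model name is an assumption
let reply = client
    .generate(
        "summarizer",                         // role
        "You are a concise assistant.",       // system prompt
        "Summarize the Rust borrow checker.", // user prompt
    )
    .await?;
println!("{reply}");
```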
pub async fn generate_with_stats(
    &self,
    role: &str,
    system: &str,
    user_prompt: &str,
) -> Result<(String, LlmCallStats)>
Generate text and return stats for reports.
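A sketch of capturing call stats alongside the text. The fields of `LlmCallStats` are not shown in these docs, so the comment only gestures at typical contents:

```rust
// Sketch: same call shape as generate(), but also returns stats.
let (text, stats) = client
    .generate_with_stats("reviewer", "You review code.", "Review this diff.")
    .await?;
// `stats` is meant for reports; its exact fields (e.g. token counts,
// latency) are not documented here, so they are left unaccessed.
let _ = (text, stats);
```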
pub async fn generate_live_with_stats(
    &self,
    role: &str,
    system: &str,
    user_prompt: &str,
) -> Result<(String, LlmCallStats)>
Generate with live streaming and return stats for reports.
pub async fn generate_live(
    &self,
    role: &str,
    system: &str,
    user_prompt: &str,
) -> Result<String>
Generate with live token-by-token output to stdout. Shows the model’s thinking process in real time.
pub async fn generate_streaming(
    &self,
    role: &str,
    system: &str,
    user_prompt: &str,
    tx: Sender<StreamEvent>,
) -> Result<String>
Generate text with streaming — tokens sent via channel as they arrive. Returns the full accumulated text when done.
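A sketch of driving the channel-based variant. It assumes `Sender<StreamEvent>` is a tokio mpsc sender and that `StreamEvent` carries token chunks; neither is confirmed by these docs:

```rust
// Sketch: one task drains the channel while the call itself
// accumulates and returns the full text.
let (tx, mut rx) = tokio::sync::mpsc::channel::<StreamEvent>(64);

let drain = tokio::spawn(async move {
    while let Some(event) = rx.recv().await {
        // Inspect or forward each StreamEvent as it arrives;
        // its variants are not shown in these docs.
        let _ = event;
    }
});

let full_text = client
    .generate_streaming("writer", "You write haiku.", "Topic: autumn", tx)
    .await?;
drain.await?;
println!("{full_text}");
```

Dropping `tx` when `generate_streaming` returns closes the channel, which lets the drain task's `recv()` loop end.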
pub async fn chat_with_tools(
    &self,
    messages: &[ChatMessage],
    tools: &[OllamaTool],
) -> Result<ChatToolResponse>
Chat with native tool calling. Returns assistant content + tool calls.
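A sketch of one tool-calling round trip. How `ChatMessage` and `OllamaTool` are constructed is not shown in these docs, so the helpers below are hypothetical:

```rust
// Sketch: build_conversation() and my_tool_definitions() are
// hypothetical helpers standing in for undocumented constructors.
let messages: Vec<ChatMessage> = build_conversation();
let tools: Vec<OllamaTool> = my_tool_definitions();

let resp: ChatToolResponse = client.chat_with_tools(&messages, &tools).await?;
// resp carries the assistant's content plus any requested tool calls;
// a typical loop executes those calls, appends the results as new
// messages, and calls chat_with_tools again until no calls remain.
```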
Trait Implementations

Auto Trait Implementations
impl Freeze for LlmClient
impl !RefUnwindSafe for LlmClient
impl Send for LlmClient
impl Sync for LlmClient
impl Unpin for LlmClient
impl UnsafeUnpin for LlmClient
impl !UnwindSafe for LlmClient
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>
Instruments this type with the current Span, returning an Instrumented wrapper.
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true, otherwise into a Right variant.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true, otherwise into a Right variant.