pub struct Service { /* private fields */ }
Service layer - Business logic for LLM operations
This layer sits between the handlers layer (HTTP) and the client layer (LLM communication). It handles:
- Model selection and validation
- Delegating to the appropriate client methods
- Business-level error handling
Implementations

impl Service
pub fn new(config: &LlmBackendSettings) -> Result<Self>
Create a new service with the specified backend configuration
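A minimal construction sketch; the load_settings helper below is hypothetical and stands in for however the application actually loads its configuration:

fn build_service() -> Result<Service> {
    // Hypothetical helper: in practice LlmBackendSettings is presumably
    // deserialized from the application's configuration (file, env, etc.).
    let settings: LlmBackendSettings = load_settings()?;
    Service::new(&settings)
}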
pub async fn chat(
    &self,
    model: Option<&str>,
    messages: Vec<Message>,
    tools: Option<Vec<Tool>>,
) -> Result<Response>
Chat with a specific model (non-streaming)
If model is None, uses the default model from configuration.
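A hedged call sketch; the Message struct literal assumes plain role/content string fields, which may not match the crate's actual type:

let messages = vec![Message {
    role: "user".into(),      // assumed field
    content: "Hello!".into(), // assumed field
}];
// None selects the configured default model; no tools are passed.
let response = service.chat(None, messages, None).await?;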
pub async fn chat_stream_ollama(
    &self,
    model: Option<&str>,
    messages: Vec<Message>,
    format: StreamFormat,
) -> Result<UnboundedReceiverStream<String>>
Chat with streaming (Ollama format)
If model is None, uses the default model from configuration.
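The returned UnboundedReceiverStream can be consumed with the StreamExt adapter from tokio_stream; the StreamFormat variant name below is an assumption:

use tokio_stream::StreamExt;

let mut stream = service
    .chat_stream_ollama(None, messages, StreamFormat::Ollama) // variant assumed
    .await?;
while let Some(chunk) = stream.next().await {
    // Each item is a pre-serialized String in the Ollama wire format.
    print!("{chunk}");
}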
pub async fn chat_stream_ollama_with_tools(
    &self,
    model: Option<&str>,
    messages: Vec<Message>,
    tools: Option<Vec<Tool>>,
    format: StreamFormat,
) -> Result<UnboundedReceiverStream<String>>
Chat with streaming (Ollama format) with tools support
If model is None, uses the default model from configuration.
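Same consumption pattern as chat_stream_ollama, with an optional tool list forwarded to the backend; constructing a Tool depends on the crate's tool schema and is not shown:

// `tools` is a Vec<Tool> built elsewhere according to the crate's schema.
let stream = service
    .chat_stream_ollama_with_tools(None, messages, Some(tools), format)
    .await?;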
pub async fn chat_stream_openai(
    &self,
    model: Option<&str>,
    messages: Vec<Message>,
    tools: Option<Vec<Tool>>,
    format: StreamFormat,
) -> Result<UnboundedReceiverStream<String>>
Chat with streaming (OpenAI format) with optional tools support
If model is None, uses the default model from configuration.
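A brief sketch; what each yielded String contains (e.g. an SSE data: line versus a bare JSON chunk) depends on the client implementation, as flagged in the comment:

// Consumed exactly like chat_stream_ollama; each String is assumed to be
// an OpenAI-style chunk ready to forward to the HTTP response body.
let stream = service
    .chat_stream_openai(None, messages, None, format)
    .await?;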
pub async fn list_models(&self) -> Result<Vec<Model>>
List available models
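A short iteration sketch; the name field on Model is an assumption:

for model in service.list_models().await? {
    println!("{}", model.name); // field assumed
}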
pub async fn validate_model(&self, model: &str) -> Result<bool>
Validate whether a model is available
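A guard sketch; the model name is hypothetical:

if service.validate_model("llama3").await? {
    // Safe to route chat requests to this model.
}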
Auto Trait Implementations
impl Freeze for Service
impl !RefUnwindSafe for Service
impl Send for Service
impl Sync for Service
impl Unpin for Service
impl !UnwindSafe for Service
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.