pub struct LlamaRequest {
pub model: Option<Model>,
pub messages: Vec<Message>,
pub stream: bool,
pub options: Options,
pub url: Option<Url>,
}
Request builder for Ollama LLM interactions.
Provides a fluent interface for constructing requests to the Ollama service, handling URL management, message construction, and model configuration.
§Examples
let request = LlamaRequest::new()
    .with_model(Model::Llama3p2c3b)
    .with_message("Explain how a computer works");
let response = request.send().await?;
Fields§
model: Option<Model>
The LLM model to use for generation. If not specified, will result in an error when sending the request.
messages: Vec<Message>
Vector of conversation messages. Must contain at least one message before sending the request. Messages are processed in order to maintain conversation context.
stream: bool
Whether to stream the response. Currently not implemented, but when true will enable token-by-token streaming of the model’s response.
options: Options
Generation parameters including temperature, top-k, top-p, and maximum token count. Uses sensible defaults if not explicitly configured.
url: Option<Url>
The target URL for the request. If not specified, defaults to localhost:11434 with a warning. Skipped during serialization.
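Because url is a public field (and skipped during serialization), a non-default Ollama host can be set directly on the builder before sending. This is a sketch, not a documented pattern; it assumes the Url type is url::Url and that LlamaRequest::new() behaves as in the example above:

```rust
use url::Url;

let mut request = LlamaRequest::new()
    .with_model(Model::Llama3p2c3b)
    .with_message("Explain how a computer works");
// Point the request at a remote Ollama instance instead of the
// localhost:11434 default (which would otherwise emit a warning).
request.url = Some(Url::parse("http://ollama.internal:11434").expect("valid URL"));
```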
Implementations§
impl LlamaRequest
pub fn with_endpoint(self, endpoint: OllamaEndpoint) -> Self
Sets the API endpoint for the request.
§Arguments
- endpoint: The API endpoint to use
Note: Currently only the Chat endpoint is fully supported. Other endpoints will generate warnings about potential limitations.
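A minimal sketch of selecting the endpoint explicitly, assuming OllamaEndpoint exposes a Chat variant (the only endpoint the note above describes as fully supported):

```rust
let request = LlamaRequest::new()
    .with_endpoint(OllamaEndpoint::Chat) // any other variant triggers a warning
    .with_model(Model::Llama3p2c3b)
    .with_message("Explain how a computer works");
```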
pub fn with_model(self, model: Model) -> Self
Sets the LLM model to use for generation.
pub fn with_message(self, content: &str) -> Self
Appends a message with the given content to the conversation.
pub async fn send(&self) -> Result<LlamaResponse>
Sends the request to the Ollama service.
§Returns
Returns a Result containing either:
- A LlamaResponse with the model's response
- A LearnerError if the request fails
§Errors
This function will return an error if:
- No model is specified
- No messages are provided
- The network request fails
- The response cannot be parsed
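Since send can fail for any of the reasons listed above, callers will typically match on the Result rather than unwrap it. A sketch, assuming LearnerError implements Display:

```rust
match request.send().await {
    Ok(response) => {
        // use the model's reply carried by `response`
    }
    Err(e) => {
        // covers missing model, empty messages, network, and parse failures
        eprintln!("Ollama request failed: {e}");
    }
}
```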
Trait Implementations§
impl Default for LlamaRequest
fn default() -> LlamaRequest
Auto Trait Implementations§
impl Freeze for LlamaRequest
impl RefUnwindSafe for LlamaRequest
impl Send for LlamaRequest
impl Sync for LlamaRequest
impl Unpin for LlamaRequest
impl UnwindSafe for LlamaRequest
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.