
Struct LlamaClient

pub struct LlamaClient { /* private fields */ }

Llama.cpp server client (local or remote)

Compatible with llama.cpp’s OpenAI-compatible API server. Typically runs on localhost:8080 or similar.

Implementations

impl LlamaClient

pub fn new(base_url: impl Into<String>, model: impl Into<String>) -> Self

Create a new llama.cpp client

Arguments
  • base_url - Base URL of llama.cpp server (e.g., “http://localhost:8080”)
  • model - Model name (optional, llama.cpp usually ignores this)

pub fn with_http_client( base_url: impl Into<String>, model: impl Into<String>, http_client: HttpClient, ) -> Self

Create a new llama.cpp client with a custom HTTP client. Useful for configuring TLS, timeouts, etc.
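A self-contained sketch of constructing a client with `new`. The crate name is not shown on this page, so a minimal stand-in struct mirroring the documented constructor is defined inline; with the real crate, only an import of `LlamaClient` would be needed. (`with_http_client` is omitted here because `HttpClient` configuration is crate-specific.)

```rust
// Stand-in mirroring the `new` constructor documented above; with the real
// crate, replace this struct definition with a `use` of LlamaClient.
struct LlamaClient {
    base_url: String,
    model: String,
}

impl LlamaClient {
    fn new(base_url: impl Into<String>, model: impl Into<String>) -> Self {
        Self {
            base_url: base_url.into(),
            model: model.into(),
        }
    }
}

fn main() {
    // Point at a running llama.cpp server; llama.cpp usually ignores the
    // model name, so any placeholder works.
    let client = LlamaClient::new("http://localhost:8080", "local-model");
    assert_eq!(client.base_url, "http://localhost:8080");
    println!("{} ({})", client.base_url, client.model);
}
```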

pub fn localhost() -> Self

Create a client pointing to localhost:8080 (default llama.cpp port)

pub fn localhost_with_port(port: u16) -> Self

Create a client pointing to localhost with custom port

pub fn insecure(base_url: impl Into<String>, model: impl Into<String>) -> Self

Create a client with insecure HTTPS (accepts self-signed certificates). Useful for local development with HTTPS servers.

pub fn localhost_insecure(port: u16) -> Self

Create localhost client with insecure HTTPS on custom port
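How the convenience constructors relate to `new` can be sketched with the same stand-in approach. The field names, the `"default"` model placeholder, and the `accept_invalid_certs` flag are assumptions for illustration; the real crate's internals will differ, but the documented call shapes are the same.

```rust
// Stand-in sketch of the convenience constructors documented above
// (localhost, localhost_with_port, insecure, localhost_insecure).
struct LlamaClient {
    base_url: String,
    model: String,
    accept_invalid_certs: bool, // assumption: how "insecure" might be tracked
}

impl LlamaClient {
    fn new(base_url: impl Into<String>, model: impl Into<String>) -> Self {
        Self {
            base_url: base_url.into(),
            model: model.into(),
            accept_invalid_certs: false,
        }
    }

    // Default llama.cpp port.
    fn localhost() -> Self {
        Self::new("http://localhost:8080", "default")
    }

    fn localhost_with_port(port: u16) -> Self {
        Self::new(format!("http://localhost:{port}"), "default")
    }

    // Accept self-signed certificates (local HTTPS development only).
    fn insecure(base_url: impl Into<String>, model: impl Into<String>) -> Self {
        let mut client = Self::new(base_url, model);
        client.accept_invalid_certs = true;
        client
    }

    fn localhost_insecure(port: u16) -> Self {
        Self::insecure(format!("https://localhost:{port}"), "default")
    }
}

fn main() {
    assert_eq!(LlamaClient::localhost().base_url, "http://localhost:8080");
    assert_eq!(
        LlamaClient::localhost_with_port(8081).base_url,
        "http://localhost:8081"
    );
    assert!(LlamaClient::localhost_insecure(8443).accept_invalid_certs);
}
```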

Trait Implementations

impl ChatClient for LlamaClient

fn chat<'life0, 'async_trait>( &'life0 self, request: ChatRequest, ) -> Pin<Box<dyn Future<Output = LlmResult<ChatResponse>> + Send + 'async_trait>>
where Self: 'async_trait, 'life0: 'async_trait,

Send a chat completion request

fn chat_stream<'life0, 'async_trait>( &'life0 self, request: ChatRequest, ) -> Pin<Box<dyn Future<Output = LlmResult<TextStream>> + Send + 'async_trait>>
where Self: 'async_trait, 'life0: 'async_trait,

Stream a chat completion request (yields text chunks as they arrive)

fn model(&self) -> &str

Get the model name this client uses

fn provider(&self) -> &str

Get the provider name (e.g., “openai”, “llama.cpp”)
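The `chat` signature above is the usual async-trait desugaring of `async fn chat(&self, request: ChatRequest) -> LlmResult<ChatResponse>`. The sketch below shows how a caller drives it. Since neither the crate nor the shapes of `ChatRequest`, `ChatResponse`, and `LlmResult` are shown on this page, it uses minimal stand-ins (a `prompt` field, a `Result` alias, an echoing body) and a tiny polling `block_on`; with the real crate you would use its request type and an async runtime such as tokio instead.

```rust
use std::future::Future;
use std::pin::Pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Stand-ins mirroring the names on this page; real shapes will differ.
type LlmResult<T> = Result<T, String>;

struct ChatRequest {
    prompt: String, // assumption: the real request type is richer than this
}

struct ChatResponse {
    text: String,
}

trait ChatClient {
    fn chat<'a>(
        &'a self,
        request: ChatRequest,
    ) -> Pin<Box<dyn Future<Output = LlmResult<ChatResponse>> + Send + 'a>>;
    fn model(&self) -> &str;
    fn provider(&self) -> &str;
}

struct LlamaClient {
    model: String,
}

impl ChatClient for LlamaClient {
    fn chat<'a>(
        &'a self,
        request: ChatRequest,
    ) -> Pin<Box<dyn Future<Output = LlmResult<ChatResponse>> + Send + 'a>> {
        // Mock body: a real client would POST to the server's
        // OpenAI-compatible chat-completions endpoint.
        Box::pin(async move {
            Ok(ChatResponse {
                text: format!("echo: {}", request.prompt),
            })
        })
    }

    fn model(&self) -> &str {
        &self.model
    }

    fn provider(&self) -> &str {
        "llama.cpp"
    }
}

// Tiny no-op-waker executor, enough to drive the mock future (demo only;
// use a real runtime for actual network futures).
const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
const RAW: RawWaker = RawWaker::new(ptr::null(), &VTABLE);

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(RAW) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let client = LlamaClient { model: "llama-3".into() };
    let response = block_on(client.chat(ChatRequest { prompt: "hello".into() }))
        .expect("mock chat never fails");
    assert_eq!(response.text, "echo: hello");
    println!("{} via {}: {}", client.model(), client.provider(), response.text);
}
```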

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.