
Module local


Local / OpenAI-compatible LLM provider client.

LocalLLMClient targets any OpenAI-compatible chat-completions endpoint exposed by a local, LAN, VPN, or remote runtime (Ollama, LM Studio, llama.cpp's llama-server, vLLM, text-generation-webui, etc.). It mirrors the structure of crate::services::ai::openrouter::OpenRouterClient but treats the API key as optional and emits actionable, sanitized error messages when the local endpoint is unreachable, returns a response that does not match the OpenAI chat-completions schema, or signals that the requested model is not loaded.
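To illustrate the wire protocol such a client targets, the sketch below sends a minimal chat-completions request to a local OpenAI-compatible endpoint, attaching a bearer token only when one is configured. The endpoint URL, model name, and use of reqwest are placeholders for illustration; this is not the crate's actual implementation.

```rust
// Sketch of an OpenAI-compatible chat-completions request against a local
// runtime (e.g. Ollama). URL and model are placeholders; this only shows the
// protocol LocalLLMClient speaks, not its internal implementation.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let base_url = "http://localhost:11434/v1"; // hypothetical local endpoint
    let api_key: Option<&str> = None;           // optional for local runtimes

    let mut request = reqwest::Client::new()
        .post(format!("{base_url}/chat/completions"))
        .json(&json!({
            "model": "llama3.1",
            "messages": [{ "role": "user", "content": "Hello" }],
        }));

    // Only attach the Authorization header when a key is configured.
    if let Some(key) = api_key {
        request = request.bearer_auth(key);
    }

    let response = request.send().await?.error_for_status()?;
    println!("{}", response.text().await?);
    Ok(())
}
```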

Structs

LocalLLMClient
Client for OpenAI-compatible local LLM runtimes.