# ai_assistant_core
Simple, ergonomic Rust client for local LLMs.
Connect to Ollama, LM Studio, or any OpenAI-compatible server in a few lines of code.
## Quick Start
```toml
[dependencies]
ai_assistant_core = "0.1"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```
```rust
use ai_assistant_core::ollama;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = ollama(); // connects to localhost:11434
    // `chat` is illustrative; check the crate docs for the exact method name.
    let reply = client.chat("Why is Rust fast?").await?;
    println!("{reply}");
    Ok(())
}
```
## Features
- **Auto-detection**: Scans localhost for running providers automatically
- **Multi-provider**: Ollama, LM Studio, any OpenAI-compatible API
- **Streaming**: Token-by-token responses via `futures::Stream`
- **Message history**: Full conversation support with system/user/assistant roles
- **Minimal dependencies**: Just `reqwest`, `serde`, `tokio`, `futures`, `thiserror`
- **MIT / Apache-2.0**: Use it anywhere
## Providers
```rust
use ai_assistant_core::{ollama, ollama_at, lm_studio, openai_compat};

// URLs below are illustrative placeholders.
let o = ollama();                                  // localhost:11434
let o2 = ollama_at("http://192.168.1.10:11434");   // remote Ollama
let lm = lm_studio();                              // localhost:1234
let c = openai_compat("http://localhost:8080/v1"); // any compatible API
```
## Streaming
```rust
use ai_assistant_core::ollama;
use futures::StreamExt;

# async fn demo() -> Result<(), Box<dyn std::error::Error>> {
let client = ollama();
// `chat_stream` is illustrative; see the crate docs for the streaming API.
let mut stream = client.chat_stream("Tell me a story").await?;
while let Some(token) = stream.next().await {
    print!("{}", token?);
}
# Ok(())
# }
```
## Conversation History
```rust
use ai_assistant_core::{ollama, Message};

# async fn demo() -> Result<(), Box<dyn std::error::Error>> {
let client = ollama();
// `Message` constructors and `chat_with_history` are illustrative names;
// the crate supports system/user/assistant roles.
let history = vec![
    Message::system("You are a concise assistant."),
    Message::user("Explain borrowing in one sentence."),
];
let reply = client.chat_with_history(&history).await?;
println!("{reply}");
# Ok(())
# }
```
## Auto-Detection
Don't know what's running? Let `detect()` find providers for you:
```rust
use ai_assistant_core::detect;

# async fn demo() {
// Probes env vars and default ports; returns every provider that responds.
let providers = detect(&[]).await;
for p in &providers {
    println!("found: {p:?}");
}
# }
```
Detection checks the `OLLAMA_HOST` / `LM_STUDIO_URL` environment variables, then falls back to the default ports.
Pass extra URLs for custom endpoints: `detect(&["http://gpu-server:11434"])`.
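The lookup order above can be sketched in plain Rust (a simplified illustration using only the standard library, not the crate's actual detection code; `candidate_urls` is a hypothetical helper):

```rust
use std::env;

/// Resolve candidate base URLs in the documented order:
/// environment variables first, then default ports, then any extras.
fn candidate_urls(extra: &[&str]) -> Vec<String> {
    let mut urls = Vec::new();
    if let Ok(host) = env::var("OLLAMA_HOST") {
        urls.push(host);
    }
    if let Ok(url) = env::var("LM_STUDIO_URL") {
        urls.push(url);
    }
    urls.push("http://localhost:11434".to_string()); // Ollama default
    urls.push("http://localhost:1234".to_string());  // LM Studio default
    urls.extend(extra.iter().map(|s| s.to_string()));
    urls
}

fn main() {
    for url in candidate_urls(&["http://gpu-server:11434"]) {
        println!("{url}");
    }
}
```

Each candidate is then probed over HTTP; only endpoints that answer are returned.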
## Need More?
For advanced features like RAG, multi-agent orchestration, security guardrails, distributed clusters, the MCP protocol, autonomous agents, and more, check out the full `ai_assistant` suite.
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.