§neuron-provider-ollama
Ollama provider for the neuron agent blocks ecosystem. Implements the `Provider` trait from `neuron-types` against the Ollama Chat API, supporting both synchronous completions and newline-delimited JSON (NDJSON) streaming.
Ollama runs locally and requires no API key or authentication. The default model is `llama3.2`; the default base URL is `http://localhost:11434`.
§Installation

```sh
cargo add neuron-provider-ollama
```

§Key Types
- `Ollama` – client struct with builder methods (`new`, `from_env`, `model`, `base_url`, `keep_alive`). Implements `Provider` from `neuron-types` and `Default`.
- `ProviderError` – re-exported error type for all provider failures.
- `StreamHandle` – returned by `complete_stream`; yields `StreamEvent` items as the model generates tokens. A streaming sketch follows this list.
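The usage example below only shows the synchronous path. Here is a minimal streaming sketch; it assumes `complete_stream` takes a `CompletionRequest` like `complete` does, and that `StreamHandle` implements `futures::Stream<Item = Result<StreamEvent, ProviderError>>` – both are assumptions, so check the `client` module and `StreamHandle` docs for the actual signatures:

```rust
use futures::StreamExt;
use neuron_provider_ollama::Ollama;
use neuron_types::{CompletionRequest, Message, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = Ollama::new().model("llama3.2");

    let request = CompletionRequest {
        messages: vec![Message::user("Write one sentence about Paris.")],
        ..Default::default()
    };

    // Assumption: `complete_stream` returns a `StreamHandle` that can be
    // polled as a `futures::Stream` of `StreamEvent` results.
    let mut stream = provider.complete_stream(request).await?;

    while let Some(event) = stream.next().await {
        // The `StreamEvent` variants are not listed on this page, so the
        // sketch prints the debug form instead of matching on them.
        println!("{:?}", event?);
    }
    Ok(())
}
```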
§Features
- No authentication required – designed for local Ollama instances.
- `keep_alive` control for model memory residency (`"5m"`, or `"0"` to unload immediately).
- Tool call support using the OpenAI-compatible format that Ollama adopted.
- NDJSON streaming parsed line-by-line from the response byte stream (see the sketch after this list).
- Tool call IDs synthesized via `uuid::Uuid::new_v4()`, since Ollama does not provide them natively.
- Tool support varies by model; not all Ollama models support function calling. Check the Ollama model library for which models support tools.
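For a sense of what that NDJSON framing looks like, here is a self-contained illustration of the wire format (not this crate's internal parser): each line of the streaming response body is one standalone JSON object, with the final object marked `done: true`. The field names below mirror Ollama's `/api/chat` chunks but are modeled only as far as this sketch needs:

```rust
use serde::Deserialize;

// Assumed shape of one NDJSON line from Ollama's /api/chat endpoint;
// only the fields this sketch uses are modeled.
#[derive(Deserialize)]
struct ChatChunk {
    message: Option<ChunkMessage>,
    done: bool,
}

#[derive(Deserialize)]
struct ChunkMessage {
    content: String,
}

fn parse_ndjson(body: &str) -> Result<String, serde_json::Error> {
    let mut text = String::new();
    // NDJSON: one JSON object per line; blank lines are skipped.
    for line in body.lines().filter(|l| !l.trim().is_empty()) {
        let chunk: ChatChunk = serde_json::from_str(line)?;
        if let Some(msg) = chunk.message {
            text.push_str(&msg.content);
        }
        // The terminal chunk carries `done: true` and no further content.
        if chunk.done {
            break;
        }
    }
    Ok(text)
}
```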
§Usage
```rust
use neuron_provider_ollama::Ollama;
use neuron_types::{CompletionRequest, Message, Provider};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Default constructor (no auth needed for local Ollama)
    let provider = Ollama::new()
        .model("llama3.2")
        .keep_alive("5m");

    // Or from environment (reads OLLAMA_HOST if set, defaults to localhost:11434)
    let provider = Ollama::from_env()?;

    let request = CompletionRequest {
        messages: vec![Message::user("What is the capital of France?")],
        max_tokens: Some(256),
        ..Default::default()
    };

    let response = provider.complete(request).await?;
    for block in &response.message.content {
        println!("{block:?}");
    }
    Ok(())
}
```

§Error handling
Each provider defines its own `ProviderError` type. If you’re using multiple providers, pattern-match on the specific provider’s error rather than expecting a shared error type across providers.
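A sketch of what that looks like at a call site, assuming `complete`’s signature from the usage example above; the variants of this crate’s `ProviderError` are not listed on this page, so the sketch logs the error rather than matching variants:

```rust
use neuron_provider_ollama::{Ollama, ProviderError};
use neuron_types::{CompletionRequest, Message, Provider};

// Call-site handling: the error type is this crate's own `ProviderError`,
// so handling stays local to the provider in use.
async fn ask(provider: &Ollama, prompt: &str) -> Result<(), ProviderError> {
    let request = CompletionRequest {
        messages: vec![Message::user(prompt)],
        ..Default::default()
    };
    match provider.complete(request).await {
        Ok(response) => {
            for block in &response.message.content {
                println!("{block:?}");
            }
            Ok(())
        }
        Err(e) => {
            // Another provider crate's error is a different type entirely
            // and needs its own handling at its own call site.
            eprintln!("ollama provider failed: {e}");
            Err(e)
        }
    }
}
```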
§Part of neuron
This crate is one block in the neuron composable agent toolkit. It depends only on `neuron-types`.
§License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.
§Re-exports
pub use client::Ollama;
§Modules
- `client` – Ollama API client struct and builder.
- `mapping` – Request/response mapping between neuron-types and the Ollama Chat API format.
§Structs
- `StreamHandle` – Handle to a streaming completion response.
§Enums
- `ProviderError` – Errors from LLM provider operations.
- `StreamEvent` – An event emitted during streaming completion.