§neuron
Rust library for production AI agents. Add one dependency, enable the features you need, and compose providers, tools, context strategies, and a runtime into agents that work.
§Why neuron?
Most AI agent libraries are Python-first, framework-shaped, and opinionated. neuron is none of those.
- Rust-native — no Python interop, no runtime overhead
- Composable — use one crate or all of them, no buy-in required
- Model-agnostic — Anthropic, OpenAI, Ollama, or bring your own
- Context-aware — sliding window, compaction, and token counting built in
- MCP-native — first-class Model Context Protocol support
- No magic — it’s a while loop with tools attached, not a framework
§High-Level Features
- Multi-provider LLM support — Anthropic Claude, OpenAI GPT, Ollama local models, or implement the `Provider` trait for your own
- Composable tool middleware — axum-style middleware pipeline for tool calls: logging, auth, rate limiting, retries
- Context compaction — sliding window, tool result clearing, LLM summarization, and composite strategies to keep conversations within token limits
- Model Context Protocol — full MCP client and server, stdio and HTTP transports, automatic tool bridging
- Input/output guardrails — safety checks that run before input reaches the LLM or before output reaches the user, with tripwire semantics
- Sessions and sub-agents — persist conversations, spawn isolated sub-agents with filtered tool sets and depth guards
- Durable execution — wrap side effects for crash recovery via Temporal, Restate, or Inngest
- Streaming — real-time token streaming with hook integration across all providers
- Usage limits — `UsageLimits` on `LoopConfig` enforces token budget constraints; `LoopError::UsageLimitExceeded` when exceeded
- Tool timeouts — `TimeoutMiddleware` wraps tool calls in `tokio::time::timeout` to prevent runaway execution
- Structured output validation — `StructuredOutputValidator` validates tool input against JSON Schema, returning `ToolError::ModelRetry` for self-correction; `RetryLimitedValidator` adds a retry cap
- OpenTelemetry instrumentation — `OtelHook` in `neuron-otel` emits `tracing` spans following GenAI semantic conventions with opt-in content capture
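The middleware pipeline above follows the axum pattern: each layer wraps the next handler. A minimal std-only sketch of that composition — `Handler`, `logging_layer`, and `retry_layer` are illustrative names, not neuron's actual API:

```rust
// Sketch of axum-style middleware layering for tool calls.
// NOT neuron's real types -- these names are hypothetical.
type Handler = Box<dyn Fn(&str) -> Result<String, String>>;

/// Logs the input before, and the result after, the inner handler runs.
fn logging_layer(next: Handler) -> Handler {
    Box::new(move |input| {
        eprintln!("tool input: {input}");
        let result = next(input);
        eprintln!("tool result: {result:?}");
        result
    })
}

/// Re-invokes the inner handler up to `attempts` times on failure.
fn retry_layer(next: Handler, attempts: usize) -> Handler {
    Box::new(move |input| {
        let mut last_err = String::new();
        for _ in 0..attempts {
            match next(input) {
                Ok(out) => return Ok(out),
                Err(e) => last_err = e,
            }
        }
        Err(last_err)
    })
}

fn main() {
    let base: Handler = Box::new(|input: &str| Ok::<String, String>(format!("echo: {input}")));
    // Layers wrap inside-out: logging observes the final outcome of the retries.
    let handler = logging_layer(retry_layer(base, 3));
    assert_eq!(handler("hi").unwrap(), "echo: hi");
}
```

The same shape lets logging, auth, rate limiting, and retries stack in any order without the innermost tool knowing about any of them.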
§Installation
```shell
cargo add neuron                 # Anthropic provider included by default
cargo add neuron --features full # all providers + MCP + runtime
```
§Quick Start
```rust
use neuron::prelude::*;
use neuron::anthropic::Anthropic;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = Anthropic::new("your-api-key").model("claude-sonnet-4-20250514");
    let context = SlidingWindowStrategy::new(10, 100_000);
    let tools = ToolRegistry::new();

    let mut agent = AgentLoop::builder(provider, context)
        .system_prompt("You are a helpful assistant.")
        .max_turns(10)
        .tools(tools)
        .build();

    let ctx = ToolContext {
        cwd: std::env::current_dir()?,
        session_id: "demo".into(),
        environment: Default::default(),
        cancellation_token: Default::default(),
        progress_reporter: None,
    };

    let result = agent.run_text("Hello!", &ctx).await?;
    println!("{}", result.response);
    Ok(())
}
```
§Feature Flags
| Feature | Enables | Default |
|---|---|---|
| `anthropic` | `neuron::anthropic` (Anthropic Claude) | yes |
| `openai` | `neuron::openai` (OpenAI GPT) | no |
| `ollama` | `neuron::ollama` (Ollama local) | no |
| `mcp` | `neuron::mcp` (Model Context Protocol) | no |
| `runtime` | `neuron::runtime` (sessions, guardrails) | no |
| `otel` | `neuron::otel` (OpenTelemetry instrumentation) | no |
| `full` | All of the above | no |
§Module Map
| Module | Underlying Crate | Contents |
|---|---|---|
| `neuron::types` | `neuron-types` | Messages, traits, errors, streaming |
| `neuron::tool` | `neuron-tool` | `ToolRegistry`, middleware pipeline |
| `neuron::context` | `neuron-context` | Token counting, compaction strategies |
| `neuron::r#loop` | `neuron-loop` | `AgentLoop`, `LoopConfig`, `AgentResult` |
| `neuron::anthropic` | `neuron-provider-anthropic` | Anthropic client (feature-gated) |
| `neuron::openai` | `neuron-provider-openai` | OpenAI client (feature-gated) |
| `neuron::ollama` | `neuron-provider-ollama` | Ollama client (feature-gated) |
| `neuron::mcp` | `neuron-mcp` | `McpClient`, `McpToolBridge` (feature-gated) |
| `neuron::runtime` | `neuron-runtime` | Sessions, guardrails (feature-gated) |
| `neuron::otel` | `neuron-otel` | OTel instrumentation (feature-gated) |
Note: `loop` is a Rust keyword, so the loop module is accessed as `neuron::r#loop`. In practice, import types directly from the prelude or from `neuron_loop`.
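Both forms name the same item; the raw-identifier path is only needed when bypassing the prelude (illustrative, assuming the prelude re-export listed below under Prelude Contents):

```rust
use neuron::prelude::AgentLoop;     // preferred: via the prelude
use neuron::r#loop::AgentLoop as _; // direct: raw-identifier module path
```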
§Ecosystem
Each block is also available as a standalone crate:
| Crate | Description |
|---|---|
| `neuron-types` | Core traits — `Provider`, `Tool`, `ContextStrategy` |
| `neuron-tool` | Tool registry with composable middleware |
| `neuron-tool-macros` | `#[neuron_tool]` derive macro |
| `neuron-context` | Token counting and compaction strategies |
| `neuron-loop` | Agentic while loop |
| `neuron-provider-anthropic` | Anthropic Claude (Messages API, streaming, prompt caching) |
| `neuron-provider-openai` | OpenAI GPT (Chat Completions API, streaming) |
| `neuron-provider-ollama` | Ollama (local NDJSON streaming) |
| `neuron-mcp` | MCP client + server |
| `neuron-runtime` | Sessions, sub-agents, guardrails, durability |
| `neuron-otel` | OTel instrumentation — GenAI semantic conventions |
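The guardrails in `neuron-runtime` use tripwire semantics: a check either passes text through or trips, aborting the turn before input reaches the LLM or output reaches the user. A minimal std-only sketch of the pattern — the enum and function names are illustrative, not neuron's actual types:

```rust
/// Outcome of a guardrail check (hypothetical types, not neuron's API).
#[derive(Debug, PartialEq)]
enum GuardrailOutcome {
    /// The text passes through unchanged.
    Pass,
    /// Tripwire: abort the turn with a reason, before the text travels further.
    Tripwire(String),
}

/// Example input guardrail: refuse prompts that appear to contain a credential.
fn credential_guardrail(input: &str) -> GuardrailOutcome {
    if input.contains("api_key=") {
        GuardrailOutcome::Tripwire("input appears to contain a credential".to_string())
    } else {
        GuardrailOutcome::Pass
    }
}

fn main() {
    assert_eq!(credential_guardrail("hello"), GuardrailOutcome::Pass);
    assert!(matches!(
        credential_guardrail("use api_key=abc123"),
        GuardrailOutcome::Tripwire(_)
    ));
}
```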
§Comparison
How neuron compares to the two most established Rust alternatives:
| Capability | neuron | Rig | genai |
|---|---|---|---|
| Crate independence | One crate per provider | All providers in rig-core | Single crate |
| LLM providers | Anthropic, OpenAI, Ollama | Many | Many |
| Tool middleware | Composable chain | None | None |
| Context compaction | 4 strategies, token-aware | None | None |
| MCP (full spec) | Client + server + bridge | Client (rmcp) | None |
| Durable execution | DurableContext trait | None | None |
| Guardrails / sandbox | InputGuardrail, OutputGuardrail, PermissionPolicy, Sandbox | None | None |
| Sessions | SessionStorage trait + impls | None | None |
| Vector stores / RAG | None | Many integrations | None |
| Usage limits | UsageLimits token/request budget | None | None |
| Tool timeouts | TimeoutMiddleware per-tool | None | None |
| Structured output validation | StructuredOutputValidator with self-correction | None | None |
| OpenTelemetry | GenAI semantic conventions (neuron-otel) | Full integration | None |
| Embeddings | None | EmbeddingModel trait | Yes |
Where others lead today: Rig has a larger provider and vector store ecosystem with an extensive example set, and genai covers many providers in one ergonomic crate. neuron leads on agent-loop architecture — middleware, compaction, guardrails, durability — while its provider and integration ecosystem is still growing.
§Prelude Contents
The neuron::prelude module re-exports the most commonly used types:
- `CompletionRequest`, `CompletionResponse`, `Message`, `Role`, `ContentBlock`, `ContentItem`, `SystemPrompt`, `TokenUsage`, `StopReason` — conversation primitives
- `Provider` — the LLM provider trait
- `Tool`, `ToolDyn`, `ToolDefinition`, `ToolContext`, `ToolOutput`, `ToolError` — tool system types
- `ToolRegistry` — tool registration and dispatch
- `SlidingWindowStrategy` — context compaction
- `AgentLoop`, `AgentLoopBuilder`, `AgentResult`, `LoopConfig` — the agentic loop
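Conceptually, `SlidingWindowStrategy::new(10, 100_000)` from the Quick Start keeps a bounded tail of the conversation: at most N recent messages within a token budget. A std-only sketch of the idea — the function and the 4-chars-per-token estimate below are illustrative, not neuron's implementation:

```rust
/// Keep at most `max_msgs` of the most recent messages, then drop the
/// oldest until the estimated token total fits in `max_tokens`.
/// (Illustrative sketch -- not neuron's actual SlidingWindowStrategy.)
fn sliding_window(messages: &[String], max_msgs: usize, max_tokens: usize) -> Vec<String> {
    // Crude estimate: roughly 4 characters per token.
    let est = |m: &String| m.len() / 4 + 1;
    let start = messages.len().saturating_sub(max_msgs);
    let mut window = messages[start..].to_vec();
    // Always retain at least the most recent message.
    while window.len() > 1 && window.iter().map(est).sum::<usize>() > max_tokens {
        window.remove(0); // evict the oldest message first
    }
    window
}

fn main() {
    let msgs: Vec<String> = (0..5).map(|i| format!("message {i} {}", "x".repeat(36))).collect();
    // Window of 3 messages overruns the 30-token budget, so one more is evicted.
    let w = sliding_window(&msgs, 3, 30);
    assert_eq!(w.len(), 2);
    assert!(w[1].starts_with("message 4"));
}
```

neuron's real strategies additionally cover tool result clearing, LLM summarization, and composition, per the feature list above.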
§Learning Path
Run examples in this order to learn neuron incrementally:
1. `neuron-provider-anthropic/examples/basic.rs` — single completion
2. `neuron-provider-anthropic/examples/streaming.rs` — real-time token streaming
3. `neuron-provider-anthropic/examples/context_management.rs` — server-side compaction
4. `neuron-provider-openai/examples/basic.rs` — OpenAI provider usage
5. `neuron-provider-openai/examples/embeddings.rs` — `EmbeddingProvider` with cosine similarity
6. `neuron-provider-ollama/examples/basic.rs` — local model inference
7. `neuron-tool/examples/custom_tool.rs` — define and register tools
8. `neuron-tool/examples/derive_tool.rs` — `#[neuron_tool]` proc-macro
9. `neuron-tool/examples/middleware.rs` — composable tool middleware
10. `neuron-tool/examples/model_retry.rs` — `ToolError::ModelRetry` self-correction
11. `neuron-loop/examples/agent_loop.rs` — multi-turn agent with tools (no API key)
12. `neuron-loop/examples/multi_turn.rs` — conversation accumulation (no API key)
13. `neuron-loop/examples/cancellation.rs` — `CancellationToken` stops the loop cooperatively
14. `neuron-loop/examples/parallel_tools.rs` — concurrent tool execution via `join_all`
15. `neuron-context/examples/compaction.rs` — token counting and compaction
16. `neuron/examples/full_agent.rs` — end-to-end production agent
17. `neuron/examples/structured_output.rs` — JSON Schema output
18. `neuron/examples/multi_provider.rs` — swap providers at runtime
19. `neuron/examples/testing_agents.rs` — mock provider patterns for unit testing
20. `neuron-runtime/examples/guardrails.rs` — input/output safety checks
21. `neuron-runtime/examples/sessions.rs` — conversation persistence
22. `neuron-runtime/examples/tracing_hook.rs` — structured tracing from hook events
23. `neuron-runtime/examples/local_durable.rs` — passthrough durable context
24. `neuron-runtime/examples/full_production.rs` — sessions + guardrails + tracing composed
25. `neuron-mcp/examples/mcp_client.rs` — MCP server integration
§Part of neuron
This is the root crate of neuron. For
maximum independence, depend on individual block crates (neuron-types,
neuron-provider-anthropic, etc.) directly.
§License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.
§Modules
- anthropic
- Anthropic Claude provider (Messages API, streaming, prompt caching).
- context
- Context management — token counting, compaction strategies, persistent context.
- loop
- The agentic while loop — composes provider + tools + context.
- prelude
- Common imports for working with agent blocks.
- tool
- Tool registry, middleware pipeline, and built-in middleware.
- types
- Shared types and traits — the lingua franca of all blocks.