Crate neuron


§neuron


Rust library for production AI agents. Add one dependency, enable the features you need, and compose providers, tools, context strategies, and a runtime into agents that work.

§Why neuron?

Most AI agent libraries are Python-first, framework-shaped, and opinionated. neuron is none of those.

  • Rust-native — no Python interop, no runtime overhead
  • Composable — use one crate or all of them, no buy-in required
  • Model-agnostic — Anthropic, OpenAI, Ollama, or bring your own
  • Context-aware — sliding window, compaction, and token counting built in
  • MCP-native — first-class Model Context Protocol support
  • No magic — it’s a while loop with tools attached, not a framework
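The "while loop with tools attached" claim can be made concrete. Below is a minimal, dependency-free sketch of the pattern; the names (`fake_model`, `run_agent`, `ModelOutput`) are illustrative stand-ins, not neuron's actual API:

```rust
use std::collections::HashMap;

// Toy stand-in for a provider's output: either a tool-use request or text.
enum ModelOutput {
    ToolCall { name: &'static str, input: &'static str },
    Text(String),
}

// A fake model: asks for the clock tool once, then answers.
fn fake_model(history: &[String]) -> ModelOutput {
    if history.iter().any(|m| m.starts_with("tool_result:")) {
        ModelOutput::Text("The time is noon.".to_string())
    } else {
        ModelOutput::ToolCall { name: "clock", input: "" }
    }
}

// The whole pattern: call the model, run any requested tool, feed the
// result back into the history, and repeat until the model emits text.
fn run_agent(user_message: &str, max_turns: usize) -> String {
    let mut tools: HashMap<&str, fn(&str) -> String> = HashMap::new();
    tools.insert("clock", |_input| "12:00".to_string());

    let mut history = vec![format!("user: {user_message}")];
    for _ in 0..max_turns {
        match fake_model(&history) {
            ModelOutput::ToolCall { name, input } => {
                let result = tools[name](input);
                history.push(format!("tool_result: {result}"));
            }
            ModelOutput::Text(text) => return text,
        }
    }
    "max turns exceeded".to_string()
}

fn main() {
    println!("{}", run_agent("what time is it?", 10));
}
```

Everything neuron adds (providers, middleware, compaction, guardrails) hangs off this same skeleton.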

§High-Level Features

  • Multi-provider LLM support — Anthropic Claude, OpenAI GPT, Ollama local models, or implement the Provider trait for your own
  • Composable tool middleware — axum-style middleware pipeline for tool calls: logging, auth, rate limiting, retries
  • Context compaction — sliding window, tool result clearing, LLM summarization, and composite strategies to keep conversations within token limits
  • Model Context Protocol — full MCP client and server, stdio and HTTP transports, automatic tool bridging
  • Input/output guardrails — safety checks that run before input reaches the LLM or before output reaches the user, with tripwire semantics
  • Sessions and sub-agents — persist conversations, spawn isolated sub-agents with filtered tool sets and depth guards
  • Durable execution — wrap side effects for crash recovery via Temporal, Restate, or Inngest
  • Streaming — real-time token streaming with hook integration across all providers
  • Usage limits — UsageLimits on LoopConfig enforces token budget constraints; the loop returns LoopError::UsageLimitExceeded when a budget is exceeded
  • Tool timeouts — TimeoutMiddleware wraps tool calls in tokio::time::timeout to prevent runaway execution
  • Structured output validation — StructuredOutputValidator validates tool input against JSON Schema, returning ToolError::ModelRetry for self-correction; RetryLimitedValidator adds a retry cap
  • OpenTelemetry instrumentation — OtelHook in neuron-otel emits tracing spans following GenAI semantic conventions with opt-in content capture
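To illustrate what "axum-style" tool middleware means, here is a conceptual sketch of the pattern: each layer takes the next handler and returns a wrapped handler. The `Handler`, `with_logging`, and `with_input_limit` types and functions are illustrative only, not neuron's real signatures:

```rust
// A tool handler: takes a string input, returns a string output.
type Handler = Box<dyn Fn(&str) -> String>;

// A middleware layer wraps the next handler, axum/tower style.
fn with_logging(next: Handler) -> Handler {
    Box::new(move |input: &str| {
        eprintln!("tool called with: {input}");
        let out = next(input);
        eprintln!("tool returned: {out}");
        out
    })
}

// Another layer: reject oversized inputs before they reach the tool.
fn with_input_limit(max_len: usize, next: Handler) -> Handler {
    Box::new(move |input: &str| {
        if input.len() > max_len {
            return "error: input too long".to_string();
        }
        next(input)
    })
}

fn main() {
    let tool: Handler = Box::new(|input: &str| input.to_uppercase());
    // Layers compose outside-in: the limit check runs first, then logging.
    let tool = with_input_limit(16, with_logging(tool));
    println!("{}", tool("hello"));
}
```

Logging, auth, rate limiting, and retries all fit this handler-wrapping shape, which is why they compose freely.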

§Installation

cargo add neuron                    # Anthropic provider included by default
cargo add neuron --features full    # all providers + MCP + runtime
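Equivalently in Cargo.toml (the version is elided here; pin the current release from crates.io). The feature names mirror the feature-flag table in this document:

```toml
[dependencies]
# Default features include the Anthropic provider; add others as needed.
neuron = { version = "*", features = ["openai", "mcp"] }
```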

§Quick Start

use neuron::prelude::*;
use neuron::anthropic::Anthropic;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let provider = Anthropic::new("your-api-key").model("claude-sonnet-4-20250514");
    let context = SlidingWindowStrategy::new(10, 100_000);
    let tools = ToolRegistry::new();

    let mut agent = AgentLoop::builder(provider, context)
        .system_prompt("You are a helpful assistant.")
        .max_turns(10)
        .tools(tools)
        .build();

    let ctx = ToolContext {
        cwd: std::env::current_dir()?,
        session_id: "demo".into(),
        environment: Default::default(),
        cancellation_token: Default::default(),
        progress_reporter: None,
    };
    let result = agent.run_text("Hello!", &ctx).await?;
    println!("{}", result.response);
    Ok(())
}

§Feature Flags

| Feature | Enables | Default |
|---|---|---|
| anthropic | neuron::anthropic (Anthropic Claude) | yes |
| openai | neuron::openai (OpenAI GPT) | no |
| ollama | neuron::ollama (Ollama local) | no |
| mcp | neuron::mcp (Model Context Protocol) | no |
| runtime | neuron::runtime (sessions, guardrails) | no |
| otel | neuron::otel (OpenTelemetry instrumentation) | no |
| full | All of the above | no |

§Module Map

| Module | Underlying Crate | Contents |
|---|---|---|
| neuron::types | neuron-types | Messages, traits, errors, streaming |
| neuron::tool | neuron-tool | ToolRegistry, middleware pipeline |
| neuron::context | neuron-context | Token counting, compaction strategies |
| neuron::r#loop | neuron-loop | AgentLoop, LoopConfig, AgentResult |
| neuron::anthropic | neuron-provider-anthropic | Anthropic client (feature-gated) |
| neuron::openai | neuron-provider-openai | OpenAi client (feature-gated) |
| neuron::ollama | neuron-provider-ollama | Ollama client (feature-gated) |
| neuron::mcp | neuron-mcp | McpClient, McpToolBridge (feature-gated) |
| neuron::runtime | neuron-runtime | Sessions, guardrails (feature-gated) |
| neuron::otel | neuron-otel | OTel instrumentation (feature-gated) |

Note: loop is a Rust keyword, so the loop module is accessed as neuron::r#loop. In practice, import types directly from the prelude or from neuron_loop.
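The raw-identifier syntax involved is standard Rust, not neuron-specific. A self-contained demonstration (the module contents here are placeholders):

```rust
// `loop` is a Rust keyword, so a module named `loop` must be written
// with the raw-identifier prefix r# at both the definition and use sites.
mod r#loop {
    pub struct AgentLoop;

    impl AgentLoop {
        pub fn describe() -> &'static str {
            "provider + tools + context, in a while loop"
        }
    }
}

fn main() {
    // The same syntax applies to the real crate: `use neuron::r#loop::AgentLoop;`
    use crate::r#loop::AgentLoop;
    println!("{}", AgentLoop::describe());
}
```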

§Ecosystem

Each block is also available as a standalone crate:

| Crate | Description |
|---|---|
| neuron-types | Core traits — Provider, Tool, ContextStrategy |
| neuron-tool | Tool registry with composable middleware |
| neuron-tool-macros | #[neuron_tool] derive macro |
| neuron-context | Token counting and compaction strategies |
| neuron-loop | Agentic while loop |
| neuron-provider-anthropic | Anthropic Claude (Messages API, streaming, prompt caching) |
| neuron-provider-openai | OpenAI GPT (Chat Completions API, streaming) |
| neuron-provider-ollama | Ollama (local NDJSON streaming) |
| neuron-mcp | MCP client + server |
| neuron-runtime | Sessions, sub-agents, guardrails, durability |
| neuron-otel | OTel instrumentation — GenAI semantic conventions |

§Comparison

How neuron compares to the two most established Rust alternatives:

| Capability | neuron | Rig | genai |
|---|---|---|---|
| Crate independence | One crate per provider | All providers in rig-core | Single crate |
| LLM providers | Anthropic, OpenAI, Ollama | Many | Many |
| Tool middleware | Composable chain | None | None |
| Context compaction | 4 strategies, token-aware | None | None |
| MCP (full spec) | Client + server + bridge | Client (rmcp) | None |
| Durable execution | DurableContext trait | None | None |
| Guardrails / sandbox | InputGuardrail, OutputGuardrail, PermissionPolicy, Sandbox | None | None |
| Sessions | SessionStorage trait + impls | None | None |
| Vector stores / RAG | None | Many integrations | None |
| Usage limits | UsageLimits token/request budget | None | None |
| Tool timeouts | TimeoutMiddleware per-tool | None | None |
| Structured output validation | StructuredOutputValidator with self-correction | None | None |
| OpenTelemetry | GenAI semantic conventions (neuron-otel) | Full integration | None |
| Embeddings | None | EmbeddingModel trait | Yes |

Where others lead today: Rig has a larger provider and vector store ecosystem with an extensive example set, and genai covers many providers in one ergonomic crate. neuron leads on agent-runtime architecture (middleware, compaction, MCP, durability, guardrails); its provider and integration ecosystem is still growing.

§Prelude Contents

The neuron::prelude module re-exports the most commonly used types:

  • CompletionRequest, CompletionResponse, Message, Role, ContentBlock, ContentItem, SystemPrompt, TokenUsage, StopReason — conversation primitives.
  • Provider — the LLM provider trait.
  • Tool, ToolDyn, ToolDefinition, ToolContext, ToolOutput, ToolError — tool system types.
  • ToolRegistry — tool registration and dispatch.
  • SlidingWindowStrategy — context compaction.
  • AgentLoop, AgentLoopBuilder, AgentResult, LoopConfig — the agentic loop.

§Learning Path

Run examples in this order to learn neuron incrementally:

  1. neuron-provider-anthropic/examples/basic.rs — single completion
  2. neuron-provider-anthropic/examples/streaming.rs — real-time token streaming
  3. neuron-provider-anthropic/examples/context_management.rs — server-side compaction
  4. neuron-provider-openai/examples/basic.rs — OpenAI provider usage
  5. neuron-provider-openai/examples/embeddings.rs — EmbeddingProvider with cosine similarity
  6. neuron-provider-ollama/examples/basic.rs — local model inference
  7. neuron-tool/examples/custom_tool.rs — define and register tools
  8. neuron-tool/examples/derive_tool.rs — #[neuron_tool] proc-macro
  9. neuron-tool/examples/middleware.rs — composable tool middleware
  10. neuron-tool/examples/model_retry.rs — ToolError::ModelRetry self-correction
  11. neuron-loop/examples/agent_loop.rs — multi-turn agent with tools (no API key)
  12. neuron-loop/examples/multi_turn.rs — conversation accumulation (no API key)
  13. neuron-loop/examples/cancellation.rs — CancellationToken stops loop cooperatively
  14. neuron-loop/examples/parallel_tools.rs — concurrent tool execution via join_all
  15. neuron-context/examples/compaction.rs — token counting and compaction
  16. neuron/examples/full_agent.rs — end-to-end production agent
  17. neuron/examples/structured_output.rs — JSON Schema output
  18. neuron/examples/multi_provider.rs — swap providers at runtime
  19. neuron/examples/testing_agents.rs — mock provider patterns for unit testing
  20. neuron-runtime/examples/guardrails.rs — input/output safety checks
  21. neuron-runtime/examples/sessions.rs — conversation persistence
  22. neuron-runtime/examples/tracing_hook.rs — structured tracing from hook events
  23. neuron-runtime/examples/local_durable.rs — passthrough durable context
  24. neuron-runtime/examples/full_production.rs — sessions + guardrails + tracing composed
  25. neuron-mcp/examples/mcp_client.rs — MCP server integration

§Part of neuron

This is the root crate of neuron. For maximum independence, depend on individual block crates (neuron-types, neuron-provider-anthropic, etc.) directly.

§License

Licensed under either of Apache License, Version 2.0 or MIT License at your option.

Modules§

anthropic
Anthropic Claude provider (Messages API, streaming, prompt caching).
context
Context management — token counting, compaction strategies, persistent context.
loop
The agentic while loop — composes provider + tools + context.
prelude
Common imports for working with agent blocks.
tool
Tool registry, middleware pipeline, and built-in middleware.
types
Shared types and traits — the lingua franca of all blocks.