# neuron-context
Context management crate for the neuron ecosystem. Provides token estimation, context compaction strategies, persistent context sections, and system prompt injection. These are the building blocks that keep an agent's conversation within token limits without losing critical information.
## Installation
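Assuming the crate is published under its ecosystem name (the version shown is illustrative):

```toml
[dependencies]
neuron-context = "0.1"
```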
## Key Types
- `TokenCounter` -- heuristic token estimator using a configurable chars-per-token ratio (default 4.0). This is an approximation — expect 10-20% variance vs actual tokenizer counts. Fine for compaction triggers, not for exact token budgeting
- `SlidingWindowStrategy` -- keeps system messages plus the last N non-system messages
- `ToolResultClearingStrategy` -- replaces old tool results with `[tool result cleared]` placeholders
- `SummarizationStrategy<P: Provider>` -- summarizes old messages via an LLM provider
- `CompositeStrategy` -- chains multiple strategies in order using `BoxedStrategy` for type erasure
- `PersistentContext` -- manages named `ContextSection` entries that persist across compaction
- `SystemInjector` -- injects system reminders at configurable triggers (turn count, token threshold)
## Strategies
All strategies implement the ContextStrategy trait from neuron-types:
| Strategy | Mechanism | Use case |
|---|---|---|
| `SlidingWindowStrategy` | Drop oldest non-system messages | Simple agents with short context needs |
| `ToolResultClearingStrategy` | Replace old tool outputs with placeholders | Tool-heavy agents where results grow large |
| `SummarizationStrategy` | LLM-powered summarization of old messages | Long-running agents needing full context awareness |
| `CompositeStrategy` | Apply strategies in sequence | Combine clearing + sliding window, etc. |
## Usage
```rust
use neuron_context::{SlidingWindowStrategy, TokenCounter, ToolResultClearingStrategy};

// Token estimation
let counter = TokenCounter::new(); // 4.0 chars/token default
let tokens = counter.estimate_text("some message text");
let custom = TokenCounter::with_ratio(3.5); // adjust for specific models

// Sliding window: keep last 20 messages, compact above 100k tokens
let strategy = SlidingWindowStrategy::new(20, 100_000);
let messages = vec![/* conversation messages */];
let token_count = strategy.token_estimate(&messages);
if strategy.should_compact(token_count) {
    // run compaction
}

// Tool result clearing: keep 3 most recent tool results, compact above 80k tokens
let clearing = ToolResultClearingStrategy::new(3, 80_000);
```
For `SummarizationStrategy`, pass an LLM provider to summarize old messages:

```rust
use neuron_context::SummarizationStrategy;

// `provider` is any type implementing the Provider trait
let strategy = SummarizationStrategy::new(provider, 5);
// Keeps 5 most recent messages verbatim, summarizes everything older
```
## Persistent Context
Build structured system prompts from independently managed sections:
```rust
use neuron_context::{ContextSection, PersistentContext};

let mut ctx = PersistentContext::new();
ctx.add_section(ContextSection::new("Role", "You are a helpful coding assistant."));
ctx.add_section(ContextSection::new("Rules", "Be concise. Avoid speculation."));
let system_prompt = ctx.render();
// Produces:
// ## Role
// You are a helpful coding assistant.
//
// ## Rules
// Be concise. Avoid speculation.
```
## System Injector
Inject reminders into the system prompt based on turn count or token thresholds:
```rust
use neuron_context::SystemInjector;

let mut injector = SystemInjector::new();
injector.add_rule(/* fire every N turns */);
injector.add_rule(/* fire above a token threshold */);

// Check each turn: returns all matching rules
let injected = injector.check(turn, token_count);
assert!(!injected.is_empty());
```
## Part of neuron
This crate is part of neuron, a composable building-blocks library for AI agents in Rust.
## License
Licensed under either of Apache License, Version 2.0 or MIT License at your option.