# RecallBench
A universal benchmark harness for AI memory systems. Evaluate any memory system against established academic datasets with a single command.
## Features
- Multi-system: Benchmark any memory system via trait implementation, HTTP adapter, or subprocess adapter
- Multi-dataset: LongMemEval, LoCoMo, ConvoMem, MemBench, MemoryAgentBench, HaluMem, custom JSON
- Multi-provider LLM: Claude, ChatGPT, Gemini, Codex — CLI subscription (no API key) or direct API
- OpenAI-compatible endpoints: Ollama, vLLM, LM Studio, DeepInfra, Together, Groq — zero-cost local iteration
- Quick mode: Stratified sampling for fast directional signal during development (`--quick`)
- Latency profiling: p50/p95/p99 per pipeline stage (ingest, retrieval, generation, judge)
- Dual-model judging: Primary + tiebreaker judge with calibration suite
- Longevity testing: Measure accuracy degradation over time as memory accumulates
- Web UI: Local browser interface for exploring results (`recallbench serve`)
- Resume: Interrupt and resume benchmark runs without losing progress
- Reports: Terminal table, Markdown, JSON, CSV output formats
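The dual-model judging listed above can be pictured as a simple escalation scheme: the tiebreaker model is consulted only when the primary judge cannot reach a confident verdict. The sketch below illustrates that idea only; `Verdict`, `resolve`, and the surrounding names are illustrative, not RecallBench's actual API.

```rust
// Illustrative tiebreak logic for dual-model judging. All names here
// are assumptions for the sake of the example, not RecallBench's API.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Verdict {
    Correct,
    Incorrect,
    Uncertain,
}

// The tiebreaker is a closure so the (more expensive) second model is
// only invoked when the primary verdict is uncertain.
fn resolve(primary: Verdict, tiebreaker: impl Fn() -> Verdict) -> Verdict {
    match primary {
        // A confident primary verdict stands on its own.
        Verdict::Correct | Verdict::Incorrect => primary,
        // Only an uncertain primary escalates to the tiebreaker.
        Verdict::Uncertain => tiebreaker(),
    }
}

fn main() {
    assert_eq!(resolve(Verdict::Correct, || Verdict::Incorrect), Verdict::Correct);
    assert_eq!(resolve(Verdict::Uncertain, || Verdict::Incorrect), Verdict::Incorrect);
    println!("tiebreak logic ok");
}
```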
## Quick Start
```bash
# (argument forms below are illustrative; see `recallbench --help` for exact flags)

# Install
cargo install recallbench

# Download a dataset
recallbench download longmemeval

# Run benchmark with echo adapter (test mode)
recallbench run longmemeval --system echo

# Quick mode for fast iteration (50 stratified questions)
recallbench run longmemeval --system echo --quick

# Generate report from results
recallbench report results/

# Browse results in web UI
recallbench serve
```
## Supported Datasets
| Dataset | Source | Description |
|---|---|---|
| LongMemEval | ICLR 2025 | 500 questions, 5 memory abilities, 7 types |
| LoCoMo | Snap Research | Long-context conversation memory |
| ConvoMem | memorybench | Conversational memory evaluation |
| MemBench | ACL 2025 | Multi-aspect (effectiveness, efficiency, capacity) |
| MemoryAgentBench | ICLR 2026 | Selective forgetting, fact consolidation |
| HaluMem | MemTensor | Memory hallucination detection |
| Custom | User-defined | Any JSON dataset matching the schema |
## Supported LLM Providers
Each provider supports CLI subscription mode (the default, no API key required), direct API mode, or both.
| Provider | CLI Command | API | Config Key |
|---|---|---|---|
| Claude | `claude --print` | Anthropic Messages API | `llm.anthropic` |
| ChatGPT | `chatgpt` | OpenAI Chat Completions | `llm.openai` |
| Gemini | `gemini` | Google Generative AI | `llm.gemini` |
| Codex | `codex` | — | — |
| Custom | — | Any OpenAI-compatible endpoint | `llm.custom`, `llm.local` |
## Local Inference (Zero Cost)
```toml
# recallbench.toml
# (key names below are illustrative; check the docs for exact names)
[llm.local]
base_url = "http://localhost:11434/v1"
api_key = ""
model = "llama3.1:70b"
temperature = 0
```
## Adding Your Memory System
### Option 1: Implement the Rust trait

```rust
use recallbench::MemorySystem;
```
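As a sketch of what an implementation might look like: the exact `MemorySystem` trait signature is not shown in this README, so the trait definition and method names below are assumptions, mirroring the reset/ingest/retrieve shape of the HTTP and subprocess adapters.

```rust
// Illustrative only: the real recallbench::MemorySystem trait may differ.
// We define an assumed trait locally so the sketch is self-contained.

trait MemorySystem {
    fn reset(&mut self);
    fn ingest(&mut self, text: &str);
    fn retrieve(&self, query: &str) -> Vec<String>;
}

/// A toy keyword-matching memory, in the spirit of the echo adapter.
struct KeywordMemory {
    entries: Vec<String>,
}

impl MemorySystem for KeywordMemory {
    fn reset(&mut self) {
        self.entries.clear();
    }

    fn ingest(&mut self, text: &str) {
        self.entries.push(text.to_string());
    }

    fn retrieve(&self, query: &str) -> Vec<String> {
        // Return every stored entry that shares a word with the query.
        let q = query.to_lowercase();
        self.entries
            .iter()
            .filter(|e| q.split_whitespace().any(|w| e.to_lowercase().contains(w)))
            .cloned()
            .collect()
    }
}

fn main() {
    let mut mem = KeywordMemory { entries: Vec::new() };
    mem.ingest("Alice moved to Berlin in 2021");
    mem.ingest("Bob likes espresso");
    let hits = mem.retrieve("berlin");
    assert_eq!(hits.len(), 1);
    println!("{:?}", hits);
}
```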
### Option 2: HTTP adapter (any language)
```toml
# my-system.toml
# (key names below are illustrative)
name = "my-system"
version = "1.0"

[http]
reset = "http://localhost:8080/reset"
ingest = "http://localhost:8080/ingest"
retrieve = "http://localhost:8080/retrieve"
```
### Option 3: Subprocess adapter
```toml
# my-cli-system.toml
# (key names below are illustrative)
name = "my-cli-system"
version = "1.0"

[subprocess]
reset = ["my-system", "reset"]
ingest = ["my-system", "ingest"]
retrieve = ["my-system", "retrieve"]
```
## CLI Reference
| Command | Description |
|---|---|
| `recallbench datasets` | List available datasets |
| `recallbench download` | Download a dataset |
| `recallbench run` | Run benchmark |
| `recallbench compare` | Compare multiple systems |
| `recallbench report` | Generate report from results |
| `recallbench stats` | Show dataset statistics |
| `recallbench validate` | Validate custom dataset |
| `recallbench calibrate` | Run judge calibration |
| `recallbench failures` | Export failure analysis |
| `recallbench longevity` | Run longitudinal degradation test |
| `recallbench serve` | Launch web UI |
## Configuration
Create `recallbench.toml` in your project directory:
```toml
# (key names below are illustrative; check the docs for exact names)
[benchmark]
concurrency = 10
max_tokens = 16384
model = "claude-sonnet"
judge_model = "claude-sonnet"
output_dir = "results"
quick_sample_size = 50
```
## License
MIT OR Apache-2.0