§meme
Long-term memory for AI agents.
A Rust implementation of a production-grade memory pipeline:
- Semantic Structured Compression — dialogues → compact memory entries
- Lifecycle Reconciliation — LLM-driven ADD/UPDATE/DELETE/NOOP
- Intent-Aware Retrieval Planning — multi-view hybrid retrieval
Memory persists across sessions — the vector store is kept on disk.
§Quick Start
```rust
use meme::{Meme, MemeBuilder};

let meme = MemeBuilder::new()
    .api_key("sk-...")
    .model("gpt-4.1-mini")
    .build()
    .await?;

// Dialogue-based ingestion
meme.add_dialogue("Alice", "Let's meet at 2pm tomorrow", None).await?;
meme.finalize().await?;

// Direct fact ingestion (skips dialogue windowing)
meme.add("Alice prefers coffee over tea").await?;

// Search and question answering
let results = meme.search("Alice meeting").await?;
let answer = meme.ask("When will Alice meet?").await?;
```

Modules§
- config
- Configuration system with TOML file + environment variable support.
- embedding
- Embedding model abstraction — unified interface for API and local ONNX providers.
- error
- Unified error types for the meme library.
- http
- Shared HTTP client with production-ready defaults.
- llm
- LLM client abstraction — OpenAI-compatible async interface.
- model
- Data models for the meme memory system.
- pipeline
- Three-stage memory pipeline: compression, synthesis, and retrieval.
- store
- Storage layer — LanceDB vector store with multi-view indexing and history tracking.
Structs§
- Meme
- The main entry point for the meme memory system.
- MemeBuilder
- Builder for constructing a Meme instance.