Memory and RAG (Retrieval-Augmented Generation) System
Provides persistent context, knowledge retrieval, and semantic search for agents.
Features
- Vector Store: Semantic similarity search using embeddings
- Memory Types: Short-term, long-term, episodic, and semantic memory
- Knowledge Base: Structured document storage with chunking
- Context Window: Smart context management for LLM prompts
- Caching: LRU caching of frequently accessed information
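Semantic similarity search compares embedding vectors with a similarity metric (see `SimilarityMetric` below). As a concept sketch only, here is cosine similarity over plain `f32` slices; the function name and signature are illustrative assumptions, not this crate's API:

```rust
/// Cosine similarity between two equal-length embedding vectors.
/// Illustrative sketch only; the crate's `SimilarityMetric` may differ.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len(), "embeddings must have the same dimension");
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        return 0.0; // define similarity with a zero vector as 0
    }
    dot / (norm_a * norm_b)
}

fn main() {
    let query = [0.1_f32, 0.9, 0.0];
    let doc = [0.2_f32, 0.8, 0.1];
    println!("similarity = {:.3}", cosine_similarity(&query, &doc));
}
```

Values near 1.0 indicate semantically similar embeddings; a vector store typically ranks stored entries by this score against the query embedding.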
Structs
- AgentCache - Simple LRU cache for agent computations
- CacheEntry - Cache entry
- ChunkingConfig - Configuration for document chunking
- ContextSegment - A segment of context
- ContextWindow - Manages context for LLM prompts
- Document - Document for the knowledge base
- DocumentChunk - A chunk of a document
- KnowledgeBase - Knowledge base for RAG
- MemoryConfig - Configuration for the memory manager
- MemoryEntry - Memory entry representing a piece of stored knowledge
- MemoryManager - Unified memory manager for agents
- MemoryStats - Memory statistics
- OpenAIEmbedding - OpenAI embedding provider
- SearchResult - Search result from vector store
- VectorStore - Vector store for semantic search
- VectorStoreConfig - Configuration for the vector store
- VectorStoreStats - Vector store statistics
Enums
- ChunkingStrategy - Chunking strategies
- ContextSegmentType - Types of context segments
- DocumentType - Document types
- EmbeddingModel - Embedding model options
- MemoryError - Memory system errors
- MemorySource - Source of memory entry
- MemoryType - Types of memory
- SimilarityMetric - Similarity metrics for vector search
Traits
- EmbeddingProvider - Trait for embedding providers
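A simplified, synchronous sketch of what an embedding-provider trait can look like, with a deterministic mock implementation for testing. The trait signature here is an assumption (the crate's `EmbeddingProvider` is likely async and fallible), and the mock merely hashes characters into a fixed-size vector instead of calling a real model such as `OpenAIEmbedding`:

```rust
/// Hypothetical, simplified stand-in for the crate's
/// EmbeddingProvider trait (the real one is likely async).
trait EmbeddingProvider {
    fn embed(&self, text: &str) -> Vec<f32>;
    fn dimensions(&self) -> usize;
}

/// Deterministic mock provider: folds character codes into a
/// fixed-size vector. Useful for tests; not semantically meaningful.
struct MockEmbedding {
    dims: usize,
}

impl EmbeddingProvider for MockEmbedding {
    fn embed(&self, text: &str) -> Vec<f32> {
        let mut v = vec![0.0_f32; self.dims];
        for (i, c) in text.chars().enumerate() {
            v[i % self.dims] += c as u32 as f32;
        }
        v
    }

    fn dimensions(&self) -> usize {
        self.dims
    }
}

fn main() {
    let provider = MockEmbedding { dims: 4 };
    let embedding = provider.embed("hello");
    println!("{}-dim embedding: {embedding:?}", provider.dimensions());
}
```

Abstracting the provider behind a trait lets a vector store or memory manager swap the remote model for a local or mock implementation without code changes elsewhere.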