§Halldyll Memory Model
A multi-user, multi-model memory system for distributed AI agents. Designed for cloud deployment with a PostgreSQL + Redis backend.
§Features
- Multi-user isolation: Complete user separation with RBAC (see the sketch after this list)
- Multi-model support: Manage memory across different AI models
- Distributed memory: PostgreSQL for persistence, Redis for caching
- Vector search: Semantic search with embedding similarity
- Conversation management: Full conversation history with context
- Fact extraction: Learn and store facts with confidence scoring
- Image metadata: Track generated images and their parameters
- Cloud-ready: Designed for Kubernetes/RunPods deployment
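A rough sketch of how the multi-user isolation and search features combine, reusing only the calls shown in the Quick Start further down. Note that store_memory and search are commented out there, so their exact signatures are an assumption rather than a verified API:
use halldyll_memory_model::{Config, MemoryEngine};
use halldyll_memory_model::models::{Conversation, Memory, Message, Role};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let engine = MemoryEngine::new(Config::default()).await?;
    engine.initialize().await?;

    // Each user writes into an isolated memory space, keyed by user ID.
    for user_id in ["user-alice", "user-bob"] {
        let mut conversation = Conversation::new("Greeting");
        conversation.add_message(Message::new(Role::User, "Hello from my own space!"));
        // Assumed signature, mirroring the commented call in the Quick Start:
        // store_memory(user_id, memory) scopes the write to that user.
        engine.store_memory(user_id, Memory::Conversation(conversation)).await?;
    }

    // Searches are likewise scoped: this query only sees user-alice's memories.
    let _alice_hits = engine.search("user-alice", "hello", 10).await?;

    Ok(())
}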
§Architecture
├── core : Types, errors, configuration
├── models : Database models and entities
├── storage : PostgreSQL + Redis persistence layer
├── embedding : Vector generation and search
├── ingest : Data ingestion and processing
├── retrieval : Semantic search and ranking
├── context : Prompt context building
├── user : User management and isolation
├── engine : Main orchestration layer
└── utils : Utilities and helpers
§Quick Start
use halldyll_memory_model::{Config, MemoryEngine};
use halldyll_memory_model::models::{Conversation, Message, Role, Memory};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create engine with default config
    let config = Config::default();
    let engine = MemoryEngine::new(config).await?;
    engine.initialize().await?;

    // Store and search memories
    let mut conversation = Conversation::new("Hello");
    conversation.add_message(Message::new(Role::User, "Hello!"));
    // engine.store_memory("user-123", Memory::Conversation(conversation)).await?;
    // let results = engine.search("user-123", "hello", 10).await?;

    Ok(())
}
Re-exports§
pub use core::Config;
pub use core::MemoryError;
pub use core::MemoryId;
pub use core::MemoryResult;
pub use core::MemoryType;
pub use engine::HealthStatus;
pub use engine::MemoryEngine;
pub use context::ContextBuilder;
pub use context::ContextConfig;
pub use context::TokenEstimationMethod;
pub use embedding::EmbeddingGenerator;
pub use ingest::MemoryProcessor;
pub use ingest::ProcessingResult;
pub use ingest::ProcessorConfig;
pub use retrieval::MemorySearcher;
pub use retrieval::SearchOptions;
pub use retrieval::SearchResult;
pub use storage::PostgresStorage;
pub use storage::RedisCache;
pub use storage::StoragePool;
pub use user::Permission;
pub use user::PermissionChecker;
pub use user::UserManager;
pub use models::Conversation;
pub use models::Fact;
pub use models::ImageMetadata;
pub use models::Memory;
pub use models::Message;
pub use models::Role;
pub use models::TranscriptionMetadata;
pub use models::UserModel;
pub use models::UserRole;
pub use utils::Validator;
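Because these items are re-exported at the crate root, deep module paths are optional. For example, using only names that appear in the list above:
// Both paths name the same type, thanks to the crate-root re-exports.
use halldyll_memory_model::MemoryEngine;                 // via the re-export
use halldyll_memory_model::engine::MemoryEngine as Same; // via the defining module

// The two imports refer to one type, so this function is trivially correct.
fn assert_same_type(engine: MemoryEngine) -> Same {
    engine
}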
Modules§
- context - Prompt context building with retrieved memories
- core - Core module with fundamental types and error handling
- embedding - Embedding generation with ONNX Runtime
- engine - Main orchestration layer for the memory system
- ingest - Data ingestion and processing
- models - Database models and entities
- prelude - Prelude module for convenient imports
- retrieval - Memory search and retrieval
- storage - Storage layer for PostgreSQL and Redis persistence
- user - User management and authentication
- utils - Utility functions and helpers
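The prelude module above is documented only as "convenient imports"; its exact contents are not listed here, so the sketch below assumes it re-exports at least MemoryEngine and Config. Check the prelude module docs before relying on it.
// Assumption: the prelude re-exports common types such as MemoryEngine and Config.
use halldyll_memory_model::prelude::*;

async fn build_engine() -> Result<MemoryEngine, Box<dyn std::error::Error>> {
    let engine = MemoryEngine::new(Config::default()).await?;
    engine.initialize().await?;
    Ok(engine)
}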
Constants§
- AGENT_CREATOR - Creator
- AGENT_NAME - Agent name
- VERSION - Library version
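These constants are handy for banner or diagnostic output; a minimal sketch, assuming they are string-like values:
use halldyll_memory_model::{AGENT_CREATOR, AGENT_NAME, VERSION};

fn main() {
    // Assumption: the constants are string-like (e.g. &str), as is usual for
    // name/creator/version metadata; check the constant docs for exact types.
    println!("{} v{} (created by {})", AGENT_NAME, VERSION, AGENT_CREATOR);
}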