# Changelog
All notable changes to the Halldyll Memory Model project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [0.2.0] - 2026-01-20
### Added
- Multi-user, multi-model architecture (complete refactor of the 0.1.0 design)
- PostgreSQL backend with pgvector for vector search
- Redis caching layer for performance optimization
- User management module with role-based access control (RBAC)
- Permission checking system for authorization (see the sketch after this list)
- Input validation utilities
- Context builder for prompt construction
- Docker Compose setup for local development
- Database migrations for the new PostgreSQL schema
- Support for Kubernetes and cloud deployments
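A minimal sketch of how the new RBAC permission check might look; the `Role`, `Permission`, and `check_permission` names below are illustrative assumptions, not the crate's actual types or module layout:

```rust
use std::collections::HashSet;

/// Hypothetical roles used for role-based access control.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum Role {
    Admin,
    User,
    ReadOnly,
}

/// Hypothetical fine-grained permissions checked before a memory operation.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum Permission {
    ReadMemory,
    WriteMemory,
    ManageUsers,
}

/// Map a role to the set of permissions it grants.
fn permissions_for(role: Role) -> HashSet<Permission> {
    use Permission::*;
    match role {
        Role::Admin => [ReadMemory, WriteMemory, ManageUsers].into_iter().collect(),
        Role::User => [ReadMemory, WriteMemory].into_iter().collect(),
        Role::ReadOnly => [ReadMemory].into_iter().collect(),
    }
}

/// Return an error if the role does not grant the required permission.
pub fn check_permission(role: Role, required: Permission) -> Result<(), String> {
    if permissions_for(role).contains(&required) {
        Ok(())
    } else {
        Err(format!("role {role:?} lacks permission {required:?}"))
    }
}

fn main() {
    // A read-only user may read memories but not write them.
    assert!(check_permission(Role::ReadOnly, Permission::ReadMemory).is_ok());
    assert!(check_permission(Role::ReadOnly, Permission::WriteMemory).is_err());
}
```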
### Changed
- Migrated from SQLite to PostgreSQL
- Replaced the in-process LRU cache with Redis integration
- Restructured module hierarchy for better organization
- Updated error handling with new error types (see the sketch after this list)
- Refactored the storage layer for a distributed architecture
- Updated configuration for cloud deployments (PostgreSQL and Redis connection URLs)
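As a rough illustration of the new error types, a unified error enum is sketched below; the variant names and the use of the `thiserror` crate are assumptions, not the crate's actual definitions:

```rust
use thiserror::Error;

/// Illustrative top-level error type for the refactored storage layer.
#[derive(Debug, Error)]
pub enum MemoryError {
    /// Wraps failures from the PostgreSQL backend.
    #[error("database error: {0}")]
    Database(String),

    /// Wraps failures from the Redis caching layer.
    #[error("cache error: {0}")]
    Cache(String),

    /// Returned when a permission check fails.
    #[error("user {user_id} is not authorized to {action}")]
    Unauthorized { user_id: String, action: String },

    /// Returned when input validation rejects user data.
    #[error("invalid input: {0}")]
    InvalidInput(String),
}

fn main() {
    // Errors render as readable messages via the derived Display impl.
    let e = MemoryError::InvalidInput("empty content".into());
    println!("{e}");
}
```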
### Security
- Added user isolation with complete data separation
- Implemented audit logging for compliance
- Added input validation for all user data
- Adopted parameterized queries to prevent SQL injection (see the sketch below)
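The sketch below shows what a parameterized query against the PostgreSQL backend might look like; the use of `sqlx` and the `memories` table and column names are assumptions made for illustration only:

```rust
use sqlx::postgres::PgPool;
use sqlx::Row;

/// Fetch memory contents for one user (requires a running PostgreSQL instance).
/// User-supplied values are passed as bind parameters ($1, $2), never
/// interpolated into the SQL string, which is what prevents SQL injection.
async fn fetch_memories(
    pool: &PgPool,
    user_id: &str,
    limit: i64,
) -> Result<Vec<String>, sqlx::Error> {
    let rows = sqlx::query(
        "SELECT content FROM memories WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2",
    )
    .bind(user_id)
    .bind(limit)
    .fetch_all(pool)
    .await?;
    Ok(rows.iter().map(|r| r.get::<String, _>("content")).collect())
}
```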
### Migration Guide from 0.1.0
1. Update Cargo.toml dependencies
2. Run database migrations using PostgreSQL
3. Update configuration to use the PostgreSQL and Redis connection URLs (see the example after this list)
4. Update memory storage calls to include the user_id parameter
5. Review permission checks when accessing other users' data
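For step 3, a minimal sketch of loading the new connection settings is shown below; the `DATABASE_URL`/`REDIS_URL` variable names and the `StorageConfig` struct are illustrative assumptions, not necessarily the keys the crate reads:

```rust
use std::env;

#[derive(Debug)]
struct StorageConfig {
    /// e.g. postgres://halldyll:secret@localhost:5432/halldyll
    database_url: String,
    /// e.g. redis://localhost:6379
    redis_url: String,
}

impl StorageConfig {
    /// Read both connection URLs from the environment.
    fn from_env() -> Result<Self, env::VarError> {
        Ok(Self {
            database_url: env::var("DATABASE_URL")?,
            redis_url: env::var("REDIS_URL")?,
        })
    }
}

fn main() {
    match StorageConfig::from_env() {
        Ok(cfg) => println!("loaded config: {cfg:?}"),
        Err(e) => eprintln!("missing configuration: {e}"),
    }
}
```

For step 4, a call that previously looked like `store.save_memory(conversation_id, content)` would now take a user scope first, e.g. `store.save_memory(user_id, conversation_id, content)` (method names illustrative).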
## [0.1.0] - 2025-12-15
### Added
- Initial memory system with SQLite backend
- Vector search with ONNX embeddings
- Support for conversations, facts, images, transcriptions
- Basic in-memory user session management
- Embedding generator and tokenizer
- Simple memory processor for ingestion
- Memory searcher with ranking
- Prompt building with context
### Features
- Conversation history management
- Fact extraction and storage
- Image generation metadata tracking
- Audio transcription support
- Session summaries
- Vector similarity search (see the sketch after this list)
- LRU caching
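The sketch below illustrates the ranking idea behind vector similarity search using cosine similarity; it is purely illustrative and not the project's actual searcher, which ran over the SQLite-backed store with ONNX embeddings:

```rust
/// Cosine similarity between two embedding vectors of equal length.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

/// Rank stored embeddings by similarity to a query embedding, highest first.
fn rank_by_similarity<'a>(
    query: &[f32],
    candidates: &'a [(String, Vec<f32>)],
) -> Vec<(&'a str, f32)> {
    let mut scored: Vec<(&str, f32)> = candidates
        .iter()
        .map(|(id, emb)| (id.as_str(), cosine_similarity(query, emb)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(std::cmp::Ordering::Equal));
    scored
}

fn main() {
    let query = vec![1.0, 0.0];
    let candidates = vec![
        ("fact-1".to_string(), vec![0.9, 0.1]),
        ("fact-2".to_string(), vec![0.0, 1.0]),
    ];
    // "fact-1" should rank above "fact-2" for this query.
    for (id, score) in rank_by_similarity(&query, &candidates) {
        println!("{id}: {score:.3}");
    }
}
```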
---
## Future Roadmap
- [ ] GraphQL API layer for flexible querying
- [ ] WebSocket support for real-time updates
- [ ] Advanced fact extraction with LLM
- [ ] Multi-language embedding support
- [ ] Hierarchical memory compression
- [ ] Import/Export functionality
- [ ] Web dashboard for memory visualization
- [ ] Model-specific memory scoping improvements
- [ ] Distributed tracing with OpenTelemetry
- [ ] Sharding support for horizontal scaling