# ThoughtChain - Autonomous Thought Tracking and Knowledge Management

Persistent reasoning, decision tracking, and dynamic knowledge management for AI systems.
## Features

### 🧠 Autonomous Reasoning (ThoughtChain)

Track the AI system's internal reasoning, decisions, and observations across sessions.

- Session-based organization: Group thoughts by user session/project
- Semantic search: Find similar thoughts using vector embeddings
- 6 thought types: Reasoning, Decision, Reflection, Observation, Question, Milestone
- Optional opcode storage: Binary decision encoding for compression
- Full-text search: Fast keyword search via SQLite FTS5
### 🔍 Conversation Search

Temporal-aware conversation history retrieval with intelligent query detection.

- Temporal context detection: "What did I first ask?", "What did we discuss earlier?"
- Keyword classification: Automatically detect conversation vs. knowledge queries
- Semantic + chronological search: Combine embeddings with temporal ordering
- Conversation formatting: Export history for context injection
### 📚 Dynamic Knowledge Base (Engrams)

Hot-reloadable knowledge archives with filesystem watching.

- Auto-loading: Watch a directory and load new `.eng` or `.db` files automatically
- Multi-engram search: Query across all loaded knowledge bases simultaneously
- CML format: Structured content with semantic embeddings
- Thread-safe: Concurrent queries with mutex-protected stores
## Installation

Add to your `Cargo.toml`:

```toml
[dependencies]
frame-thoughtchain = "0.1.0"
```
## Dependency Architecture

`frame-thoughtchain` depends on:

```
frame-thoughtchain
└── frame-catalog (vector search, embeddings, database)
```

Used by: Frame core for autonomous reasoning.

Position in the Frame ecosystem:

```
frame-catalog
└→ frame-thoughtchain
```
## Quick Start

### 1. Autonomous Thought Tracking

Method names below follow the API reference; argument shapes, paths, and literals are illustrative.

```rust
use frame_thoughtchain::{ThoughtChainStore, ThoughtType};
use sam_memory::Database;
use uuid::Uuid;

// Create database and initialize schema
let db = Database::new("thoughts.db")?; // path is illustrative
let store = ThoughtChainStore::new(&db);
store.initialize_schema()?;

// Create a session
let session_id = Uuid::new_v4();
store.create_session(session_id)?;

// Log a thought with its embedding
let thought_id = store
    .log_thought(
        session_id,
        ThoughtType::Reasoning,
        "User prefers SQLite for portability",
        &embedding,
    )
    .await?;

// Retrieve recent thoughts
let thoughts = store.get_session_thoughts(session_id)?;
for thought in thoughts {
    println!("{}: {}", thought.timestamp, thought.content); // field names illustrative
}

// Search by semantic similarity
let query_embedding = embedder.generate("database choice")?;
let results = store.search_thoughts(&query_embedding, 10)?;
for (thought, score) in results {
    println!("{score:.2} {}", thought.content);
}
```
### 2. Conversation Search

```rust
use frame_thoughtchain::is_conversation_query; // import path illustrative

let query = "What did I first ask about?";
if is_conversation_query(query) {
    // Route to conversation-history search (temporal + semantic)
} else {
    // Route to knowledge-base search
}
```
### 3. Dynamic Engram Loading

```rust
use frame_thoughtchain::EngramRegistry;

// Create registry with filesystem watching
let registry = EngramRegistry::new("engrams/", true)?; // (directory, watch)
println!("Loaded engrams: {:?}", registry.list_engrams());
println!("Total chunks: {}", registry.total_chunks());

// Search across all engrams
let query_embedding = embedder.generate("topic of interest")?;
let results = registry.search_all(&query_embedding, 5)?;
for result in results {
    println!("{result:?}");
}

// Add a new .eng file to the directory → automatically loaded!
```
## API Reference

### ThoughtChainStore

Core Methods:

- `new(database: &Database)` - Create store
- `initialize_schema()` - Create tables and indices
- `create_session()` - Start a new session
- `end_session()` - Mark a session complete
- `log_thought()` - Store a thought with its embedding
- `get_session_thoughts()` - Retrieve thoughts for a session
- `get_recent_thoughts()` - Get recent thoughts across all sessions
- `search_thoughts()` - Semantic similarity search

Migration:

- `migrate_add_opcode_column()` - Add opcode support to an existing DB
### ThoughtType Enum
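The six variants named in the feature list suggest an enum roughly like the following sketch; the derives and string mapping here are assumptions, not the crate's actual definition.

```rust
/// Sketch of the six thought types named under Features;
/// derives and the string mapping are illustrative.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ThoughtType {
    Reasoning,
    Decision,
    Reflection,
    Observation,
    Question,
    Milestone,
}

impl ThoughtType {
    /// Stable lowercase name, handy for a `thought_type` column.
    pub fn as_str(self) -> &'static str {
        match self {
            ThoughtType::Reasoning => "reasoning",
            ThoughtType::Decision => "decision",
            ThoughtType::Reflection => "reflection",
            ThoughtType::Observation => "observation",
            ThoughtType::Question => "question",
            ThoughtType::Milestone => "milestone",
        }
    }
}

fn main() {
    assert_eq!(ThoughtType::Milestone.as_str(), "milestone");
}
```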
### ThoughtEntry Struct
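A plausible shape for a stored entry, inferred from the Database Schema section; every field name and type below is an assumption (the real crate uses `Uuid`, `chrono` timestamps, and `ThoughtType` rather than plain strings).

```rust
/// Hypothetical sketch of a stored thought; field names and types
/// are inferred from the schema section, not taken from the crate.
#[derive(Debug, Clone)]
pub struct ThoughtEntry {
    pub id: String,              // UUID, stored as text
    pub session_id: String,      // owning session's UUID
    pub thought_type: String,    // "reasoning", "decision", ...
    pub content: String,         // the thought text
    pub timestamp: String,       // RFC 3339 in this sketch
    pub opcode: Option<Vec<u8>>, // optional binary decision encoding
}

fn main() {
    let entry = ThoughtEntry {
        id: "c0ffee".into(),
        session_id: "s-1".into(),
        thought_type: "decision".into(),
        content: "Recommended SQLite for portability".into(),
        timestamp: "2024-01-01T00:00:00Z".into(),
        opcode: None,
    };
    assert!(entry.opcode.is_none());
    println!("{}", entry.content);
}
```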
### Conversation Search Functions

- `is_conversation_query(query: &str) -> bool` - Detect conversation references
- `detect_temporal_context(query: &str) -> TemporalContext` - Extract time context
- `search_with_temporal_context()` - Combined temporal + semantic search
- `get_first_user_message()` - Get conversation start
- `get_recent_messages()` - Get last N messages
- `format_conversation_context()` - Export for context injection
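The keyword classification behind `is_conversation_query` presumably keys off phrases like the temporal examples under Features; this standalone sketch uses an illustrative marker list, not the crate's actual heuristic.

```rust
/// Naive sketch of conversation-vs-knowledge query detection;
/// the marker list is illustrative only.
fn looks_like_conversation_query(query: &str) -> bool {
    const MARKERS: [&str; 6] = [
        "what did i", "what did we", "earlier",
        "first ask", "we discussed", "you said",
    ];
    let q = query.to_lowercase();
    MARKERS.iter().any(|m| q.contains(m))
}

fn main() {
    assert!(looks_like_conversation_query("What did I first ask?"));
    assert!(!looks_like_conversation_query("How does FTS5 work?"));
}
```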
### EngramRegistry

Setup:

- `new(directory, watch)` - Create registry with optional watching
- `load_all()` - Manually reload all engrams

Querying:

- `search_all(embedding, limit)` - Query all engrams
- `list_engrams()` - Get loaded engram IDs
- `get_engram_info(id)` - Get metadata for a specific engram
- `total_chunks()` - Total chunks across all engrams
## Use Cases

### 1. AI Assistant Long-Term Memory

Track reasoning across sessions to maintain consistency:

```rust
// Session 1: User asks about database choice
store
    .log_thought(
        session_id,
        ThoughtType::Decision,
        "Recommended SQLite for portability and zero-config deployment",
        &embedding,
    )
    .await?;

// Session 2: User asks why we chose SQLite
let query_embedding = embedder.generate("why did we choose SQLite")?;
let thoughts = store.search_thoughts(&query_embedding, 5)?;
// Returns: "Recommended SQLite for portability..."
```
### 2. Conversation Context Retrieval

Handle temporal queries naturally:

```rust
// "What was the first thing I asked?"
let first_msg = get_first_user_message(&db, session_id)?; // argument shapes illustrative

// "What did we talk about earlier?"
let context = search_with_temporal_context(&db, session_id, &query_embedding)?;
```
### 3. Hot-Reloadable Knowledge Base

Update knowledge without restarting:

```bash
# Application running with engram registry watching "engrams/"
cp new-knowledge.eng engrams/   # filename is illustrative
# → Automatically detected and loaded
# → Immediately available for queries
```
### 4. Milestone Tracking

Track system achievements:

```rust
store
    .log_thought(
        session_id,
        ThoughtType::Milestone,
        "Completed initial knowledge-base migration", // message text illustrative
        &embedding,
    )
    .await?;
```
## Database Schema

ThoughtChain uses SQLite with the following tables:

- `sessions` - User sessions with start/end times
- `frame-thoughtchain` - Thought entries with full metadata
- `frame-thoughtchain_embeddings` - Vector embeddings for semantic search
- `frame-thoughtchain_fts` - FTS5 virtual table for keyword search

Indices on `(session_id, timestamp)` and `(thought_type, timestamp)`.
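Assembled from the table and index names above, the DDL presumably looks something like this sketch; the column sets are assumptions, and the hyphenated table names require quoting in SQLite.

```sql
-- Hedged sketch of the schema implied above; exact columns are assumptions.
CREATE TABLE IF NOT EXISTS sessions (
    id         TEXT PRIMARY KEY,   -- UUID
    started_at TEXT NOT NULL,
    ended_at   TEXT
);

CREATE TABLE IF NOT EXISTS "frame-thoughtchain" (
    id           TEXT PRIMARY KEY, -- UUID
    session_id   TEXT NOT NULL REFERENCES sessions(id),
    thought_type TEXT NOT NULL,    -- reasoning | decision | ...
    content      TEXT NOT NULL,
    timestamp    TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS "frame-thoughtchain_embeddings" (
    thought_id TEXT PRIMARY KEY REFERENCES "frame-thoughtchain"(id),
    embedding  BLOB NOT NULL       -- e.g. 384 × f32 = 1536 bytes
);

CREATE VIRTUAL TABLE IF NOT EXISTS "frame-thoughtchain_fts"
    USING fts5(content);

CREATE INDEX IF NOT EXISTS idx_session_time
    ON "frame-thoughtchain"(session_id, timestamp);
CREATE INDEX IF NOT EXISTS idx_type_time
    ON "frame-thoughtchain"(thought_type, timestamp);
```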
## Performance

- Thought logging: ~10-50 ms (includes embedding generation)
- Semantic search: ~5-20 ms for 1,000 thoughts
- FTS5 keyword search: <1 ms
- Engram hot-reload: ~100-500 ms depending on file size
- Memory overhead: ~1 KB per thought, plus embeddings (~1.5 KB for 384-dim vectors)
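The per-embedding figure follows directly from storing each dimension as an `f32`:

```rust
fn main() {
    // 384 dimensions × 4 bytes per f32
    let bytes = 384 * std::mem::size_of::<f32>();
    assert_eq!(bytes, 1536); // ≈ 1.5 KB, matching the figure above
    println!("{bytes}");
}
```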
## Dependencies

Core:

- `rusqlite` (0.31) - SQLite with FTS5
- `serde`, `serde_json` - Serialization
- `chrono` - Timestamps
- `uuid` - Unique identifiers

Engram support:

- `notify` (6.0) - Filesystem watching
- `cml` - Content Markup Language with embeddings

Temporary (until sam-vector extraction):

- `sam-memory` - Database and embedding generator traits
## Future Work

- Extract `sam-vector` crate (removes `sam-memory` dependency)
- Opcode compression support (requires `sam-opcode` crate)
- Multi-user support with access controls
- Thought relationship graph (parent/child thoughts)
- Export thoughts as Markdown/JSON
- Thought templates for common patterns
## Compatibility

- Rust Edition: 2021
- MSRV: 1.70+
- Platforms: All (platform-independent SQLite)
## History

Extracted from the Frame project, where it provides persistent reasoning and knowledge management for the AI assistant.

## License

MIT - See LICENSE for details.

## Author

Magnus Trent <magnus@blackfall.dev>