Modules
- adaptive
- adaptive_thresholds
- agents
- attention_model - Heuristic attention prediction model for LLM context optimization.
- benchmark
- buddy
- cache
- call_graph
- cli_cache
- codebook
- compressor
- config
- deep_queries - Tree-sitter deep queries for extracting imports, call sites, and type definitions.
- deps
- embedding_index - Persistent, incremental embedding index.
- embeddings - Embedding engine for semantic code search.
- entropy
- error
- events
- feedback
- filters
- gotcha_tracker
- graph_index
- hybrid_search - Hybrid search combining BM25 (lexical) with dense vector search.
- index_orchestrator
- intent_engine
- knowledge
- litm
- loop_detection
- mode_predictor
- neural - Neural context compression: trained models replacing heuristic filters.
- patterns
- preservation
- protocol
- quality
- route_extractor
- sandbox
- semantic_cache
- semantic_chunks - Semantic chunking with attention bridges.
- session
- signatures
- signatures_ts
- slow_log
- stats
- surprise - Predictive surprise scoring: conditional entropy relative to LLM knowledge.
- symbol_map
- task_briefing
- task_relevance
- theme
- tokens
- updater
- vector_index
- version_check
- wrapped
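The `hybrid_search` module combines BM25 lexical rankings with dense vector rankings. One common way to merge two ranked lists is Reciprocal Rank Fusion (RRF); the sketch below is an illustrative implementation of that general technique, not the crate's actual API (the function name `rrf_fuse` and the use of `&str` document IDs are assumptions for the example).

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion: merge two ranked result lists into one.
/// Each document scores 1 / (k + rank) per list it appears in;
/// `k` dampens the dominance of top ranks (60 is a conventional default).
fn rrf_fuse(lexical: &[&str], dense: &[&str], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for list in [lexical, dense] {
        for (rank, id) in list.iter().enumerate() {
            // Ranks are 0-based here, so add 1 to match the usual formula.
            *scores.entry((*id).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut ranked: Vec<(String, f64)> = scores.into_iter().collect();
    // Sort by fused score, highest first.
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked
}

fn main() {
    let bm25_hits = ["a.rs", "b.rs", "c.rs"]; // lexical ranking
    let dense_hits = ["b.rs", "d.rs", "a.rs"]; // vector ranking
    let fused = rrf_fuse(&bm25_hits, &dense_hits, 60.0);
    // Documents present in both lists ("a.rs", "b.rs") outrank
    // documents present in only one ("c.rs", "d.rs").
    println!("{:?}", fused);
}
```

The appeal of RRF over weighted score sums is that it needs no score normalization: BM25 scores and cosine similarities live on incomparable scales, but ranks are always comparable.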
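The `surprise` module is described as scoring content by conditional entropy relative to LLM knowledge. The information-theoretic core of any such scheme is self-information, -log2 p: content the model assigns high probability (boilerplate the LLM already "knows") carries few bits, while improbable, project-specific content carries many. The sketch below shows only that core quantity; the function name and probabilities are illustrative assumptions, not the crate's implementation.

```rust
/// Self-information ("surprise") of an event with probability `p`, in bits.
/// Under a model of what the LLM already knows, high-probability content
/// scores low and rare, project-specific content scores high.
fn surprise_bits(p: f64) -> f64 {
    assert!(p > 0.0 && p <= 1.0, "probability must be in (0, 1]");
    -p.log2()
}

fn main() {
    // A token the model finds likely carries little information:
    println!("common: {:.1} bits", surprise_bits(0.5)); // 1.0 bits
    // A rare project-specific identifier carries much more:
    println!("rare:   {:.1} bits", surprise_bits(1.0 / 1024.0)); // 10.0 bits
}
```

Averaging this quantity over tokens, conditioned on the model's predictive distribution, yields the conditional entropy the module description refers to.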