Modules
- adaptive
- adaptive_thresholds
- agents
- attention_model - Heuristic attention prediction model for LLM context optimization.
- benchmark
- buddy
- cache
- chunks_ts - Tree-sitter AST-aware code chunking for semantic search.
- cli_cache
- codebook
- compressor
- config
- deps
- embedding_index - Persistent, incremental embedding index.
- embeddings - Embedding engine for semantic code search.
- entropy
- error
- eval - Downstream task evaluation framework for search quality.
- feedback
- filters
- gotcha_tracker
- graph_index
- hybrid_search - Hybrid search combining BM25 (lexical) with dense vector search.
- intent_engine
- knowledge
- litm
- loop_detection
- mode_predictor
- neural - Neural context compression - trained models replacing heuristic filters.
- patterns
- preservation
- protocol
- provider_cache - Provider caching awareness - helps LLM providers cache repeated context.
- quality
- sandbox
- semantic_cache
- semantic_chunks - Semantic Chunking with Attention Bridges.
- session
- signatures
- signatures_ts
- slow_log
- stats
- surprise - Predictive Surprise Scoring - conditional entropy relative to LLM knowledge.
- symbol_map
- task_briefing
- task_relevance
- telemetry - Telemetry and metrics collection following OpenTelemetry GenAI conventions.
- theme
- tokens
- updater
- vector_index
- version_check
- watcher - File watcher for automatic incremental re-indexing.
- wrapped