
Module core


Modules

adaptive
adaptive_thresholds
agents
attention_model
Heuristic attention prediction model for LLM context optimization.
benchmark
buddy
cache
chunks_ts
Tree-sitter AST-aware code chunking for semantic search.
cli_cache
codebook
compressor
config
deps
embedding_index
Persistent, incremental embedding index.
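A persistent, incremental index generally means embeddings are cached per file and recomputed only when the file's content changes. The sketch below illustrates that idea with a content-hash check; the struct, method names, and hash choice are illustrative assumptions, not the module's actual API.

```rust
use std::collections::HashMap;

/// Illustrative sketch: embeddings cached per file, keyed by content hash.
pub struct EmbeddingIndexSketch {
    // file path -> (content hash, embedding vector)
    entries: HashMap<String, (u64, Vec<f32>)>,
}

impl EmbeddingIndexSketch {
    pub fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    /// Returns true if the file was (re)embedded, false if the cached
    /// vector was still valid. `embed` stands in for the real model call.
    pub fn upsert<F>(&mut self, path: &str, content: &str, embed: F) -> bool
    where
        F: Fn(&str) -> Vec<f32>,
    {
        let hash = fnv1a(content.as_bytes());
        match self.entries.get(path) {
            Some((h, _)) if *h == hash => false, // unchanged: skip re-embedding
            _ => {
                self.entries.insert(path.to_string(), (hash, embed(content)));
                true
            }
        }
    }
}

/// Tiny stable hash (FNV-1a) so the sketch has no external dependencies.
fn fnv1a(bytes: &[u8]) -> u64 {
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in bytes {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}
```

The hash check is what makes re-indexing incremental: an unchanged file costs one hash, not one model call.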
embeddings
Embedding engine for semantic code search.
entropy
error
eval
Downstream task evaluation framework for search quality.
feedback
filters
gotcha_tracker
graph_index
hybrid_search
Hybrid search combining BM25 (lexical) with dense vector search.
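One common way to merge a lexical (BM25) ranking with a dense-vector ranking is Reciprocal Rank Fusion; the module's actual fusion strategy may differ, so treat this as an illustrative sketch rather than its implementation.

```rust
use std::collections::HashMap;

/// Reciprocal Rank Fusion over two rankings (doc ids, best match first).
/// Each appearance contributes 1 / (k + rank + 1), so documents near the
/// top of either list are rewarded; k (often 60) damps the top ranks.
pub fn rrf_fuse(bm25: &[&str], dense: &[&str], k: f64) -> Vec<String> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for ranking in [bm25, dense] {
        for (rank, id) in ranking.iter().enumerate() {
            *scores.entry((*id).to_string()).or_insert(0.0) += 1.0 / (k + rank as f64 + 1.0);
        }
    }
    let mut fused: Vec<(String, f64)> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused.into_iter().map(|(id, _)| id).collect()
}
```

Rank-based fusion sidesteps the main difficulty of hybrid search: BM25 scores and cosine similarities live on incomparable scales, while ranks are always comparable.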
intent_engine
knowledge
litm
loop_detection
mode_predictor
neural
Neural context compression — trained models replacing heuristic filters.
patterns
preservation
protocol
provider_cache
Provider caching awareness — helps LLM providers cache repeated context.
quality
sandbox
semantic_cache
semantic_chunks
Semantic Chunking with Attention Bridges.
session
signatures
signatures_ts
slow_log
stats
surprise
Predictive Surprise Scoring — conditional entropy relative to LLM knowledge.
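The information-theoretic core here is surprisal: tokens a model already expects carry little information and can be compressed away, while unexpected tokens should be preserved. A minimal sketch, assuming a hypothetical `predict` callback standing in for "LLM knowledge" (the module's real interface is not shown here):

```rust
/// Per-token surprisal -log2 p under a predictor. Expected tokens score
/// low; unexpected tokens score high and are worth keeping.
pub fn surprise_scores<F>(tokens: &[&str], predict: F) -> Vec<f64>
where
    F: Fn(&str) -> f64, // probability the predictor assigns to the token
{
    tokens
        .iter()
        .map(|t| -predict(t).max(1e-12).log2()) // clamp to avoid log(0)
        .collect()
}

/// Mean surprisal of a span: an estimate of its conditional entropy in
/// bits per token. Spans above a threshold would survive compression.
pub fn mean_surprise(scores: &[f64]) -> f64 {
    scores.iter().sum::<f64>() / scores.len() as f64
}
```

For example, a token with probability 0.5 scores 1 bit, one with probability 0.25 scores 2 bits.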
symbol_map
task_briefing
task_relevance
telemetry
Telemetry and metrics collection following OpenTelemetry GenAI conventions.
theme
tokens
updater
vector_index
version_check
watcher
File watcher for automatic incremental re-indexing.
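A watcher's contract is to emit the paths that changed since the last look so only those files are re-indexed. The real module presumably uses OS notifications, but the same contract can be sketched with a polling pass that compares modification times against a snapshot (everything below is an illustrative stand-in, not the module's API):

```rust
use std::collections::HashMap;
use std::fs;
use std::path::{Path, PathBuf};
use std::time::SystemTime;

/// Report files in `dir` that are new or newer than the snapshot, and
/// update the snapshot. Sketch only: no recursion into subdirectories.
pub fn changed_files(
    snapshot: &mut HashMap<PathBuf, SystemTime>,
    dir: &Path,
) -> std::io::Result<Vec<PathBuf>> {
    let mut changed = Vec::new();
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path();
        if !path.is_file() {
            continue;
        }
        let mtime = entry.metadata()?.modified()?;
        match snapshot.get(&path) {
            Some(prev) if *prev >= mtime => {} // unchanged since last poll
            _ => {
                snapshot.insert(path.clone(), mtime);
                changed.push(path);
            }
        }
    }
    Ok(changed)
}
```

Feeding the changed paths into an incremental index keeps re-indexing cost proportional to what actually changed.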
wrapped