Semantic Chunking with Attention Bridges.
Groups content into semantic chunks (function bodies, import blocks, type definitions) rather than treating lines independently. Orders chunks for optimal LLM attention flow:
- Most relevant chunk FIRST (high-attention position)
- Its immediate dependencies (imports, types it uses) adjacent
- Supporting context in the middle
- Tail anchor: brief reference back to the primary chunk (attention bridge)
This exploits how transformer attention actually works: locally coherent chunks plus global anchors beat scattered high-importance lines.
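The ordering described above can be sketched as follows. This is a minimal, hypothetical illustration, not the crate's actual implementation: the `Chunk` struct, its fields, and `order_chunks` are assumed names invented for this example.

```rust
// Hypothetical sketch of attention-flow ordering; the crate's real
// types and signatures are not shown on this page.
#[derive(Clone, Debug, PartialEq)]
struct Chunk {
    name: String,
    relevance: f32,       // task-relevance score (assumed field)
    deps: Vec<String>,    // names of chunks this one depends on (imports, types)
}

/// Order chunks for attention flow: most relevant chunk first,
/// its immediate dependencies adjacent, remaining chunks after,
/// sorted by descending relevance.
fn order_chunks(mut chunks: Vec<Chunk>) -> Vec<Chunk> {
    chunks.sort_by(|a, b| b.relevance.partial_cmp(&a.relevance).unwrap());
    let primary = chunks.remove(0);
    let dep_names = primary.deps.clone();
    // Pull the primary chunk's dependencies forward so they sit adjacent to it.
    let (deps, rest): (Vec<Chunk>, Vec<Chunk>) =
        chunks.into_iter().partition(|c| dep_names.contains(&c.name));
    let mut ordered = vec![primary];
    ordered.extend(deps);
    ordered.extend(rest);
    ordered
}

fn main() {
    let chunks = vec![
        Chunk { name: "imports".into(), relevance: 0.4, deps: vec![] },
        Chunk { name: "parse_config".into(), relevance: 0.9, deps: vec!["imports".into()] },
        Chunk { name: "helpers".into(), relevance: 0.2, deps: vec![] },
    ];
    let ordered = order_chunks(chunks);
    let names: Vec<&str> = ordered.iter().map(|c| c.name.as_str()).collect();
    println!("{:?}", names); // primary first, its dependency adjacent, rest after
}
```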
Structs
Enums
Functions
- detect_chunks - Detect semantic boundaries in content and group lines into chunks.
- order_for_attention - Score chunks by task relevance and reorder for optimal attention flow.
- render_with_bridges - Render chunks back to text with attention bridges.