Module semantic_chunks


Semantic Chunking with Attention Bridges.

Groups content into semantic chunks (function bodies, import blocks, type definitions) rather than treating lines independently. Orders chunks for optimal LLM attention flow:

  1. Most relevant chunk FIRST (high-attention position)
  2. Its immediate dependencies (imports, types it uses) adjacent
  3. Supporting context in the middle
  4. Tail anchor: brief reference back to the primary chunk (attention bridge)

This ordering exploits how transformer attention actually works: locally coherent chunks with global anchors beat scattered high-importance lines.
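The pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the module's actual implementation: the `SemanticChunk` fields, the blank-line boundary heuristic, the keyword-overlap relevance score, and the comment-style tail anchor are all assumptions made for the example.

```rust
// Hypothetical sketch of the semantic_chunks pipeline; real signatures may differ.

#[derive(Debug, Clone, PartialEq)]
enum ChunkKind { Imports, TypeDef, FunctionBody, Other }

#[derive(Debug, Clone)]
struct SemanticChunk {
    kind: ChunkKind,
    lines: Vec<String>,
}

/// Group lines into chunks at blank-line boundaries, classifying each
/// chunk by its first line (a crude stand-in for real boundary detection).
fn detect_chunks(content: &str) -> Vec<SemanticChunk> {
    let mut chunks = Vec::new();
    let mut current: Vec<String> = Vec::new();
    for line in content.lines() {
        if line.trim().is_empty() {
            if !current.is_empty() {
                chunks.push(make_chunk(std::mem::take(&mut current)));
            }
        } else {
            current.push(line.to_string());
        }
    }
    if !current.is_empty() {
        chunks.push(make_chunk(current));
    }
    chunks
}

fn make_chunk(lines: Vec<String>) -> SemanticChunk {
    let first = lines[0].trim_start();
    let kind = if first.starts_with("use ") || first.starts_with("import ") {
        ChunkKind::Imports
    } else if first.starts_with("struct ") || first.starts_with("enum ") {
        ChunkKind::TypeDef
    } else if first.starts_with("fn ") || first.starts_with("def ") {
        ChunkKind::FunctionBody
    } else {
        ChunkKind::Other
    };
    SemanticChunk { kind, lines }
}

/// Score each chunk by how many task keywords it contains and sort
/// descending (stable sort), so the most relevant chunk lands first.
fn order_for_attention(mut chunks: Vec<SemanticChunk>, task: &str) -> Vec<SemanticChunk> {
    let keywords: Vec<&str> = task.split_whitespace().collect();
    chunks.sort_by_key(|c| {
        let hits: usize = c.lines.iter()
            .map(|l| keywords.iter().filter(|k| l.contains(*k)).count())
            .sum();
        std::cmp::Reverse(hits)
    });
    chunks
}

/// Join chunks with blank lines and append a one-line tail anchor
/// (the "attention bridge") pointing back at the primary chunk.
fn render_with_bridges(chunks: &[SemanticChunk]) -> String {
    let body: Vec<String> = chunks.iter().map(|c| c.lines.join("\n")).collect();
    let mut out = body.join("\n\n");
    if let Some(primary) = chunks.first() {
        out.push_str(&format!(
            "\n\n// See primary chunk above: {}",
            primary.lines[0].trim()
        ));
    }
    out
}
```

A real implementation would use syntax-aware boundaries and a stronger relevance model, but the shape of the three exported functions follows this detect → order → render flow.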

Structs§

SemanticChunk

Enums§

ChunkKind

Functions§

detect_chunks
Detect semantic boundaries in content and group lines into chunks.
order_for_attention
Score chunks by task relevance and reorder for optimal attention flow.
render_with_bridges
Render chunks back to text with attention bridges.