
Module chunking


Token-aware semantic chunking utilities for embedding inputs whose bodies exceed the embedding window (Markdown-aware, 512-token limit).

Splits bodies using text_splitter::MarkdownSplitter with overlap so multi-chunk memories preserve context across chunk boundaries.

Structs§

Chunk
A contiguous slice of a body string identified by byte offsets.
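A minimal sketch of what a byte-offset chunk might look like, together with the slicing that `chunk_text` performs; the field names and exact signature are assumptions, since the page only shows the one-line summaries.

```rust
// Hypothetical shape of `Chunk`; the real struct's fields may differ.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Chunk {
    /// Byte offset where the chunk starts (inclusive).
    start: usize,
    /// Byte offset where the chunk ends (exclusive).
    end: usize,
}

/// Returns the slice of `body` described by `chunk`'s byte offsets,
/// mirroring the module's `chunk_text` function.
fn chunk_text(body: &str, chunk: Chunk) -> &str {
    &body[chunk.start..chunk.end]
}

fn main() {
    let body = "héllo world"; // 'é' is 2 bytes, so offsets count bytes, not chars
    let c = Chunk { start: 0, end: 6 };
    println!("{}", chunk_text(body, c)); // "héllo"
}
```

Storing byte offsets instead of copied strings keeps chunks cheap and lets callers re-slice the original body on demand, but offsets must land on UTF-8 character boundaries or the slice will panic.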

Constants§

CHUNK_OVERLAP_CHARS
Character overlap between consecutive chunks to preserve cross-boundary context.
CHUNK_SIZE_CHARS
Maximum character length of a single chunk (derived from token limit × chars-per-token).
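The 512-token limit comes from the module description above; a plausible derivation of the two constants might look like the following, where the chars-per-token factor and the overlap ratio are illustrative assumptions, not the crate's actual values.

```rust
// Hypothetical derivation; only the 512-token limit is stated by the docs.
const EMBEDDING_TOKEN_LIMIT: usize = 512;
const CHARS_PER_TOKEN: usize = 4; // rough heuristic for English prose
const CHUNK_SIZE_CHARS: usize = EMBEDDING_TOKEN_LIMIT * CHARS_PER_TOKEN; // 2048
const CHUNK_OVERLAP_CHARS: usize = CHUNK_SIZE_CHARS / 10; // ~10% overlap

fn main() {
    println!("size={CHUNK_SIZE_CHARS} overlap={CHUNK_OVERLAP_CHARS}");
}
```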

Functions§

aggregate_embeddings
Computes the mean of chunk_embeddings and L2-normalizes the result.
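A plain-Rust sketch of mean pooling followed by L2 normalization, as the summary describes; the real function's signature, element type, and error handling may differ.

```rust
// Mean-pool per dimension across chunk embeddings, then scale to unit length.
fn aggregate_embeddings(chunk_embeddings: &[Vec<f32>]) -> Vec<f32> {
    let n = chunk_embeddings.len() as f32;
    let dim = chunk_embeddings[0].len();
    // Element-wise mean across chunks.
    let mut mean = vec![0.0f32; dim];
    for emb in chunk_embeddings {
        for (m, x) in mean.iter_mut().zip(emb) {
            *m += x / n;
        }
    }
    // L2-normalize so the pooled vector has unit length.
    let norm = mean.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for m in &mut mean {
            *m /= norm;
        }
    }
    mean
}

fn main() {
    let pooled = aggregate_embeddings(&[vec![1.0, 0.0], vec![0.0, 1.0]]);
    println!("{pooled:?}"); // unit vector along the diagonal
}
```

Normalizing after pooling matters when downstream similarity search uses dot products: it makes multi-chunk embeddings directly comparable to single-chunk ones.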
chunk_text
Returns the string slice of body described by chunk’s byte offsets.
needs_chunking
Returns true when body exceeds CHUNK_SIZE_CHARS and must be split.
split_into_chunks
Splits body into overlapping Chunks using a character-based heuristic.
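The character-based heuristic can be sketched as fixed-size windows that step forward by the chunk size minus the overlap; the demo constants below are shrunk for readability, and the exact stepping logic is an assumption.

```rust
// Character-based overlap splitting with tiny demo constants;
// the module's real constants are far larger.
struct Chunk { start: usize, end: usize }

const CHUNK_SIZE_CHARS: usize = 12;   // demo value only
const CHUNK_OVERLAP_CHARS: usize = 4; // demo value only

/// Mirrors `needs_chunking`: true once the body exceeds the chunk size.
fn needs_chunking(body: &str) -> bool {
    body.chars().count() > CHUNK_SIZE_CHARS
}

fn split_into_chunks(body: &str) -> Vec<Chunk> {
    // Walk char boundaries so offsets never split a multi-byte char.
    let bounds: Vec<usize> =
        body.char_indices().map(|(i, _)| i).chain([body.len()]).collect();
    let step = CHUNK_SIZE_CHARS - CHUNK_OVERLAP_CHARS;
    let mut chunks = Vec::new();
    let mut pos = 0;
    while pos < bounds.len() - 1 {
        let end = (pos + CHUNK_SIZE_CHARS).min(bounds.len() - 1);
        chunks.push(Chunk { start: bounds[pos], end: bounds[end] });
        if end == bounds.len() - 1 { break; }
        pos += step;
    }
    chunks
}

fn main() {
    let body = "alpha beta gamma delta";
    assert!(needs_chunking(body));
    for c in split_into_chunks(body) {
        println!("{:?}", &body[c.start..c.end]);
    }
}
```

Each window shares its last `CHUNK_OVERLAP_CHARS` characters with the start of the next, so a sentence cut at a boundary still appears whole in one of the two chunks.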
split_into_chunks_by_token_offsets
Splits body into Chunks using pre-computed token byte-offsets.
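When token byte offsets are already available from a tokenizer pass, chunk boundaries can land exactly on token boundaries instead of arbitrary characters. A hypothetical sketch, with the parameter names and signature assumed:

```rust
// `token_offsets` holds the byte offset where each token starts,
// as produced by an upstream tokenizer.
struct Chunk { start: usize, end: usize }

fn split_into_chunks_by_token_offsets(
    body: &str,
    token_offsets: &[usize],
    max_tokens: usize,
    overlap_tokens: usize,
) -> Vec<Chunk> {
    let step = max_tokens - overlap_tokens;
    let mut chunks = Vec::new();
    let mut i = 0;
    while i < token_offsets.len() {
        let last = (i + max_tokens).min(token_offsets.len());
        // The chunk ends where the first token *after* it begins,
        // or at the end of the body for the final chunk.
        let end = if last == token_offsets.len() { body.len() } else { token_offsets[last] };
        chunks.push(Chunk { start: token_offsets[i], end });
        if last == token_offsets.len() { break; }
        i += step;
    }
    chunks
}

fn main() {
    let body = "one two three four five";
    let offsets = [0, 4, 8, 14, 19]; // token starts (whitespace tokenization here)
    for c in split_into_chunks_by_token_offsets(body, &offsets, 3, 1) {
        println!("{:?}", &body[c.start..c.end]);
    }
}
```

Counting in tokens rather than characters guarantees each chunk fits the embedding window exactly, instead of relying on a chars-per-token estimate.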
split_into_chunks_hierarchical
Splits body into chunks using MarkdownSplitter with a real tokenizer. Respects Markdown semantic boundaries (H1-H6, paragraphs, blocks). For plain text without Markdown markers, falls back to paragraph and sentence breaks.
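The real function delegates Markdown-aware splitting to text_splitter::MarkdownSplitter; the sketch below only illustrates the plain-text fallback it describes, preferring paragraph breaks and falling back to sentence breaks inside oversized paragraphs. Everything here is an illustrative assumption, not the crate's implementation.

```rust
// Plain-text fallback sketch: split on paragraph breaks first, then on
// sentence breaks when a paragraph alone exceeds the limit.
fn split_plain_text(body: &str, max_chars: usize) -> Vec<String> {
    let mut out = Vec::new();
    for para in body.split("\n\n") {
        if para.chars().count() <= max_chars {
            out.push(para.to_string());
        } else {
            // Pack whole sentences until the next one would overflow.
            let mut current = String::new();
            for sentence in para.split_inclusive(". ") {
                if current.chars().count() + sentence.chars().count() > max_chars
                    && !current.is_empty()
                {
                    out.push(current.trim_end().to_string());
                    current = String::new();
                }
                current.push_str(sentence);
            }
            if !current.is_empty() {
                out.push(current.trim_end().to_string());
            }
        }
    }
    out
}

fn main() {
    let body = "Short intro.\n\nOne sentence. Another sentence. A third one here.";
    for chunk in split_plain_text(body, 30) {
        println!("{chunk:?}");
    }
}
```

Splitting at semantic boundaries keeps each chunk self-contained, which generally yields better embeddings than cutting mid-sentence at a fixed character count.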