you know how every chunking library claims to be fast? yeah, we actually meant it.
chunk splits text at semantic boundaries (periods, newlines, the usual suspects) and does it stupid fast. we're talking "chunk the entire english wikipedia in 120ms" fast.
want to know how? read the blog post where we nerd out about SIMD instructions and lookup tables.
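the core idea is simple even without the SIMD tricks: scan backward through a fixed-size window for the last delimiter and split there. here's a naive scalar sketch of that idea (the function name and details are ours, not the crate's API, and the real implementation is SIMD-accelerated):

```rust
/// Naive delimiter-based chunking: emit chunks of at most `max_size` bytes,
/// preferring to split just after the last delimiter found in each window.
/// (Illustration only; assumes `max_size > 0`.)
fn chunk_naive<'a>(text: &'a [u8], max_size: usize, delims: &[u8]) -> Vec<&'a [u8]> {
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < text.len() {
        let window_end = (start + max_size).min(text.len());
        let mut split = window_end;
        if window_end < text.len() {
            // scan backward through the window for the last delimiter
            if let Some(pos) = text[start..window_end]
                .iter()
                .rposition(|b| delims.contains(b))
            {
                split = start + pos + 1; // split just after the delimiter
            }
            // no delimiter in the window -> hard split at window_end
        }
        chunks.push(&text[start..split]);
        start = split;
    }
    chunks
}

// chunk_naive(b"Hello world. How are you? I'm fine.", 16, b"\n.?")
// -> ["Hello world.", " How are you?", " I'm fine."]
```

the real library does the same backward scan, just vectorized with SIMD instructions and lookup tables instead of a byte-at-a-time loop.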
📦 Installation
looking for python or javascript?
🚀 Usage
```rust
// NOTE: argument values and exact method signatures below are reconstructed
// (formatting stripped them); check the crate docs for the authoritative API.
use chunk::chunk;

let text = b"Hello world. How are you? I'm fine.\nThanks for asking.";

// With defaults (4KB chunks, split at \n . ?)
let chunks: Vec<&[u8]> = chunk(text).collect();

// With custom size (value here is just an example)
let chunks: Vec<&[u8]> = chunk(text).size(1024).collect();

// With custom delimiters
let chunks: Vec<&[u8]> = chunk(text).delimiters(b"\n.?").collect();

// With multi-byte pattern (e.g., metaspace ▁ for SentencePiece tokenizers)
let metaspace = "▁".as_bytes();
let chunks: Vec<&[u8]> = chunk(text).pattern(metaspace).prefix().collect();

// With consecutive pattern handling (split at START of runs, not middle)
let chunks: Vec<&[u8]> = chunk(text)
    .pattern(metaspace)
    .consecutive()
    .collect();

// With forward fallback (search forward if no pattern in backward window)
let chunks: Vec<&[u8]> = chunk(text)
    .pattern(metaspace)
    .forward_fallback()
    .collect();
```
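to picture what "split at START of runs" means: given a run of delimiters like `...`, the split point backs up to the run's first byte instead of landing mid-run. a scalar sketch of that lookup (hypothetical helper, not the crate's API):

```rust
/// Find the index where the *last run* of `delim` bytes in `window` begins.
/// Returns None if the delimiter never occurs.
/// (Illustration of the "consecutive" option's split-point choice.)
fn last_run_start(window: &[u8], delim: u8) -> Option<usize> {
    // locate the last occurrence of the delimiter...
    let end = window.iter().rposition(|&b| b == delim)?;
    // ...then walk backward to the first byte of that run
    let mut start = end;
    while start > 0 && window[start - 1] == delim {
        start -= 1;
    }
    Some(start)
}

// last_run_start(b"foo...bar", b'.') -> Some(3), not Some(5):
// the split lands before the whole "..." run, never inside it.
```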
📝 Citation
If you use chunk in your research, please cite it as follows:
📄 License
Licensed under either the Apache License, Version 2.0, or the MIT license, at your option.