memchunk 0.1.5

The fastest semantic text chunking library — up to 1TB/s chunking throughput
Documentation

you know how every chunking library claims to be fast? yeah, we actually meant it.

memchunk splits text at semantic boundaries (periods, newlines, the usual suspects) and does it stupid fast. we're talking "chunk the entire english wikipedia in 120ms" fast.

want to know how? read the blog post where we nerd out about SIMD instructions and lookup tables.
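
curious what the core trick even is? conceptually it's simple: take at most `size` bytes per chunk, and cut at the last delimiter inside that window (hard-splitting only if there isn't one). here's a rough scalar sketch of that idea, purely for illustration; this is not memchunk's SIMD code, and chunk_scalar is not part of the API:

// Plain scalar sketch of size-bounded semantic chunking (illustration only;
// memchunk's real implementation uses SIMD and lookup tables instead).
fn chunk_scalar<'a>(mut text: &'a [u8], size: usize, delims: &[u8]) -> Vec<&'a [u8]> {
    let mut out = Vec::new();
    while !text.is_empty() {
        if text.len() <= size {
            out.push(text);
            break;
        }
        // Prefer the last semantic boundary within the window; otherwise hard-split.
        let cut = text[..size]
            .iter()
            .rposition(|b| delims.contains(b))
            .map(|i| i + 1) // keep the delimiter with its chunk in this sketch
            .unwrap_or(size);
        out.push(&text[..cut]);
        text = &text[cut..];
    }
    out
}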

📦 Installation

cargo add memchunk
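
prefer editing Cargo.toml by hand? the equivalent dependency entry (matching the version at the top of this page) is:

[dependencies]
memchunk = "0.1.5"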

looking for python or javascript?

🚀 Usage

use memchunk::chunk;

let text = b"Hello world. How are you? I'm fine.\nThanks for asking.";

// With defaults (4KB chunks, split at \n . ?)
let chunks: Vec<&[u8]> = chunk(text).collect();

// With custom size
let chunks: Vec<&[u8]> = chunk(text).size(1024).collect();

// With custom delimiters
let chunks: Vec<&[u8]> = chunk(text).delimiters(b"\n.?!").collect();

// With both
let chunks: Vec<&[u8]> = chunk(text).size(8192).delimiters(b"\n").collect();
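
// Since chunk(...) is an iterator over byte slices (that's all the .collect()
// calls above rely on), you can also stream chunks without collecting them:
for c in chunk(text) {
    // from_utf8 returns a Result because chunks are raw bytes.
    println!("{:?}", std::str::from_utf8(c));
}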

📝 Citation

If you use memchunk in your research, please cite it as follows:

@software{memchunk2025,
  author = {Minhas, Bhavnick},
  title = {memchunk: The fastest text chunking library},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/chonkie-inc/memchunk}},
}

📄 License

Licensed under either the Apache License, Version 2.0 or the MIT license, at your option.