# use-token
Composable tokenization primitives for Rust.
use-token keeps tokenization explicit and small. It handles whitespace splitting, conservative
word tokenization, lightweight sentence boundaries, and character spans without claiming to be a
full NLP parser.
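For a sense of what "conservative" whitespace splitting means here, the behavior can be sketched with the standard library alone. This is an illustrative stand-in, not the crate's actual implementation:

```rust
// Illustrative sketch only; use-token's real implementation may differ.
// `str::split_whitespace` splits on Unicode whitespace and discards
// empty segments, which is the conservative behavior described above.
fn tokenize_whitespace(text: &str) -> Vec<&str> {
    text.split_whitespace().collect()
}

fn main() {
    let tokens = tokenize_whitespace("  hello\tworld \n");
    // Leading, trailing, and repeated whitespace produce no empty tokens.
    assert_eq!(tokens, vec!["hello", "world"]);
    println!("{:?}", tokens);
}
```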
## Included primitives
- `tokenize_whitespace`
- `tokenize_words`
- `tokenize_sentences`
- `tokenize_chars`
- `token_count`
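Character spans of the kind `tokenize_chars` suggests can be sketched with `char_indices`. The signature below is hypothetical; the crate's actual API may differ:

```rust
// Illustrative sketch: yields each character together with its byte-offset
// span, which is what a "character spans" primitive typically returns.
fn char_spans(text: &str) -> Vec<(usize, usize, char)> {
    text.char_indices()
        .map(|(start, ch)| (start, start + ch.len_utf8(), ch))
        .collect()
}

fn main() {
    // Spans are byte offsets, so multi-byte characters get wider spans:
    // 'é' occupies two bytes in UTF-8.
    let spans = char_spans("aé");
    assert_eq!(spans, vec![(0, 1, 'a'), (1, 3, 'é')]);
}
```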
## Example
The calls below are a sketch: they assume the tokenizers return `Vec<&str>` and that `token_count` returns `usize`.

```rust
use use_token::{token_count, tokenize_whitespace};

// Illustrative signatures; see the crate docs for the exact API.
assert_eq!(tokenize_whitespace("hello world"), vec!["hello", "world"]);
assert_eq!(tokenize_whitespace("  spaced   out  "), vec!["spaced", "out"]);
assert_eq!(token_count("one two three"), 3);
```