Token counting utilities using BPE tokenization.
This module provides approximate token counting for resource content using the cl100k BPE encoding, the tokenizer used by GPT-4 models, which also yields a close approximation for Claude models.
§Usage
use agpm_cli::tokens;
let content = "Hello, world!";
let count = tokens::count_tokens(content);
println!("Approximate token count: {}", count);

§Performance
The tokenizer is lazily initialized on first use and cached for subsequent calls. Token counting is O(n) and optimized for high throughput.
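The lazy-initialization-plus-caching pattern described above can be sketched with `std::sync::OnceLock`. The `Tokenizer` below is a hypothetical stand-in (a rough four-characters-per-token heuristic), not the real cl100k BPE implementation; only the caching structure mirrors the module's described behavior.

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the real BPE tokenizer: a cheap
// ~4-characters-per-token heuristic, used only to illustrate
// the lazy-initialization pattern.
struct Tokenizer {
    chars_per_token: usize,
}

impl Tokenizer {
    fn count(&self, content: &str) -> usize {
        // Ceiling division so any non-empty string counts at least one token.
        (content.chars().count() + self.chars_per_token - 1) / self.chars_per_token
    }
}

// Cached tokenizer: built on first use, reused on every later call.
static TOKENIZER: OnceLock<Tokenizer> = OnceLock::new();

fn count_tokens(content: &str) -> usize {
    let tokenizer = TOKENIZER.get_or_init(|| Tokenizer { chars_per_token: 4 });
    tokenizer.count(content)
}

fn main() {
    println!("{}", count_tokens("Hello, world!")); // 13 chars -> 4 tokens
}
```

Because `OnceLock::get_or_init` runs its closure at most once, repeated calls pay only the cost of an atomic load, which is what makes the O(n) counting path cheap to invoke at high throughput.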
Functions§
- count_tokens - Count approximate tokens in content using cl100k encoding.
- format_token_count - Format a token count for human-readable display.
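A minimal sketch of what `format_token_count` might do; the 1,000-token threshold and the `K` suffix are assumptions chosen for illustration, not the crate's documented output format.

```rust
// Hypothetical formatting rules (the real crate's output may differ):
// counts below 1000 print as-is, larger counts in thousands with one
// decimal place, e.g. 1234 -> "1.2K tokens".
fn format_token_count(count: usize) -> String {
    if count < 1000 {
        format!("{count} tokens")
    } else {
        format!("{:.1}K tokens", count as f64 / 1000.0)
    }
}

fn main() {
    println!("{}", format_token_count(42));   // "42 tokens"
    println!("{}", format_token_count(1234)); // "1.2K tokens"
}
```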