pub fn encoding_for_model(model: &str) -> Option<&'static CoreBpe>
Get a cached tokenizer by model name.
Supports OpenAI, Meta, DeepSeek, Qwen, and Mistral models.
Model names are matched by prefix and mapped to their encoding; returns None if no known prefix matches.
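A minimal usage sketch based on the signature above. The crate path (tokenizer_crate) is a placeholder for whichever crate exports this function, and the exact methods available on CoreBpe are not shown here, so only the Option handling is demonstrated:

```rust
// Hypothetical crate name; substitute the crate that provides this API.
use tokenizer_crate::encoding_for_model;

fn main() {
    // A recognized model prefix resolves to a cached tokenizer.
    match encoding_for_model("gpt-4o") {
        Some(bpe) => {
            // `bpe` is a &'static CoreBpe; call its encode/decode
            // methods here as documented by the crate.
            let _ = bpe;
        }
        None => println!("model not recognized"),
    }

    // Unknown model names yield None rather than panicking.
    assert!(encoding_for_model("not-a-real-model").is_none());
}
```

Because the returned reference is 'static, the tokenizer is cached for the lifetime of the process and can be stored or shared freely without cloning.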