Chat template support for LLM examples
This module provides Jinja-based chat template rendering compatible with
HuggingFace’s tokenizer.apply_chat_template() functionality.
§Example
```rust
use candle_examples::chat_template::{ChatTemplate, ChatTemplateOptions, Message, Conversation};

// Load template from a model's tokenizer_config.json
let template = ChatTemplate::from_tokenizer_config("path/to/tokenizer_config.json")?;

// Or use a preset for known models
let template = ChatTemplate::chatml(); // SmolLM, Qwen, etc.

// Single-turn
let messages = vec![
    Message::system("You are helpful."),
    Message::user("Hello!"),
];
let prompt = template.apply_for_generation(&messages)?;

// Multi-turn conversation
let mut conv = Conversation::new(template, "You are helpful.");
let prompt = conv.user_turn("Hello!")?;
// ... generate response ...
conv.assistant_response("Hi there!");
let prompt = conv.user_turn("How are you?")?;
```
Structs§
- ChatTemplate - Chat template renderer using MiniJinja
- ChatTemplateOptions - Options for applying a chat template
- Conversation - Multi-turn conversation manager
- Message - A chat message with role and content
- NamedTemplate
- TokenConfig - Token configuration loaded from tokenizer_config.json
Enums§
- ChatTemplateConfig - Chat template that can be a single string or multiple named templates
- ChatTemplateError - Errors that can occur with chat templates
- StringOrToken - Handles both string and object token formats in tokenizer_config.json
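For readers unfamiliar with what the `ChatTemplate::chatml()` preset renders, the sketch below shows the ChatML wire format with plain string formatting. This is an illustrative stand-in, not the module's implementation: `render_chatml` and the `(&str, &str)` role/content pairs are hypothetical substitutes for the module's Jinja rendering and `Message` type.

```rust
// Illustrative sketch (assumption: the chatml() preset produces the standard
// ChatML layout used by SmolLM, Qwen, and similar models).
fn render_chatml(messages: &[(&str, &str)], add_generation_prompt: bool) -> String {
    let mut out = String::new();
    for (role, content) in messages {
        // Each message is wrapped in <|im_start|>{role} ... <|im_end|> markers.
        out.push_str(&format!("<|im_start|>{role}\n{content}<|im_end|>\n"));
    }
    if add_generation_prompt {
        // An open assistant turn cues the model to begin its reply,
        // mirroring apply_chat_template(..., add_generation_prompt=True).
        out.push_str("<|im_start|>assistant\n");
    }
    out
}

fn main() {
    let prompt = render_chatml(
        &[("system", "You are helpful."), ("user", "Hello!")],
        true,
    );
    // Renders:
    // <|im_start|>system
    // You are helpful.<|im_end|>
    // <|im_start|>user
    // Hello!<|im_end|>
    // <|im_start|>assistant
    print!("{prompt}");
}
```
The trailing open `assistant` turn is why the example above calls `apply_for_generation` rather than rendering messages alone: without it, the model has no cue to start generating its response.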