pub struct LocalPrompt {
pub generation_prefix: Mutex<Option<String>>,
pub built_prompt_string: Mutex<Option<String>>,
pub built_prompt_as_tokens: Mutex<Option<Vec<u32>>>,
pub total_prompt_tokens: Mutex<Option<usize>>,
/* private fields */
}
A prompt formatter for local LLMs that use chat templates.
LocalPrompt handles formatting messages according to a model’s chat template,
managing special tokens (BOS, EOS, UNK), and supporting generation prefixes.
Unlike API prompts, local prompts need to handle the specific formatting requirements
and token conventions of locally-run models.
The struct maintains both string and tokenized representations of the built prompt, using thread-safe interior mutability (Mutex) to manage prompt state. It supports token counting and generation-prefix management for model outputs.
Fields

generation_prefix: Mutex<Option<String>>
built_prompt_string: Mutex<Option<String>>
built_prompt_as_tokens: Mutex<Option<Vec<u32>>>
total_prompt_tokens: Mutex<Option<usize>>

Implementations
impl LocalPrompt

pub fn get_built_prompt(&self) -> Result<String, Error>
Retrieves the built prompt as a formatted string.
Returns the complete prompt string with all messages formatted according to the chat template, including any special tokens and generation prefix.
Returns
Returns Ok(String) containing the formatted prompt string.
Errors
Returns an error if the prompt has not been built yet.
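The "built or not yet built" behavior above can be sketched with a standalone example of the `Mutex<Option<String>>` pattern the struct uses. This is a hypothetical, simplified re-implementation, not the crate's actual code: `PromptState`, its plain `String` error type, and `set_built_prompt` are illustrative stand-ins.

```rust
use std::sync::Mutex;

// Hypothetical sketch: a field of type Mutex<Option<String>> whose getter
// returns an error until the prompt has been built.
struct PromptState {
    built_prompt_string: Mutex<Option<String>>,
}

impl PromptState {
    fn new() -> Self {
        Self {
            built_prompt_string: Mutex::new(None),
        }
    }

    // Mirrors the shape of get_built_prompt: Ok(String) once built,
    // Err(..) beforehand. A plain String error is used here for brevity.
    fn get_built_prompt(&self) -> Result<String, String> {
        self.built_prompt_string
            .lock()
            .unwrap()
            .clone()
            .ok_or_else(|| "prompt has not been built yet".to_string())
    }

    // Stand-in for whatever build step populates the field.
    fn set_built_prompt(&self, s: &str) {
        *self.built_prompt_string.lock().unwrap() = Some(s.to_string());
    }
}
```

Because the field holds an `Option` behind a `Mutex`, the getter can distinguish "never built" from "built" without requiring `&mut self`, which is why all four getters on `LocalPrompt` take `&self`.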
pub fn get_built_prompt_as_tokens(&self) -> Result<Vec<u32>, Error>
Retrieves the built prompt as a vector of tokens.
Returns the complete prompt converted to model tokens using the configured tokenizer. This is useful for operations that need to work directly with token IDs rather than text.
Returns
Returns Ok(Vec<u32>) containing the token IDs for the prompt.
Errors
Returns an error if the prompt has not been built yet.
pub fn get_total_prompt_tokens(&self) -> Result<usize, Error>
Gets the total number of tokens in the built prompt.
Returns the exact token count of the built prompt, which is useful for ensuring prompts stay within model context limits. This count reflects all content, special tokens, and any generation prefix.
Returns
Returns Ok(usize) containing the total token count.
Errors
Returns an error if the prompt has not been built yet.
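A typical use of the token count is the context-limit check described above. The sketch below is a hypothetical helper, not part of the crate: `fits_in_context` and its parameters are illustrative, with `total_prompt_tokens` standing in for the value `get_total_prompt_tokens` would return.

```rust
// Hypothetical sketch: decide whether a prompt fits the model's context
// window, leaving room for the tokens the model will generate.
fn fits_in_context(total_prompt_tokens: usize, context_len: usize, max_new_tokens: usize) -> bool {
    total_prompt_tokens + max_new_tokens <= context_len
}
```

For example, a 3000-token prompt with 512 tokens reserved for generation fits a 4096-token context, while a 4000-token prompt does not.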