pub struct LlmPrompt {
pub local_prompt: Option<LocalPrompt>,
pub api_prompt: Option<ApiPrompt>,
pub messages: PromptMessages,
pub concatenator: TextConcatenator,
pub built_prompt_messages: Mutex<Option<Vec<HashMap<String, String>>>>,
}
A prompt management system that supports both API-based LLMs (like OpenAI) and local LLMs.

`LlmPrompt` provides a unified interface for building and managing prompts in different formats, with support for both API-style messaging (system/user/assistant) and local LLM chat templates. It handles token counting, message validation, and proper prompt formatting.
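Built prompts are exposed as `Vec<HashMap<String, String>>`. A minimal sketch of that shape, assuming the conventional `"role"`/`"content"` key names (which this page does not spell out explicitly):

```rust
use std::collections::HashMap;

// Build one message in the Vec<HashMap<String, String>> shape that
// built_prompt_messages and get_built_prompt_messages() use.
// The "role"/"content" key names are an assumption based on the
// system/user/assistant messaging style described above.
fn message(role: &str, content: &str) -> HashMap<String, String> {
    HashMap::from([
        ("role".to_string(), role.to_string()),
        ("content".to_string(), content.to_string()),
    ])
}

fn main() {
    let messages: Vec<HashMap<String, String>> = vec![
        message("system", "You are a helpful assistant."),
        message("user", "What is the capital of France?"),
    ];
    assert_eq!(messages[0]["role"], "system");
    println!("built {} messages", messages.len());
}
```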
Fields

local_prompt: Option<LocalPrompt>
api_prompt: Option<ApiPrompt>
messages: PromptMessages
concatenator: TextConcatenator
built_prompt_messages: Mutex<Option<Vec<HashMap<String, String>>>>
Implementations

impl LlmPrompt
pub fn new_local_prompt(
    tokenizer: Arc<dyn PromptTokenizer>,
    chat_template: &str,
    bos_token: Option<&str>,
    eos_token: &str,
    unk_token: Option<&str>,
    base_generation_prefix: Option<&str>,
) -> LlmPrompt
Creates a new prompt instance configured for local LLMs using chat templates.
Arguments

- `tokenizer` - A tokenizer implementation for counting tokens
- `chat_template` - The chat template string used to format messages
- `bos_token` - Optional beginning-of-sequence token
- `eos_token` - End-of-sequence token
- `unk_token` - Optional unknown token
- `base_generation_prefix` - Optional prefix to add before generation
Returns

A new `LlmPrompt` instance configured for local LLM usage.
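A rough sketch of how the special tokens passed to `new_local_prompt` come into play when a chat template formats messages. The `<|role|>` marker syntax here is invented for illustration; real chat templates are model-specific template strings:

```rust
// Illustrative only: real chat templates are model-specific strings,
// and the <|role|> markers here are invented for the sketch.
fn apply_template(
    messages: &[(&str, &str)], // (role, content)
    bos_token: Option<&str>,
    eos_token: &str,
    base_generation_prefix: Option<&str>,
) -> String {
    let mut out = String::new();
    if let Some(bos) = bos_token {
        out.push_str(bos); // BOS opens the whole prompt once
    }
    for &(role, content) in messages {
        // Each message is wrapped in role markers and closed with EOS.
        out.push_str(&format!("<|{role}|>\n{content}{eos_token}"));
    }
    if let Some(prefix) = base_generation_prefix {
        out.push_str(prefix); // primes the model's next turn
    }
    out
}

fn main() {
    let prompt = apply_template(
        &[("system", "Be terse."), ("user", "Hi")],
        Some("<s>"),
        "</s>",
        Some("<|assistant|>\n"),
    );
    println!("{prompt}");
}
```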
pub fn new_api_prompt(
    tokenizer: Arc<dyn PromptTokenizer>,
    tokens_per_message: Option<u32>,
    tokens_per_name: Option<i32>,
) -> LlmPrompt
Creates a new prompt instance configured for API-based LLMs like OpenAI.
Arguments

- `tokenizer` - A tokenizer implementation for counting tokens
- `tokens_per_message` - Optional number of tokens to add per message (model-specific)
- `tokens_per_name` - Optional number of tokens to add for names (model-specific)
Returns

A new `LlmPrompt` instance configured for API usage.
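The `tokens_per_message` / `tokens_per_name` parameters mirror OpenAI-style token accounting, where each message carries a fixed overhead on top of its content tokens. A sketch of that arithmetic (word counts stand in for a real tokenizer, and the trailing `+3` reply priming is an assumption borrowed from OpenAI's published counting recipe, not something this crate documents):

```rust
// Stand-in tokenizer: word count instead of a real BPE tokenizer.
fn count_tokens(text: &str) -> i64 {
    text.split_whitespace().count() as i64
}

// OpenAI-style estimate: per-message overhead + role and content tokens,
// plus a per-name adjustment (negative on some models) when a message
// carries a name field.
fn estimate_prompt_tokens(
    messages: &[(&str, Option<&str>, &str)], // (role, name, content)
    tokens_per_message: i64,
    tokens_per_name: i64,
) -> i64 {
    let mut total = 0;
    for &(role, name, content) in messages {
        total += tokens_per_message;
        total += count_tokens(role) + count_tokens(content);
        if name.is_some() {
            total += tokens_per_name;
        }
    }
    total + 3 // assumed: every reply is primed with an assistant header
}

fn main() {
    let msgs = [("system", None, "You are terse"), ("user", None, "Hello there")];
    // 2 * 3 overhead + (1 + 3) + (1 + 2) role/content tokens + 3 priming = 16
    println!("~{} tokens", estimate_prompt_tokens(&msgs, 3, 1));
}
```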
pub fn add_system_message(&self) -> Result<Arc<PromptMessage>, Error>
Adds a system message to the prompt.
System messages must be the first message in the sequence. Returns an error if attempting to add a system message after other messages.
Returns
A reference to the newly created message for setting content, or an error if validation fails.
pub fn add_user_message(&self) -> Result<Arc<PromptMessage>, Error>
Adds a user message to the prompt.
Cannot add a user message directly after another user message. Returns an error if attempting to add consecutive user messages.
Returns
A reference to the newly created message for setting content, or an error if validation fails.
pub fn add_assistant_message(&self) -> Result<Arc<PromptMessage>, Error>
Adds an assistant message to the prompt.
Cannot be the first message or follow another assistant message. Returns an error if attempting to add as first message or after another assistant message.
Returns
A reference to the newly created message for setting content, or an error if validation fails.
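The ordering rules enforced by the three `add_*` methods above can be summarized in a small validator. This is a sketch of the documented rules, not the crate's actual implementation:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Role {
    System,
    User,
    Assistant,
}

// Encodes the documented rules: a system message only as the first
// message, no two consecutive user messages, and an assistant message
// neither first nor directly after another assistant message.
fn can_append(existing: &[Role], next: Role) -> bool {
    match next {
        Role::System => existing.is_empty(),
        Role::User => existing.last() != Some(&Role::User),
        Role::Assistant => {
            !existing.is_empty() && existing.last() != Some(&Role::Assistant)
        }
    }
}

fn main() {
    let conversation = [Role::System, Role::User, Role::Assistant];
    assert!(can_append(&conversation, Role::User));
    assert!(!can_append(&conversation, Role::System));
    println!("ordering rules hold");
}
```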
pub fn set_generation_prefix<T>(&self, generation_prefix: T)
Sets a prefix to be added before generation for local LLMs.
This is typically used to prime the model’s response. It only applies to local LLM prompts and has no effect on API prompts.
Arguments

- `generation_prefix` - The text to add before generation
pub fn clear_generation_prefix(&self)
Clears any previously set generation prefix.
pub fn reset_prompt(&self)
Resets the prompt, clearing all messages and built state.
pub fn clear_built_prompt(&self)
Clears any built prompt state, forcing a rebuild on next access.
pub fn local_prompt(&self) -> Result<&LocalPrompt, Error>
Gets and builds the local prompt, if this prompt has one. This method is required to unwrap and build the prompt.
Returns

A reference to the `LocalPrompt` if present, otherwise returns an error.
pub fn api_prompt(&self) -> Result<&ApiPrompt, Error>
Gets and builds the API prompt, if this prompt has one. This method is required to unwrap and build the prompt.
Returns

A reference to the `ApiPrompt` if present, otherwise returns an error.
pub fn get_built_prompt_messages(
    &self,
) -> Result<Vec<HashMap<String, String>>, Error>
Retrieves the prompt messages in a standardized format compatible with API calls.
This method returns messages in the same format as `ApiPrompt::get_built_prompt()`, making it useful for consistent message handling across different LLM implementations.
The method handles lazy building of the prompt - if the messages haven’t been built yet,
it will trigger the build process automatically.
Returns

Returns `Ok(Vec<HashMap<String, String>>)` containing the formatted messages on success.

Errors
Returns an error if:
- The current message sequence violates prompt rules (e.g., assistant message first)
- The build process fails
- The built messages are unexpectedly None after building
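The lazy build described above matches the `built_prompt_messages: Mutex<Option<...>>` field: build on first access, serve the cached copy afterwards, and reset to `None` (as `clear_built_prompt` does) to force a rebuild. A minimal stdlib sketch of that pattern, not the crate's actual code:

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct BuiltCache {
    // Mirrors built_prompt_messages: None until built, Some(..) once cached.
    built: Mutex<Option<Vec<HashMap<String, String>>>>,
}

impl BuiltCache {
    // Build on first access; later calls return the cached value and
    // never invoke the builder.
    fn get_or_build(
        &self,
        build: impl FnOnce() -> Vec<HashMap<String, String>>,
    ) -> Vec<HashMap<String, String>> {
        let mut guard = self.built.lock().unwrap();
        guard.get_or_insert_with(build).clone()
    }

    // Analogue of clear_built_prompt(): forces a rebuild on next access.
    fn clear(&self) {
        *self.built.lock().unwrap() = None;
    }
}

fn main() {
    let cache = BuiltCache { built: Mutex::new(None) };
    let msg = HashMap::from([("role".to_string(), "user".to_string())]);
    let first = cache.get_or_build(|| vec![msg.clone()]);
    let second = cache.get_or_build(Vec::new); // cached: builder unused
    assert_eq!(first, second);
    cache.clear();
    assert!(cache.get_or_build(Vec::new).is_empty()); // rebuilt after clear
    println!("lazy build cache works");
}
```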
Trait Implementations

impl Serialize for LlmPrompt

fn serialize<__S>(
    &self,
    __serializer: __S,
) -> Result<<__S as Serializer>::Ok, <__S as Serializer>::Error>
where
    __S: Serializer,
impl TextConcatenatorTrait for LlmPrompt

fn concatenator_mut(&mut self) -> &mut TextConcatenator
fn clear_built(&self)
fn concate_deol(&mut self) -> &mut Self
fn concate_seol(&mut self) -> &mut Self
fn concate_space(&mut self) -> &mut Self
fn concate_comma(&mut self) -> &mut Self
Auto Trait Implementations
impl !Freeze for LlmPrompt
impl !RefUnwindSafe for LlmPrompt
impl Send for LlmPrompt
impl Sync for LlmPrompt
impl Unpin for LlmPrompt
impl !UnwindSafe for LlmPrompt
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T
where
    T: Clone,

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.