Struct llama_cpp_2::context::LlamaContext
pub struct LlamaContext<'a> {
pub model: &'a LlamaModel,
/* private fields */
}
Safe wrapper around llama_context.
Fields§
§model: &'a LlamaModel
A reference to the context’s model.
Implementations§
impl LlamaContext<'_>
pub fn copy_cache(&mut self, src: i32, dest: i32, size: i32)
Copy the cache from one sequence to another.
§Parameters
- src - The sequence id to copy the cache from.
- dest - The sequence id to copy the cache to.
- size - The size of the cache to copy.
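As a sketch of one way this can be used: forking a processed prompt into a second sequence so both sequences can continue decoding from the same prefix. ctx and n_past are hypothetical names for an existing context and the number of cached cells; nothing here is prescribed by the crate.

```rust
use llama_cpp_2::context::LlamaContext;

/// Fork the first `n_past` cached cells of sequence 0 into sequence 1.
fn fork_sequence(ctx: &mut LlamaContext<'_>, n_past: i32) {
    // src = 0, dest = 1, size = number of cells to copy.
    ctx.copy_cache(0, 1, n_past);
}
```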
pub fn copy_kv_cache_seq(
    &mut self,
    src: i32,
    dest: i32,
    p0: Option<u16>,
    p1: Option<u16>
)
Copy the cache from one sequence to another.
§Parameters
- src - The sequence id to copy the cache from.
- dest - The sequence id to copy the cache to.
- p0 - The start position of the cache to copy. If None, the cache is copied up to p1.
- p1 - The end position of the cache to copy. If None, the cache is copied starting from p0.
pub fn clear_kv_cache_seq(&mut self, src: i32, p0: Option<u16>, p1: Option<u16>)
Clear the kv cache for the given sequence.
§Parameters
- src - The sequence id to clear the cache for.
- p0 - The start position of the cache to clear. If None, the cache is cleared up to p1.
- p1 - The end position of the cache to clear. If None, the cache is cleared starting from p0.
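A minimal sketch of rolling a sequence back to an earlier position, e.g. to regenerate from a previous point; ctx and keep_until are hypothetical names.

```rust
use llama_cpp_2::context::LlamaContext;

/// Drop every cached cell of sequence 0 from position `keep_until` onwards.
fn rollback(ctx: &mut LlamaContext<'_>, keep_until: u16) {
    // p0 = keep_until, p1 = None => clear from `keep_until` to the end.
    ctx.clear_kv_cache_seq(0, Some(keep_until), None);
}
```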
pub fn get_kv_cache_used_cells(&self) -> i32
Returns the number of used KV cells (i.e. cells that have at least one sequence assigned to them).
pub fn clear_kv_cache(&mut self)
Clear the KV cache
pub fn llama_kv_cache_seq_keep(&mut self, seq_id: i32)
Removes all tokens that do not belong to the specified sequence
§Parameters
- seq_id - The sequence id to keep.
pub fn kv_cache_seq_add(
    &mut self,
    seq_id: i32,
    p0: Option<u16>,
    p1: Option<u16>,
    delta: i32
)
Adds the relative position delta to all tokens that belong to the specified sequence and have positions in [p0, p1). If the KV cache is RoPEd, the KV data is updated accordingly:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
§Parameters
- seq_id - The sequence id to update.
- p0 - The start position of the cache to update. If None, the cache is updated up to p1.
- p1 - The end position of the cache to update. If None, the cache is updated starting from p0.
- delta - The relative position to add to the tokens.
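As a sketch, this can be combined with clear_kv_cache_seq for simple context shifting: drop a window of old tokens and slide the remainder left so decoding can continue in the freed space. ctx, n_keep and n_discard are hypothetical names.

```rust
use llama_cpp_2::context::LlamaContext;

/// Discard `n_discard` tokens after the first `n_keep` ones in sequence 0 and
/// shift the rest of the sequence down by `n_discard` positions.
fn shift_context(ctx: &mut LlamaContext<'_>, n_keep: u16, n_discard: u16) {
    // Remove the window [n_keep, n_keep + n_discard).
    ctx.clear_kv_cache_seq(0, Some(n_keep), Some(n_keep + n_discard));
    // Shift everything after the removed window left by `n_discard` positions.
    ctx.kv_cache_seq_add(0, Some(n_keep + n_discard), None, -i32::from(n_discard));
}
```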
pub fn kv_cache_seq_div(
    &mut self,
    seq_id: i32,
    p0: Option<u16>,
    p1: Option<u16>,
    d: NonZeroU8
)
Integer division of the positions by a factor of d > 1. If the KV cache is RoPEd, the KV data is updated accordingly:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
§Parameters
- seq_id - The sequence id to update.
- p0 - The start position of the cache to update. If None, the cache is updated up to p1.
- p1 - The end position of the cache to update. If None, the cache is updated starting from p0.
- d - The factor to divide the positions by.
pub fn kv_cache_seq_pos_max(&self, seq_id: i32) -> i32
Returns the largest position present in the KV cache for the specified sequence
§Parameters
- seq_id - The sequence id to get the max position for.
pub fn kv_cache_defrag(&mut self)
Defragment the KV cache. This will be applied:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
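A minimal sketch of forcing the defragmentation to be applied immediately rather than waiting for the next decode; ctx is a hypothetical context value.

```rust
use llama_cpp_2::context::LlamaContext;

/// Schedule a defragmentation pass and apply it right away.
fn defrag_now(ctx: &mut LlamaContext<'_>) {
    ctx.kv_cache_defrag();
    // Applies pending K-shifts and the defragmentation scheduled above.
    ctx.kv_cache_update();
}
```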
pub fn kv_cache_update(&mut self)
Apply the KV cache updates (such as K-shifts, defragmentation, etc.)
pub fn get_kv_cache_token_count(&self) -> i32
Returns the number of tokens in the KV cache (slow, use only for debugging). If a KV cell has multiple sequences assigned to it, it will be counted multiple times.
pub fn new_kv_cache_view(&self, n_max_seq: i32) -> KVCacheView<'_>
Create an empty KV cache view. (use only for debugging purposes)
§Parameters
- n_max_seq - Maximum number of sequences that can exist in a cell. It’s not an error if there are more sequences in a cell than this value; however, they will not be visible in the view’s cells_sequences.
impl LlamaContext<'_>
pub fn sample(&mut self, sampler: Sampler<'_>) -> LlamaToken
👎Deprecated since 0.1.32: this does not scale well with many params and does not allow for changing of orders.
pub fn grammar_accept_token(
    &mut self,
    grammar: &mut LlamaGrammar,
    token: LlamaToken
)
Accept a token into the grammar.
pub fn sample_grammar(
    &mut self,
    llama_token_data_array: &mut LlamaTokenDataArray,
    llama_grammar: &LlamaGrammar
)
Perform grammar sampling.
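A hedged sketch of tying the grammar calls together: constrain the candidates, pick a token, and feed it back so the grammar state advances. The grammar and candidate array are assumed to be built elsewhere, and the module paths in the use statements are assumptions about the crate layout.

```rust
use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::grammar::LlamaGrammar;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;
use llama_cpp_2::token::LlamaToken;

/// Constrain `candidates` to what the grammar allows, sample greedily, and
/// advance the grammar with the chosen token.
fn sample_with_grammar(
    ctx: &mut LlamaContext<'_>,
    grammar: &mut LlamaGrammar,
    mut candidates: LlamaTokenDataArray,
) -> LlamaToken {
    // Zero out candidates the grammar does not allow at this point.
    ctx.sample_grammar(&mut candidates, grammar);
    // Pick the most likely remaining candidate.
    let token = ctx.sample_token_greedy(candidates);
    // Tell the grammar which token was chosen.
    ctx.grammar_accept_token(grammar, token);
    token
}
```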
pub fn sample_temp(
    &self,
    token_data: &mut LlamaTokenDataArray,
    temperature: f32
)
Modify token_data in place using temperature sampling.
§Panics
- temperature is not between 0.0 and 1.0
pub fn sample_token_greedy(&self, token_data: LlamaTokenDataArray) -> LlamaToken
pub fn sample_tail_free(
    &self,
    token_data: &mut LlamaTokenDataArray,
    z: f32,
    min_keep: usize
)
Tail Free Sampling described in Tail-Free-Sampling.
pub fn sample_typical(
    &self,
    token_data: &mut LlamaTokenDataArray,
    p: f32,
    min_keep: usize
)
Locally Typical Sampling implementation described in the paper.
pub fn sample_top_p(
    &self,
    token_data: &mut LlamaTokenDataArray,
    p: f32,
    min_keep: usize
)
Nucleus sampling described in the academic paper The Curious Case of Neural Text Degeneration.
pub fn sample_min_p(
    &self,
    llama_token_data: &mut LlamaTokenDataArray,
    p: f32,
    min_keep: usize
)
Minimum P sampling as described in #3841
pub fn sample_top_k(
    &self,
    token_data: &mut LlamaTokenDataArray,
    k: i32,
    min_keep: usize
)
Top-K sampling described in the academic paper The Curious Case of Neural Text Degeneration.
pub fn sample_token_softmax(&self, token_data: &mut LlamaTokenDataArray)
Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
pub fn sample_repetition_penalty(
    &mut self,
    token_data: &mut LlamaTokenDataArray,
    last_tokens: &[LlamaToken],
    penalty_last_n: usize,
    penalty_repeat: f32,
    penalty_freq: f32,
    penalty_present: f32
)
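The samplers above are typically chained on a single LlamaTokenDataArray before a token is drawn. A hedged sketch of one such chain; the LlamaTokenDataArray::from_iter constructor and the module paths are assumptions based on the crate’s examples and may differ between versions.

```rust
use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::token::data_array::LlamaTokenDataArray;
use llama_cpp_2::token::LlamaToken;

/// Apply top-k, top-p and temperature to the candidates of token `i`, then
/// greedily take the most likely of what remains.
fn pick_token(ctx: &LlamaContext<'_>, i: i32) -> LlamaToken {
    // Build the candidate array from the logits of token `i` in the last batch.
    let mut candidates = LlamaTokenDataArray::from_iter(ctx.candidates_ith(i), false);
    // Keep the 40 most likely tokens, then the nucleus covering 95% of the mass,
    // then rescale with a temperature of 0.8.
    ctx.sample_top_k(&mut candidates, 40, 1);
    ctx.sample_top_p(&mut candidates, 0.95, 1);
    ctx.sample_temp(&mut candidates, 0.8);
    ctx.sample_token_greedy(candidates)
}
```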
impl LlamaContext<'_>
pub fn save_session_file(
    &self,
    path_session: impl AsRef<Path>,
    tokens: &[LlamaToken]
) -> Result<(), SaveSessionError>
Save the current session to a file.
§Parameters
- path_session - The file to save to.
- tokens - The tokens to associate the session with. This should be a prefix of a sequence of tokens that the context has processed, so that the relevant KV caches are already filled.
§Errors
Fails if the path is not valid UTF-8, is not a valid C string, or llama.cpp fails to save the session file.
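A small sketch of saving a processed prompt for later reuse; the file name is arbitrary, the token module path is an assumption, and SaveSessionError is assumed to implement std::error::Error so that ? can box it.

```rust
use std::error::Error;
use std::path::Path;

use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::token::LlamaToken;

/// Save the KV caches for the already-processed `tokens` prefix.
fn save_prompt(ctx: &LlamaContext<'_>, tokens: &[LlamaToken]) -> Result<(), Box<dyn Error>> {
    // `tokens` must be a prefix of what this context has already decoded.
    ctx.save_session_file(Path::new("prompt.session"), tokens)?;
    Ok(())
}
```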
pub fn load_session_file(
    &mut self,
    path_session: impl AsRef<Path>,
    max_tokens: usize
) -> Result<Vec<LlamaToken>, LoadSessionError>
Load a session file into the current context.
You still need to pass the returned tokens to the context for inference to work. What this function buys you is that the KV caches are already filled with the relevant data.
§Parameters
- path_session - The file to load from. It must be a session file from a compatible context, otherwise the function will error.
- max_tokens - The maximum token length of the loaded session. If the session was saved with a longer length, the function will error.
§Errors
Fails if the path is not valid UTF-8, is not a valid C string, or llama.cpp fails to load the session file (e.g. the file does not exist or is not a session file).
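The matching restore, under the same assumptions (hypothetical file name, LoadSessionError assumed to implement std::error::Error).

```rust
use std::error::Error;
use std::path::Path;

use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::token::LlamaToken;

/// Reload the saved KV caches. The returned tokens still have to be treated as
/// the already-processed prompt prefix before generating.
fn load_prompt(ctx: &mut LlamaContext<'_>) -> Result<Vec<LlamaToken>, Box<dyn Error>> {
    // Refuse sessions longer than 2048 tokens (an arbitrary cap for this sketch).
    let tokens = ctx.load_session_file(Path::new("prompt.session"), 2048)?;
    Ok(tokens)
}
```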
pub fn get_state_size(&self) -> usize
Returns the maximum size in bytes of the state (rng, logits, embedding and kv_cache). The actual size will often be smaller after compacting tokens.
pub unsafe fn copy_state_data(&self, dest: *mut u8) -> usize
Copies the state to the specified destination address.
Returns the number of bytes copied
§Safety
Destination needs to have allocated enough memory.
pub unsafe fn set_state_data(&mut self, src: &[u8]) -> usize
Set the state, reading from the specified address. Returns the number of bytes read.
§Safety
help wanted: not entirely sure what the safety requirements are here.
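A hedged sketch of a snapshot/restore pair built on these three methods. The buffer is sized with get_state_size so the documented requirement for copy_state_data is met; the restore side only assumes the snapshot came from a compatible context, since the exact safety contract is noted as unclear.

```rust
use llama_cpp_2::context::LlamaContext;

/// Copy the full context state (rng, logits, embeddings, KV cache) into a buffer.
fn snapshot(ctx: &LlamaContext<'_>) -> Vec<u8> {
    // Allocate the maximum size the state can take.
    let mut buf = vec![0u8; ctx.get_state_size()];
    // SAFETY: `buf` has room for `get_state_size()` bytes, as required above.
    let written = unsafe { ctx.copy_state_data(buf.as_mut_ptr()) };
    buf.truncate(written);
    buf
}

/// Restore a previously captured snapshot into the context.
fn restore(ctx: &mut LlamaContext<'_>, state: &[u8]) {
    // SAFETY: the crate notes the exact requirements are unclear; the
    // conservative assumption here is that `state` came from `snapshot` on a
    // context created from the same model with the same parameters.
    let _bytes_read = unsafe { ctx.set_state_data(state) };
}
```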
impl<'model> LlamaContext<'model>
pub fn decode(&mut self, batch: &mut LlamaBatch) -> Result<(), DecodeError>
pub fn embeddings_seq_ith(&self, i: i32) -> Result<&[f32], EmbeddingsError>
Get the embeddings for the ith sequence in the current context.
§Returns
A slice containing the embeddings for the last decoded batch. The size corresponds to the n_embd parameter of the context’s model.
§Errors
- When the current context was constructed without enabling embeddings.
- If the current model had a pooling type of llama_cpp_sys_2::LLAMA_POOLING_TYPE_NONE.
- If the given sequence index exceeds the max sequence id.
§Panics
n_embd does not fit into a usize.
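A small sketch of collecting one pooled embedding per sequence after a decode, assuming the context was built with embeddings enabled; ctx and n_seq are hypothetical names.

```rust
use llama_cpp_2::context::LlamaContext;

/// Collect the pooled embedding vector (length n_embd) of every sequence in
/// the last decoded batch, skipping sequences that return an error.
fn collect_embeddings(ctx: &LlamaContext<'_>, n_seq: i32) -> Vec<Vec<f32>> {
    (0..n_seq)
        .filter_map(|i| ctx.embeddings_seq_ith(i).ok())
        .map(|e| e.to_vec())
        .collect()
}
```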
pub fn embeddings_ith(&self, i: i32) -> Result<&[f32], EmbeddingsError>
Get the embeddings for the ith token in the current context.
§Returns
A slice containing the embeddings for the last decoded batch of the given token. The size corresponds to the n_embd parameter of the context’s model.
§Errors
- When the current context was constructed without enabling embeddings.
- When the given token didn’t have logits enabled when it was passed.
- If the given token index exceeds the max token id.
§Panics
n_embd does not fit into a usize.
pub fn candidates_ith(
    &self,
    i: i32
) -> impl Iterator<Item = LlamaTokenData> + '_
pub fn get_logits_ith(&self, i: i32) -> &[f32]
Get the logits for the ith token in the context.
§Panics
- i is greater than n_ctx
- n_vocab does not fit into a usize
- logit i is not initialized.
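A hedged sketch of the decode-then-inspect step using decode and get_logits_ith. The llama_batch module path is an assumption, and the batch is assumed to have been filled elsewhere with the token at last_index marked for logits.

```rust
use std::error::Error;

use llama_cpp_2::context::LlamaContext;
use llama_cpp_2::llama_batch::LlamaBatch;

/// Decode a prepared batch and return a copy of the logits of the token at
/// `last_index`.
fn decode_and_read_logits(
    ctx: &mut LlamaContext<'_>,
    batch: &mut LlamaBatch,
    last_index: i32,
) -> Result<Vec<f32>, Box<dyn Error>> {
    // Run the forward pass for the batch.
    ctx.decode(batch)?;
    // Read the raw (unnormalised) logits of the chosen token.
    Ok(ctx.get_logits_ith(last_index).to_vec())
}
```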
pub fn reset_timings(&mut self)
Reset the timings for the context.
pub fn timings(&mut self) -> LlamaTimings
Returns the timings for the context.
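A small sketch of reporting and resetting the timings between runs; printing assumes LlamaTimings implements Display, which is how the crate’s examples use it.

```rust
use llama_cpp_2::context::LlamaContext;

/// Print the timing counters accumulated so far, then start a fresh measurement.
fn report_and_reset(ctx: &mut LlamaContext<'_>) {
    let timings = ctx.timings();
    println!("{timings}");
    ctx.reset_timings();
}
```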