@details Deterministically returns the entire sentence constructed by a beam search.
@param ctx Pointer to the llama_context.
@param callback Invoked for each iteration of the beam_search loop, passing in beams_state.
@param callback_data A pointer that is simply passed back to callback.
@param n_beams Number of beams to use.
@param n_past Number of tokens already evaluated.
@param n_predict Maximum number of tokens to predict. EOS may occur earlier.
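Example (a minimal C++ sketch; it assumes the llama_beams_state / llama_beam_view structs and the callback typedef declared alongside this function, and that the prompt has already been evaluated so n_past is known):

    // Callback: inspect the beams on each iteration; set eob to retire a finished beam.
    static void beam_search_callback(void * callback_data, struct llama_beams_state beams_state) {
        for (size_t i = 0; i < beams_state.n_beams; ++i) {
            llama_beam_view & bv = beams_state.beam_views[i];
            if (!bv.eob && bv.n_tokens > 0 /* && bv.tokens[bv.n_tokens - 1] is EOS */) {
                bv.eob = true;
            }
        }
        (void) callback_data; // could point at user state that collects the generated tokens
    }

    // Deterministic beam search over the next (up to) 64 tokens with 4 beams:
    llama_beam_search(ctx, beam_search_callback, nullptr, /*n_beams=*/4, n_past, /*n_predict=*/64);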
Apply chat template. Inspired by hf apply_chat_template() in Python.
Both "model" and "tmpl" are optional, but at least one is required: "tmpl" has higher precedence than the model's built-in template.
NOTE: This function does not use a jinja parser. It only supports a pre-defined list of templates. See more: https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
@param tmpl A Jinja template to use for this chat. If this is nullptr, the model’s default chat template will be used instead.
@param chat Pointer to a list of multiple llama_chat_message
@param n_msg Number of llama_chat_message in this chat
@param add_ass Whether to end the prompt with the token(s) that indicate the start of an assistant message.
@param buf A buffer to hold the output formatted prompt. The recommended alloc size is 2 * (total number of characters of all messages)
@param length The size of the allocated buffer
@return The total number of bytes of the formatted prompt. If it is larger than the size of the buffer, you may need to re-alloc it and then re-apply the template.
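Example (a C++ sketch of the re-apply pattern described above, assuming llama.h, <string> and <vector> are available; the messages and starting buffer size are illustrative):

    llama_chat_message msgs[] = {
        { "system", "You are a helpful assistant." },
        { "user",   "Hello!"                       },
    };
    const size_t n_msg = sizeof(msgs) / sizeof(msgs[0]);

    std::vector<char> buf(256); // recommended: 2 * (total number of characters of all messages)
    int32_t res = llama_chat_apply_template(model, nullptr /* use the model's default template */,
                                            msgs, n_msg, /*add_ass=*/true, buf.data(), (int32_t) buf.size());
    if (res > (int32_t) buf.size()) {
        buf.resize(res);  // output was truncated: grow the buffer and re-apply the template
        res = llama_chat_apply_template(model, nullptr, msgs, n_msg, true, buf.data(), (int32_t) buf.size());
    }
    std::string prompt(buf.data(), res);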
@details Accepts the sampled token into the grammar
@details Apply classifier-free guidance to the logits as described in academic paper “Stay on topic with Classifier-Free Guidance” https://arxiv.org/abs/2306.17806
@param logits Logits extracted from the original generation context.
@param logits_guidance Logits extracted from a separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context.
@param scale Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
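Example (a sketch, assuming the llama_sample_apply_guidance() entry point that matches these parameters and a second context ctx_guidance prepared as described above):

    float * logits          = llama_get_logits(ctx);           // main context
    float * logits_guidance = llama_get_logits(ctx_guidance);  // negative prompt + same generated and user tokens
    llama_sample_apply_guidance(ctx, logits, logits_guidance, /*scale=*/1.5f); // 1.0f would be a no-op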
@details Dynamic temperature implementation described in the paper https://arxiv.org/abs/2309.02772.
@details Apply constraints from grammar
@details Minimum P sampling as described in https://github.com/ggerganov/llama.cpp/pull/3841
@details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix.
@details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
@details Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
@details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
@details Randomly selects a token from the candidates based on their probabilities using the RNG of ctx.
@details Selects the token with the highest probability.
Does not compute the token probabilities. Use llama_sample_softmax() instead.
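Example (a C++ sketch of building the llama_token_data_array that these samplers operate on, then drawing a token):

    const int n_vocab = llama_n_vocab(model);
    float * logits = llama_get_logits(ctx); // logits for the last evaluated token

    std::vector<llama_token_data> candidates;
    candidates.reserve(n_vocab);
    for (llama_token token_id = 0; token_id < n_vocab; ++token_id) {
        candidates.push_back(llama_token_data{ token_id, logits[token_id], 0.0f });
    }
    llama_token_data_array candidates_p = { candidates.data(), candidates.size(), false };

    llama_sample_softmax(ctx, &candidates_p);                // sort by logit and fill in probabilities
    llama_token id = llama_sample_token(ctx, &candidates_p); // sample using the RNG of ctx
    // or, skipping the softmax: llama_token id = llama_sample_token_greedy(ctx, &candidates_p);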
@details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
@param candidates A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
@param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
@param eta The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
@param m The number of tokens considered in the estimation of s_hat. This is an arbitrary value that is used to calculate s_hat, which in turn helps to calculate the value of k. In the paper, they use m = 100, but you can experiment with different values to see how it affects the performance of the algorithm.
@param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
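Example (a sketch, assuming the llama_sample_token_mirostat() signature that matches these parameters and a candidates array prepared as in the earlier sketch; tau and eta values are illustrative, m = 100 as in the paper):

    const float tau = 5.0f;
    const float eta = 0.1f;
    float mu = 2.0f * tau; // initialize once, reuse across calls; the sampler updates it in place
    llama_token id = llama_sample_token_mirostat(ctx, &candidates_p, tau, eta, /*m=*/100, &mu);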
@details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words.
@param candidates A vector of llama_token_data containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text.
@param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text.
@param eta The learning rate used to update mu based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause mu to be updated more quickly, while a smaller learning rate will result in slower updates.
@param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (2 * tau) and is updated in the algorithm based on the error between the target and observed surprisal.
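The usage pattern matches Mirostat 1.0 minus the m parameter (again assuming the signature that matches these parameters):

    float mu = 2.0f * tau; // initialize once, reuse across calls; the sampler updates it in place
    llama_token id = llama_sample_token_mirostat_v2(ctx, &candidates_p, tau, eta, &mu);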
@details Top-K sampling described in academic paper “The Curious Case of Neural Text Degeneration” https://arxiv.org/abs/1904.09751
@details Nucleus sampling described in academic paper “The Curious Case of Neural Text Degeneration” https://arxiv.org/abs/1904.09751
@details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
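Example (a sketch of a typical truncation chain over a candidates array built as shown earlier, followed by a draw; parameter values are illustrative):

    llama_sample_top_k    (ctx, &candidates_p, /*k=*/40,    /*min_keep=*/1);
    llama_sample_tail_free(ctx, &candidates_p, /*z=*/1.0f,  /*min_keep=*/1); // 1.0f disables TFS
    llama_sample_typical  (ctx, &candidates_p, /*p=*/1.0f,  /*min_keep=*/1); // 1.0f disables typical sampling
    llama_sample_top_p    (ctx, &candidates_p, /*p=*/0.95f, /*min_keep=*/1);
    llama_sample_temp     (ctx, &candidates_p, /*temp=*/0.8f);
    llama_token id = llama_sample_token(ctx, &candidates_p);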
@details Build a split GGUF final path for this chunk.
llama_split_path(split_path, sizeof(split_path), "/models/ggml-model-q4_0", 2, 4) => split_path = "/models/ggml-model-q4_0-00002-of-00004.gguf"
@details Extract the path prefix from the split_path if and only if the split_no and split_count match.
llama_split_prefix(split_prefix, 64, "/models/ggml-model-q4_0-00002-of-00004.gguf", 2, 4) => split_prefix = "/models/ggml-model-q4_0"
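The two examples above, written out as calls (buffer sizes are illustrative):

    char split_path[512];
    llama_split_path(split_path, sizeof(split_path), "/models/ggml-model-q4_0", 2, 4);
    // split_path == "/models/ggml-model-q4_0-00002-of-00004.gguf"

    char split_prefix[64];
    llama_split_prefix(split_prefix, sizeof(split_prefix), "/models/ggml-model-q4_0-00002-of-00004.gguf", 2, 4);
    // split_prefix == "/models/ggml-model-q4_0"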
@details Convert the provided text into tokens.
@param tokens The tokens pointer must be large enough to hold the resulting tokens.
@return Returns the number of tokens on success, no more than n_tokens_max
@return Returns a negative number on failure - the number of tokens that would have been returned
@param parse_special Allow tokenizing special and/or control tokens which otherwise are not exposed and treated as plaintext. Does not insert a leading space.
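Example (a C++ sketch of the negative-return pattern described above, assuming the llama_tokenize() overload that takes the model and an add_special flag alongside parse_special):

    const std::string text = "Hello, world!";
    std::vector<llama_token> tokens(16); // initial guess; resized below if too small
    int32_t n = llama_tokenize(model, text.c_str(), (int32_t) text.size(),
                               tokens.data(), (int32_t) tokens.size(),
                               /*add_special=*/true, /*parse_special=*/false);
    if (n < 0) {
        tokens.resize(-n); // -n is the number of tokens that would have been returned
        n = llama_tokenize(model, text.c_str(), (int32_t) text.size(),
                           tokens.data(), (int32_t) tokens.size(), true, false);
    }
    tokens.resize(n);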