Crate llama_cpp_sys_2
See llama-cpp-2 for a documented and safe API.
Structs
Constants
Statics
Functions
- feof⚠
- getc⚠
- getw⚠
- llama_beam_search⚠ @details Deterministically returns entire sentence constructed by a beam search. @param ctx Pointer to the llama_context. @param callback Invoked for each iteration of the beam_search loop, passing in beams_state. @param callback_data A pointer that is simply passed back to callback. @param n_beams Number of beams to use. @param n_past Number of tokens already evaluated. @param n_predict Maximum number of tokens to predict. EOS may occur earlier.
- llama_grammar_accept_token⚠ @details Accepts the sampled token into the grammar
- llama_sample_classifier_free_guidance⚠ @details Apply classifier-free guidance to the logits as described in academic paper “Stay on topic with Classifier-Free Guidance” https://arxiv.org/abs/2306.17806 @param candidates A vector of `llama_token_data` containing the candidate tokens; the logits must be directly extracted from the original generation context without being sorted. @param guidance_ctx A separate context from the same model. Other than a negative prompt at the beginning, it should have all generated and user input tokens copied from the main context. @param scale Guidance strength. 1.0f means no guidance. Higher values mean stronger guidance.
- llama_sample_grammar⚠ @details Apply constraints from grammar
- llama_sample_min_p⚠ @details Minimum P sampling as described in https://github.com/ggerganov/llama.cpp/pull/3841
- llama_sample_repetition_penalties⚠ @details Repetition penalty described in CTRL academic paper https://arxiv.org/abs/1909.05858, with negative logit fix. @details Frequency and presence penalties described in OpenAI API https://platform.openai.com/docs/api-reference/parameter-details.
- llama_sample_softmax⚠ @details Sorts candidate tokens by their logits in descending order and calculates probabilities based on the logits.
- llama_sample_tail_free⚠ @details Tail Free Sampling described in https://www.trentonbricken.com/Tail-Free-Sampling/.
- llama_sample_token⚠ @details Randomly selects a token from the candidates based on their probabilities.
- llama_sample_token_greedy⚠ @details Selects the token with the highest probability. Does not compute the token probabilities; use llama_sample_softmax() if you need them.
- llama_sample_token_mirostat⚠ @details Mirostat 1.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words. @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text. @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text. @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates. @param m The number of tokens considered in the estimation of `s_hat`. This is an arbitrary value that is used to calculate `s_hat`, which in turn helps to calculate the value of `k`. In the paper, they use `m = 100`, but you can experiment with different values to see how it affects the performance of the algorithm. @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal.
- llama_sample_token_mirostat_v2⚠ @details Mirostat 2.0 algorithm described in the paper https://arxiv.org/abs/2007.14966. Uses tokens instead of words. @param candidates A vector of `llama_token_data` containing the candidate tokens, their probabilities (p), and log-odds (logit) for the current position in the generated text. @param tau The target cross-entropy (or surprise) value you want to achieve for the generated text. A higher value corresponds to more surprising or less predictable text, while a lower value corresponds to less surprising or more predictable text. @param eta The learning rate used to update `mu` based on the error between the target and observed surprisal of the sampled word. A larger learning rate will cause `mu` to be updated more quickly, while a smaller learning rate will result in slower updates. @param mu Maximum cross-entropy. This value is initialized to be twice the target cross-entropy (`2 * tau`) and is updated in the algorithm based on the error between the target and observed surprisal. A sketch of initializing and carrying `mu` across sampling steps appears after this list.
- llama_sample_top_k⚠ @details Top-K sampling described in academic paper “The Curious Case of Neural Text Degeneration” https://arxiv.org/abs/1904.09751 (a sampling-chain sketch appears after this list)
- llama_sample_top_p⚠ @details Nucleus sampling described in academic paper “The Curious Case of Neural Text Degeneration” https://arxiv.org/abs/1904.09751
- llama_sample_typical⚠ @details Locally Typical Sampling implementation described in the paper https://arxiv.org/abs/2202.00666.
- llama_tokenize⚠ @details Convert the provided text into tokens. @param tokens The tokens pointer must be large enough to hold the resulting tokens. @return Returns the number of tokens on success, no more than n_max_tokens. @return Returns a negative number on failure: minus the number of tokens that would have been returned. @param special Allow tokenizing special and/or control tokens which otherwise are not exposed and treated as plaintext. Does not insert a leading space. A two-pass usage sketch appears after this list.
- putc⚠
- puts⚠
- putw⚠
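The samplers above all operate on a llama_token_data_array built from the model's raw, unsorted logits. The following is a minimal sketch of a top-k then nucleus sampling chain; the helper name sample_next is hypothetical, and the field layout of llama_token_data / llama_token_data_array and the exact bindgen signatures are assumed from the llama.h this crate binds.

```rust
use llama_cpp_sys_2::{
    llama_context, llama_get_logits, llama_sample_softmax, llama_sample_token,
    llama_sample_top_k, llama_sample_top_p, llama_token, llama_token_data,
    llama_token_data_array,
};

/// Hypothetical helper (not part of this crate): sample the next token from the
/// logits most recently produced by `ctx`. `n_vocab` is the vocabulary size,
/// e.g. obtained from llama_n_vocab on the model.
unsafe fn sample_next(ctx: *mut llama_context, n_vocab: i32) -> llama_token {
    // Build the candidate array from the raw, unsorted logits, as the
    // sampler docs above require.
    let logits = llama_get_logits(ctx);
    let mut candidates: Vec<llama_token_data> = (0..n_vocab)
        .map(|id| llama_token_data {
            id,
            logit: *logits.offset(id as isize),
            p: 0.0,
        })
        .collect();
    let mut array = llama_token_data_array {
        data: candidates.as_mut_ptr(),
        size: candidates.len(),
        sorted: false,
    };

    // Narrow the distribution, then draw from what remains.
    llama_sample_top_k(ctx, &mut array, 40, 1); // keep the 40 most likely tokens
    llama_sample_top_p(ctx, &mut array, 0.95, 1); // nucleus sampling at p = 0.95
    llama_sample_softmax(ctx, &mut array); // sort by logit, compute probabilities
    llama_sample_token(ctx, &mut array) // weighted random pick
}
```

For greedy decoding, the last two calls would be replaced by a single llama_sample_token_greedy call.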
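Likewise, a minimal Mirostat 2.0 sketch, mainly to show the `mu` state being initialized to `2 * tau` as described above. The wrapper name and the tau/eta values are illustrative, not part of the crate.

```rust
use llama_cpp_sys_2::{
    llama_context, llama_sample_token_mirostat_v2, llama_token, llama_token_data_array,
};

/// Hypothetical wrapper: one Mirostat 2.0 sampling step. `mu` is per-sequence
/// state that must persist across calls; per the docs above it starts at 2 * tau.
unsafe fn mirostat_v2_step(
    ctx: *mut llama_context,
    candidates: *mut llama_token_data_array,
    mu: &mut f32,
) -> llama_token {
    let tau = 5.0_f32; // target surprise; higher => less predictable text
    let eta = 0.1_f32; // learning rate for the `mu` update
    llama_sample_token_mirostat_v2(ctx, candidates, tau, eta, mu)
}

// Initialization, once per sequence, before the first step:
// let mut mu = 2.0 * 5.0_f32; // mu = 2 * tau
```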
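Finally, the negative-return convention of llama_tokenize suggests a two-pass call pattern: if the buffer is too small, the magnitude of the negative result is the size to retry with. This sketch assumes the bindgen signature (model, text, text_len, tokens, n_max_tokens, add_bos, special); the helper name tokenize is hypothetical.

```rust
use std::os::raw::c_char;

use llama_cpp_sys_2::{llama_model, llama_token, llama_tokenize};

/// Hypothetical helper showing the two-pass tokenization pattern.
unsafe fn tokenize(model: *const llama_model, text: &str, add_bos: bool) -> Vec<llama_token> {
    let mut tokens: Vec<llama_token> = vec![0; text.len() + 2]; // rough upper bound
    let mut n = llama_tokenize(
        model,
        text.as_ptr() as *const c_char,
        text.len() as i32,
        tokens.as_mut_ptr(),
        tokens.len() as i32,
        add_bos,
        false, // special: treat special/control tokens as plain text
    );
    if n < 0 {
        // Buffer was too small; retry with exactly the reported size.
        tokens.resize((-n) as usize, 0);
        n = llama_tokenize(
            model,
            text.as_ptr() as *const c_char,
            text.len() as i32,
            tokens.as_mut_ptr(),
            tokens.len() as i32,
            add_bos,
            false,
        );
    }
    tokens.truncate(n as usize);
    tokens
}
```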