Struct aleph_alpha_client::Stopping
Controls the conditions under which the language model stops generating text.
Fields
maximum_tokens: u32
The maximum number of tokens to be generated. Completion will terminate after the maximum number of tokens is reached. Increase this value to allow for longer outputs. A text is split into tokens; usually there are more tokens than words. The total number of tokens available for prompt and maximum_tokens combined depends on the model.
stop_sequences: &'a [&'a str]
List of strings which will stop generation if they are generated. Stop sequences are helpful in structured texts. For example, in a question answering scenario a text may consist of lines starting with either “Question: “ or “Answer: “ (alternating). After producing an answer, the model is likely to generate “Question: “ next. “Question: “ may therefore be used as a stop sequence, so the model does not generate further questions and text generation is restricted to the answers.
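As a rough sketch (assuming the fields listed above are public, as the Fields section suggests), a value that stops after 64 generated tokens or at the first occurrence of “Question: “ could be constructed like this:

let stopping = Stopping {
    // Terminate after at most 64 generated tokens.
    maximum_tokens: 64,
    // Also terminate as soon as the model emits this sequence.
    stop_sequences: &["Question: "],
};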
Implementations
impl<'a> Stopping<'a>
pub fn from_maximum_tokens(maximum_tokens: u32) -> Self
Only stop once the model generates an end-of-text token, or the maximum number of tokens is reached.
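A minimal usage sketch of this constructor, stopping only at end of text or after 64 generated tokens (no stop sequences):

let stopping = Stopping::from_maximum_tokens(64);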