pub struct LlmOptions {
pub completion_first: Option<Option<bool>>,
pub frequency_penalty: Option<Option<f32>>,
pub max_tokens: Option<Option<i32>>,
pub presence_penalty: Option<Option<f32>>,
pub stop_tokens: Option<Option<Vec<String>>>,
pub stream_response: Option<Option<bool>>,
pub system_prompt: Option<Option<String>>,
pub temperature: Option<Option<f32>>,
}
LLM options to use for the completion. If not specified, these default to the dataset’s LLM options.
Fields
completion_first: Option<Option<bool>>
Completion first decides whether the stream emits the completion response or the chunks first. Default is false. Keep in mind that || is used to separate the chunks from the completion response. If || appears in the completion itself, you may want to split on ||{ instead.
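The boundary handling is easy to get wrong, so here is a minimal sketch of separating the two parts once the whole streamed body has been collected into a String. The helper below is illustrative and not part of this crate:

// Illustrative helper, not part of this crate. Prefers the "||{" boundary
// (the '{' opens the chunk JSON) so that a "||" occurring inside the
// completion text is not mistaken for the separator; the '{' stays with
// the chunk half of the split.
fn split_on_separator(body: &str) -> (&str, &str) {
    if let Some(idx) = body.find("||{") {
        (&body[..idx], &body[idx + 2..])
    } else {
        // Fall back to the plain "||" separator.
        body.split_once("||").unwrap_or((body, ""))
    }
}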
frequency_penalty: Option<Option<f32>>
Frequency penalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. Default is 0.7.
max_tokens: Option<Option<i32>>
The maximum number of tokens to generate in the chat completion. Default is None.
presence_penalty: Option<Option<f32>>
Presence penalty is a number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. Default is 0.7.
stop_tokens: Option<Option<Vec<String>>>
Stop tokens are up to 4 sequences where the API will stop generating further tokens. Default is None.
stream_response: Option<Option<bool>>
Whether or not to stream the response. If set to true or omitted, the response will be a stream. If set to false, the response will be a normal JSON response. Default is true.
system_prompt: Option<Option<String>>
Optionally override the system prompt set in the dataset’s server settings.
temperature: Option<Option<f32>>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. Default is 0.5.
Implementations
impl LlmOptions
pub fn new() -> LlmOptions
LLM options to use for the completion. If not specified, these default to the dataset’s LLM options.
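A brief usage sketch, assuming new() initializes every field to None (as the field types suggest); the nested Option presumably distinguishes an omitted field from an explicitly null one, and the values below are purely illustrative:

// Start from the defaults and override a few options.
let mut options = LlmOptions::new();
options.temperature = Some(Some(0.2)); // more focused, deterministic output
options.max_tokens = Some(Some(512)); // cap the completion length
options.stream_response = Some(Some(false)); // plain JSON instead of a stream
options.stop_tokens = Some(Some(vec!["STOP".to_string()]));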
Trait Implementations
impl Clone for LlmOptions
fn clone(&self) -> LlmOptions
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.