pub struct LanguageModelInput {
pub system_prompt: Option<String>,
pub messages: Vec<Message>,
pub tools: Option<Vec<Tool>>,
pub tool_choice: Option<ToolChoiceOption>,
pub response_format: Option<ResponseFormatOption>,
pub max_tokens: Option<u32>,
pub temperature: Option<f64>,
pub top_p: Option<f64>,
pub top_k: Option<i32>,
pub presence_penalty: Option<f64>,
pub frequency_penalty: Option<f64>,
pub seed: Option<i64>,
pub modalities: Option<Vec<Modality>>,
pub metadata: Option<HashMap<String, String>>,
pub audio: Option<AudioOptions>,
pub reasoning: Option<ReasoningOptions>,
pub extra: Option<LanguageModelInputExtra>,
}
Defines the input parameters for the language model completion.
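For illustration, a minimal construction might look like the sketch below. This is not taken from the crate's own examples: the conversation is left empty (real code would push crate `Message` values), and every optional field is simply set to `None`.

// Sketch only: assumes the crate's types (`LanguageModelInput`, `Message`, …) are in scope.
let mut input = LanguageModelInput {
    system_prompt: Some("You are a concise assistant.".to_string()),
    messages: vec![],        // populate with `Message` values in real use
    tools: None,
    tool_choice: None,
    response_format: None,
    max_tokens: Some(1024),
    temperature: None,
    top_p: None,
    top_k: None,
    presence_penalty: None,
    frequency_penalty: None,
    seed: None,
    modalities: None,
    metadata: None,
    audio: None,
    reasoning: None,
    extra: None,
};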
Fields
system_prompt: Option<String>
A system prompt is a way of providing context and instructions to the model.

messages: Vec<Message>
A list of messages comprising the conversation so far.

tools: Option<Vec<Tool>>
Definitions of tools that the model may use.

tool_choice: Option<ToolChoiceOption>

response_format: Option<ResponseFormatOption>

max_tokens: Option<u32>
The maximum number of tokens that can be generated in the chat completion.

temperature: Option<f64>
Amount of randomness injected into the response. Ranges from 0.0 to 1.0.

top_p: Option<f64>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. Ranges from 0.0 to 1.0.

top_k: Option<i32>
Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Must be a non-negative integer.

presence_penalty: Option<f64>
Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

frequency_penalty: Option<f64>
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

seed: Option<i64>
The seed (integer), if set and supported by the model, to enable deterministic results.

modalities: Option<Vec<Modality>>
The modalities that the model should support.

metadata: Option<HashMap<String, String>>
A set of key/value pairs that store additional information about the request. This is forwarded to the model provider if supported.

audio: Option<AudioOptions>
Options for audio generation.

reasoning: Option<ReasoningOptions>
Options for reasoning generation.

extra: Option<LanguageModelInputExtra>
Extra options that the model may support.
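Because the sampling-related fields are plain public Options, a request can also be tuned in place after construction. Continuing the sketch above, with illustrative values that are not recommendations:

// Sketch: tune sampling on the `input` built earlier (fields are public).
// `HashMap::from` requires `use std::collections::HashMap;`.
input.temperature = Some(0.7);   // 0.0..=1.0 per the field docs
input.top_p = Some(0.9);         // nucleus sampling mass, 0.0..=1.0
input.top_k = Some(40);          // non-negative integer
input.seed = Some(42);           // deterministic results if the model supports it
input.metadata = Some(HashMap::from([
    ("request_id".to_string(), "example-123".to_string()),
]));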
Trait Implementations
impl Clone for LanguageModelInput

fn clone(&self) -> LanguageModelInput
1.0.0 · fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.
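Since the struct is Clone, caller code can derive per-run variants from one base request; a hypothetical sketch, continuing from the `input` above:

// Sketch: duplicate a base request and vary one field per run.
let mut low_temp = input.clone();
low_temp.temperature = Some(0.2);

let mut scratch = input.clone();
scratch.clone_from(&low_temp);   // copy-assigns from `low_temp`, reusing `scratch`'s buffers where possible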