pub struct GoogleModel {
pub messages: Option<Vec<OpenAiMessage>>,
pub tools: Option<Vec<AnyscaleModelToolsInner>>,
pub tool_ids: Option<Vec<String>>,
pub knowledge_base: Option<CreateCustomKnowledgeBaseDto>,
pub knowledge_base_id: Option<String>,
pub model: ModelTrue,
pub provider: ProviderTrue,
pub realtime_config: Option<GoogleRealtimeConfig>,
pub temperature: Option<f64>,
pub max_tokens: Option<f64>,
pub emotion_recognition_enabled: Option<bool>,
pub num_fast_turns: Option<f64>,
}
Fields
messages: Option<Vec<OpenAiMessage>>
This is the starting state for the conversation.
tools: Option<Vec<AnyscaleModelToolsInner>>
These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.
tool_ids: Option<Vec<String>>
These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.
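For illustration, a minimal sketch of using both together; the ModelTrue and ProviderTrue variant names and the tool ID below are hypothetical placeholders, not values confirmed by this crate's docs:

let mut model = GoogleModel::new(ModelTrue::Gemini15Flash, ProviderTrue::Google);
// Reference existing, saved tools by ID (hypothetical ID).
model.tool_ids = Some(vec!["tool_abc123".to_string()]);
// Transient tools defined inline would go in `model.tools` as
// `AnyscaleModelToolsInner` values.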
knowledge_base: Option<CreateCustomKnowledgeBaseDto>
This is the knowledge base the model will use.
knowledge_base_id: Option<String>
This is the ID of the knowledge base the model will use.
model: ModelTrue
This is the Google model that will be used.
provider: ProviderTrue
This is the provider that will be used.
realtime_config: Option<GoogleRealtimeConfig>
This is the session configuration for the Gemini Flash 2.0 Multimodal Live API. Only applicable if the model gemini-2.0-flash-realtime-exp is selected.
temperature: Option<f64>
This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.
max_tokens: Option<f64>
This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.
emotion_recognition_enabled: Option<bool>
This determines whether we detect the user’s emotion while they speak and send it as additional info to the model. Default is false because the model is usually good at understanding the user’s emotion from text. @default false
num_fast_turns: Option<f64>
This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0
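Taken together, a hedged sketch of tuning these generation settings (the enum variant names are assumptions):

let mut model = GoogleModel::new(ModelTrue::Gemini15Pro, ProviderTrue::Google);
model.temperature = Some(0.4);                  // trade some caching latency for variety
model.max_tokens = Some(150.0);                 // cap each turn below the 250-token default
model.emotion_recognition_enabled = Some(true); // attach detected emotion as extra info
model.num_fast_turns = Some(2.0);               // first two turns on a faster sibling model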
Implementations
impl GoogleModel
pub fn new(model: ModelTrue, provider: ProviderTrue) -> GoogleModel
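In the usual generated-client pattern, new takes only the two required fields and presumably leaves every Option field as None; a construction sketch under that assumption (variant names hypothetical):

let model = GoogleModel::new(ModelTrue::Gemini15Flash, ProviderTrue::Google);
assert!(model.messages.is_none()); // optional fields start unset (assumed)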
Trait Implementations
impl Clone for GoogleModel
fn clone(&self) -> GoogleModel
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
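Because GoogleModel is Clone, a configured value can serve as a template; a small sketch (variant names hypothetical):

let base = GoogleModel::new(ModelTrue::Gemini15Pro, ProviderTrue::Google);
let mut creative = base.clone();  // independent copy of the base config
creative.temperature = Some(0.9); // tweak the copy without touching `base`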