pub struct AnthropicModel {
pub messages: Option<Vec<OpenAiMessage>>,
pub tools: Option<Vec<AnyscaleModelToolsInner>>,
pub tool_ids: Option<Vec<String>>,
pub knowledge_base: Option<CreateCustomKnowledgeBaseDto>,
pub knowledge_base_id: Option<String>,
pub model: ModelTrue,
pub provider: ProviderTrue,
pub thinking: Option<AnthropicThinkingConfig>,
pub temperature: Option<f64>,
pub max_tokens: Option<f64>,
pub emotion_recognition_enabled: Option<bool>,
pub num_fast_turns: Option<f64>,
}
Fields
messages: Option<Vec<OpenAiMessage>>
This is the starting state for the conversation.
tools: Option<Vec<AnyscaleModelToolsInner>>
These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.
tool_ids: Option<Vec<String>>
These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.
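A quick sketch of how the two fields combine on one request; the helper function and the tool ID below are illustrative, not part of this crate:

// Sketch: reference pre-existing tools by ID. Transient tools could be
// pushed into `tools` on the same request, since the API accepts both.
fn attach_existing_tools(model: &mut AnthropicModel) {
    model.tool_ids = Some(vec![
        "your-tool-id".to_string(), // placeholder, not a real ID
    ]);
}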
knowledge_base: Option<CreateCustomKnowledgeBaseDto>
This is the knowledge base the model will use during the call.
knowledge_base_id: Option<String>
This is the ID of the knowledge base the model will use.
model: ModelTrue
The specific Anthropic/Claude model that will be used.
provider: ProviderTrue
The provider identifier for Anthropic.
thinking: Option<AnthropicThinkingConfig>
Optional configuration for Anthropic's thinking feature. Only applicable to the claude-3-7-sonnet-20250219 model. If provided, maxTokens must be greater than thinking.budgetTokens.
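A minimal validation sketch for that constraint; the budget_tokens field name on AnthropicThinkingConfig is an assumption, since its definition is not shown on this page:

// Returns true when `max_tokens` exceeds the thinking budget (or when
// either field is unset, in which case there is nothing to check).
fn thinking_budget_ok(model: &AnthropicModel) -> bool {
    match (&model.thinking, model.max_tokens) {
        // `budget_tokens` is an assumed field name on AnthropicThinkingConfig.
        (Some(thinking), Some(max_tokens)) => max_tokens > thinking.budget_tokens,
        _ => true,
    }
}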
temperature: Option<f64>
This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.
max_tokens: Option<f64>
This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.
emotion_recognition_enabled: Option<bool>
This determines whether we detect the user's emotion while they speak and send it to the model as additional information. Defaults to false, because the model is usually good at understanding the user's emotion from the text alone.
num_fast_turns: Option<f64>
This sets how many turns at the start of the conversation use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0.
Implementations
impl AnthropicModel
pub fn new(model: ModelTrue, provider: ProviderTrue) -> AnthropicModel
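A minimal construction sketch, assuming the generated new leaves every optional field as None (the usual OpenAPI-generator convention) and that the enum variants named below exist; neither is confirmed by this page:

// use vapi_client::models::{AnthropicModel, ModelTrue, ProviderTrue};
// ^ adjust the path to this crate's actual module layout.

let mut model = AnthropicModel::new(
    ModelTrue::Claude37Sonnet20250219, // assumed variant name
    ProviderTrue::Anthropic,           // assumed variant name
);
// Optional fields can be set after construction:
model.temperature = Some(0.7);
model.max_tokens = Some(500.0);
model.num_fast_turns = Some(2.0);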
Trait Implementations
impl Clone for AnthropicModel
fn clone(&self) -> AnthropicModel
const fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.