pub struct AnyscaleModel {
pub messages: Option<Vec<OpenAiMessage>>,
pub tools: Option<Vec<AnyscaleModelToolsInner>>,
pub tool_ids: Option<Vec<String>>,
pub knowledge_base: Option<AnyscaleModelKnowledgeBase>,
pub knowledge_base_id: Option<String>,
pub provider: Provider,
pub model: String,
pub temperature: Option<f64>,
pub max_tokens: Option<f64>,
pub emotion_recognition_enabled: Option<bool>,
pub num_fast_turns: Option<f64>,
}
Fields

messages: Option<Vec<OpenAiMessage>>
This is the starting state for the conversation.

tools: Option<Vec<AnyscaleModelToolsInner>>
These are the tools that the assistant can use during the call. To use existing tools, use toolIds. Both tools and toolIds can be used together.

tool_ids: Option<Vec<String>>
These are the tools that the assistant can use during the call. To use transient tools, use tools. Both tools and toolIds can be used together.

knowledge_base: Option<AnyscaleModelKnowledgeBase>

knowledge_base_id: Option<String>
This is the ID of the knowledge base the model will use.

provider: Provider

model: String
This is the name of the model. Ex. cognitivecomputations/dolphin-mixtral-8x7b

temperature: Option<f64>
This is the temperature that will be used for calls. Default is 0 to leverage caching for lower latency.

max_tokens: Option<f64>
This is the max number of tokens that the assistant will be allowed to generate in each turn of the conversation. Default is 250.
emotion_recognition_enabled: Option<bool>
This determines whether we detect the user's emotion while they speak and send it as additional info to the model. Default is false because the model is usually good at understanding the user's emotion from text. @default false

num_fast_turns: Option<f64>
This sets how many turns at the start of the conversation to use a smaller, faster model from the same provider before switching to the primary model. For example, gpt-3.5-turbo if the provider is openai. Default is 0. @default 0
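Since temperature and max_tokens are Option fields with documented defaults (0 and 250), a caller typically resolves them with unwrap_or. A minimal std-only sketch of that fallback logic; the helper function here is hypothetical and not part of the SDK:

```rust
// Hypothetical helper showing how the documented defaults apply when
// the Option fields are left unset: temperature falls back to 0,
// max_tokens to 250. Not part of the generated SDK.
fn effective_settings(temperature: Option<f64>, max_tokens: Option<f64>) -> (f64, f64) {
    (temperature.unwrap_or(0.0), max_tokens.unwrap_or(250.0))
}

fn main() {
    // Unset fields fall back to the documented defaults.
    assert_eq!(effective_settings(None, None), (0.0, 250.0));
    // An explicit value overrides the default.
    assert_eq!(effective_settings(Some(0.7), Some(500.0)), (0.7, 500.0));
    println!("defaults ok");
}
```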
Implementations

impl AnyscaleModel
pub fn new(provider: Provider, model: String) -> AnyscaleModel
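The new constructor takes only the required provider and model; every Option field starts as None and can be filled in afterwards. A simplified, self-contained sketch of that pattern, where MiniModel is an illustrative stand-in for the generated struct (the real type also carries provider and the other fields listed above):

```rust
// Simplified stand-in for the generated model type: required fields go
// through `new`, optional fields start as None via Default. Illustrative
// only; not the SDK's actual definition.
#[derive(Debug, Clone, Default, PartialEq)]
struct MiniModel {
    model: String, // required, e.g. "cognitivecomputations/dolphin-mixtral-8x7b"
    temperature: Option<f64>,
    max_tokens: Option<f64>,
    tool_ids: Option<Vec<String>>,
}

impl MiniModel {
    fn new(model: String) -> MiniModel {
        MiniModel { model, ..Default::default() }
    }
}

fn main() {
    let mut cfg = MiniModel::new("cognitivecomputations/dolphin-mixtral-8x7b".to_string());
    assert_eq!(cfg.temperature, None); // optional fields start unset
    cfg.max_tokens = Some(250.0);      // and can be set afterwards
    println!("{:?}", cfg);
}
```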
Trait Implementations

impl Clone for AnyscaleModel

fn clone(&self) -> AnyscaleModel

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl Debug for AnyscaleModel
impl Default for AnyscaleModel

fn default() -> AnyscaleModel
impl<'de> Deserialize<'de> for AnyscaleModel

fn deserialize<__D>(__deserializer: __D) -> Result<Self, __D::Error>
where
    __D: Deserializer<'de>,
impl OpenApi for AnyscaleModel

fn openapi() -> OpenApi
Returns the openapi::OpenApi instance, which can be parsed with serde or served via an OpenAPI visualization tool such as Swagger UI.