pub struct AssistantOverrides {
pub transcriber: Option<CreateAssistantDtoTranscriber>,
pub model: Option<CreateAssistantDtoModel>,
pub voice: Option<CreateAssistantDtoVoice>,
pub first_message: Option<String>,
pub first_message_interruptions_enabled: Option<bool>,
pub first_message_mode: Option<FirstMessageModeTrue>,
pub voicemail_detection: Option<CreateAssistantDtoVoicemailDetection>,
pub client_messages: Option<Vec<ClientMessagesTrue>>,
pub server_messages: Option<Vec<ServerMessagesTrue>>,
pub silence_timeout_seconds: Option<f64>,
pub max_duration_seconds: Option<f64>,
pub background_sound: Option<CreateAssistantDtoBackgroundSound>,
pub background_denoising_enabled: Option<bool>,
pub model_output_in_messages_enabled: Option<bool>,
pub transport_configurations: Option<Vec<CreateAssistantDtoTransportConfigurationsInner>>,
pub observability_plan: Option<LangfuseObservabilityPlan>,
pub credentials: Option<Vec<WorkflowUserEditableCredentialsInner>>,
pub hooks: Option<Vec<CreateAssistantDtoHooksInner>>,
pub variable_values: Option<Value>,
pub name: Option<String>,
pub voicemail_message: Option<String>,
pub end_call_message: Option<String>,
pub end_call_phrases: Option<Vec<String>>,
pub compliance_plan: Option<CompliancePlan>,
pub metadata: Option<Value>,
pub background_speech_denoising_plan: Option<BackgroundSpeechDenoisingPlan>,
pub analysis_plan: Option<AnalysisPlan>,
pub artifact_plan: Option<ArtifactPlan>,
pub message_plan: Option<MessagePlan>,
pub start_speaking_plan: Option<StartSpeakingPlan>,
pub stop_speaking_plan: Option<StopSpeakingPlan>,
pub monitor_plan: Option<MonitorPlan>,
pub credential_ids: Option<Vec<String>>,
pub server: Option<Server>,
pub keypad_input_plan: Option<KeypadInputPlan>,
}
Fields
transcriber: Option<CreateAssistantDtoTranscriber>
model: Option<CreateAssistantDtoModel>
voice: Option<CreateAssistantDtoVoice>
first_message: Option<String>
This is the first message that the assistant will say. This can also be a URL to a containerized audio file (mp3, wav, etc.). If unspecified, the assistant will wait for the user to speak and use the model to respond once they do.
first_message_interruptions_enabled: Option<bool>
first_message_mode: Option<FirstMessageModeTrue>
This is the mode for the first message. Default is ‘assistant-speaks-first’. Use:
- ‘assistant-speaks-first’ to have the assistant speak first.
- ‘assistant-waits-for-user’ to have the assistant wait for the user to speak first.
- ‘assistant-speaks-first-with-model-generated-message’ to have the assistant speak first with a message generated by the model based on the conversation state (assistant.model.messages at call start, call.messages at squad transfer points).
@default ‘assistant-speaks-first’
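The crate's actual `FirstMessageModeTrue` enum is not shown here, but the three documented modes can be sketched with a hypothetical stand-in enum (names are illustrative, not the crate's):

```rust
// Hypothetical stand-in for the three documented first-message modes;
// the real `FirstMessageModeTrue` type may use different variant names.
#[derive(Debug, Clone, Copy, PartialEq)]
enum FirstMessageMode {
    AssistantSpeaksFirst,
    AssistantWaitsForUser,
    AssistantSpeaksFirstWithModelGeneratedMessage,
}

impl Default for FirstMessageMode {
    // Mirrors the documented `@default 'assistant-speaks-first'`.
    fn default() -> Self {
        FirstMessageMode::AssistantSpeaksFirst
    }
}

fn main() {
    let mode = FirstMessageMode::default();
    assert_eq!(mode, FirstMessageMode::AssistantSpeaksFirst);
}
```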
voicemail_detection: Option<CreateAssistantDtoVoicemailDetection>
client_messages: Option<Vec<ClientMessagesTrue>>
These are the messages that will be sent to your Client SDKs. Default is conversation-update,function-call,hang,model-output,speech-update,status-update,transfer-update,transcript,tool-calls,user-interrupted,voice-input,workflow.node.started. You can check the shape of the messages in ClientMessage schema.
server_messages: Option<Vec<ServerMessagesTrue>>
These are the messages that will be sent to your Server URL. Default is conversation-update,end-of-call-report,function-call,hang,speech-update,status-update,tool-calls,transfer-destination-request,user-interrupted. You can check the shape of the messages in ServerMessage schema.
silence_timeout_seconds: Option<f64>
How many seconds of silence to wait before ending the call. Defaults to 30. @default 30
max_duration_seconds: Option<f64>
This is the maximum number of seconds that the call will last. When the call reaches this duration, it will be ended. @default 600 (10 minutes)
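Since every field is a `pub Option`, overriding the two timeout values is plain field assignment. A minimal sketch, using a hypothetical stand-in struct with just these two fields (the real struct has 35, all set the same way):

```rust
// Hypothetical stand-in for the two timeout fields of `AssistantOverrides`;
// on the real struct you would assign them the same way after `new()`.
#[derive(Debug, Default)]
struct TimeoutOverrides {
    silence_timeout_seconds: Option<f64>,
    max_duration_seconds: Option<f64>,
}

fn main() {
    let mut overrides = TimeoutOverrides::default();
    // End the call after 20 seconds of silence instead of the default 30.
    overrides.silence_timeout_seconds = Some(20.0);
    // Cap the call at 5 minutes instead of the default 600 seconds.
    overrides.max_duration_seconds = Some(300.0);
    assert_eq!(overrides.max_duration_seconds, Some(300.0));
}
```

Leaving a field as `None` keeps the server-side default (30 s silence timeout, 600 s max duration).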
background_sound: Option<CreateAssistantDtoBackgroundSound>
background_denoising_enabled: Option<bool>
This enables filtering of noise and background speech while the user is talking. Default false while in beta. @default false
model_output_in_messages_enabled: Option<bool>
This determines whether the model’s output is used in conversation history rather than the transcription of the assistant’s speech. Default false while in beta. @default false
transport_configurations: Option<Vec<CreateAssistantDtoTransportConfigurationsInner>>
These are the configurations to be passed to the transport providers of assistant’s calls, like Twilio. You can store multiple configurations for different transport providers. For a call, only the configuration matching the call transport provider is used.
observability_plan: Option<LangfuseObservabilityPlan>
This is the plan for observability of assistant’s calls. Currently, only Langfuse is supported.
credentials: Option<Vec<WorkflowUserEditableCredentialsInner>>
These are dynamic credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call, but you can supplement additional credentials using this. Dynamic credentials override existing credentials.
hooks: Option<Vec<CreateAssistantDtoHooksInner>>
This is a set of actions that will be performed on certain events.
variable_values: Option<Value>
These are values that will be used to replace the template variables in the assistant messages and other text-based fields. This uses LiquidJS syntax: https://liquidjs.com/tutorials/intro-to-liquid.html. For example, {{ name }} will be replaced with the value of name in variableValues. {{"now" | date: "%b %d, %Y, %I:%M %p", "America/New_York"}} will be replaced with the current date and time in New York. Some Vapi reserved defaults:
- customer: the customer object
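To illustrate what `variable_values` feeds into, here is a toy sketch of plain `{{ name }}` substitution. This is only an illustration of the idea: the real rendering is done by LiquidJS on Vapi's side and supports filters, dates, and more.

```rust
use std::collections::HashMap;

// Toy stand-in for Liquid's `{{ name }}` substitution, just to show what
// the `variable_values` map is used for; this is not the real renderer.
fn render(template: &str, values: &HashMap<&str, &str>) -> String {
    let mut out = template.to_string();
    for (key, value) in values {
        // Replace occurrences of `{{ key }}` with the supplied value.
        out = out.replace(&format!("{{{{ {} }}}}", key), value);
    }
    out
}

fn main() {
    let mut values = HashMap::new();
    values.insert("name", "Ada");
    let rendered = render("Hi {{ name }}, how can I help?", &values);
    assert_eq!(rendered, "Hi Ada, how can I help?");
}
```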
name: Option<String>
This is the name of the assistant. This is required when you want to transfer between assistants in a call.
voicemail_message: Option<String>
This is the message that the assistant will say if the call is forwarded to voicemail. If unspecified, it will hang up.
end_call_message: Option<String>
This is the message that the assistant will say if it ends the call. If unspecified, it will hang up without saying anything.
end_call_phrases: Option<Vec<String>>
This list contains phrases that, if spoken by the assistant, will trigger the call to be hung up. Case insensitive.
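The documented matching behavior (hang up when the assistant utters any listed phrase, case-insensitively) can be sketched as a small check. This is an illustrative reimplementation of the described rule, not the crate's code:

```rust
// Toy check mirroring the documented `end_call_phrases` behavior: if the
// assistant's speech contains any listed phrase (case-insensitively),
// the call is hung up. Illustrative only.
fn should_end_call(assistant_utterance: &str, end_call_phrases: &[String]) -> bool {
    let spoken = assistant_utterance.to_lowercase();
    end_call_phrases
        .iter()
        .any(|phrase| spoken.contains(&phrase.to_lowercase()))
}

fn main() {
    let phrases = vec!["goodbye".to_string(), "have a great day".to_string()];
    assert!(should_end_call("Alright, GOODBYE now!", &phrases));
    assert!(!should_end_call("Let me check on that.", &phrases));
}
```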
compliance_plan: Option<CompliancePlan>
metadata: Option<Value>
This is for metadata you want to store on the assistant.
background_speech_denoising_plan: Option<BackgroundSpeechDenoisingPlan>
This enables filtering of noise and background speech while the user is talking. Features:
- Smart denoising using Krisp
- Fourier denoising
Smart denoising can be combined with or used independently of Fourier denoising. Order of precedence:
- Smart denoising
- Fourier denoising
analysis_plan: Option<AnalysisPlan>
This is the plan for analysis of the assistant’s calls. Stored in call.analysis.
artifact_plan: Option<ArtifactPlan>
This is the plan for artifacts generated during the assistant’s calls. Stored in call.artifact.
message_plan: Option<MessagePlan>
This is the plan for static predefined messages that can be spoken by the assistant during the call, like idleMessages. Note: firstMessage, voicemailMessage, and endCallMessage are currently at the root level. They will be moved to messagePlan in the future, but will remain backwards compatible.
start_speaking_plan: Option<StartSpeakingPlan>
This is the plan for when the assistant should start talking. You should configure this if you’re running into these issues:
- The assistant is too slow to start talking after the customer is done speaking.
- The assistant is too fast to start talking after the customer is done speaking.
- The assistant is so fast that it’s actually interrupting the customer.
stop_speaking_plan: Option<StopSpeakingPlan>
This is the plan for when the assistant should stop talking on customer interruption. You should configure this if you’re running into these issues:
- The assistant is too slow to recognize the customer’s interruption.
- The assistant is too fast to recognize the customer’s interruption.
- The assistant is getting interrupted by phrases that are just acknowledgments.
- The assistant is getting interrupted by background noises.
- The assistant is not properly stopping: it starts talking right after getting interrupted.
monitor_plan: Option<MonitorPlan>
This is the plan for real-time monitoring of the assistant’s calls. Usage:
- To enable live listening of the assistant’s calls, set monitorPlan.listenEnabled to true.
- To enable live control of the assistant’s calls, set monitorPlan.controlEnabled to true.
credential_ids: Option<Vec<String>>
These are the credentials that will be used for the assistant calls. By default, all the credentials are available for use in the call but you can provide a subset using this.
server: Option<Server>
This is where Vapi will send webhooks. You can find all webhooks available along with their shape in ServerMessage schema. The order of precedence is:
1. assistant.server.url
2. phoneNumber.serverUrl
3. org.serverUrl
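The precedence rule above is exactly what `Option::or` chaining expresses in Rust. A sketch with hypothetical argument names (the real lookup happens server-side across assistant, phone number, and org config):

```rust
// Illustrative sketch of the documented webhook URL precedence:
// assistant.server.url, then phoneNumber.serverUrl, then org.serverUrl.
// Argument names are hypothetical; the real resolution is server-side.
fn effective_server_url(
    assistant_url: Option<&str>,
    phone_number_url: Option<&str>,
    org_url: Option<&str>,
) -> Option<String> {
    assistant_url
        .or(phone_number_url)
        .or(org_url)
        .map(str::to_string)
}

fn main() {
    // No assistant-level URL, so the phone number's URL wins over the org's.
    let url = effective_server_url(
        None,
        Some("https://phone.example/hook"),
        Some("https://org.example/hook"),
    );
    assert_eq!(url.as_deref(), Some("https://phone.example/hook"));
}
```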
keypad_input_plan: Option<KeypadInputPlan>
Implementations
impl AssistantOverrides
pub fn new() -> AssistantOverrides
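Presumably `new()` yields an overrides value with every field unset, after which you assign only the fields you want to override. A sketch of that pattern with a hypothetical two-field stand-in struct:

```rust
// Hypothetical stand-in showing the construct-then-assign pattern; the real
// `AssistantOverrides::new()` is assumed (not verified) to start all-`None`.
#[derive(Debug, Default)]
struct AssistantOverridesSketch {
    name: Option<String>,
    first_message: Option<String>,
}

impl AssistantOverridesSketch {
    fn new() -> Self {
        Self::default()
    }
}

fn main() {
    let mut overrides = AssistantOverridesSketch::new();
    overrides.name = Some("support-bot".to_string());
    overrides.first_message = Some("Hi! How can I help today?".to_string());
    assert!(overrides.first_message.is_some());
}
```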
Trait Implementations
impl Clone for AssistantOverrides
fn clone(&self) -> AssistantOverrides
1.0.0 · const fn clone_from(&mut self, source: &Self)