pub struct ModifyAssistantRequest {
pub description: Option<String>,
pub instructions: Option<String>,
pub metadata: Option<Metadata>,
pub model: Option<Value>,
pub name: Option<String>,
pub reasoning_effort: Option<ReasoningEffort>,
pub response_format: Option<AssistantsApiResponseFormatOption>,
pub temperature: Option<f32>,
pub tool_resources: Option<ModifyAssistantRequestToolResources>,
pub tools: Option<Vec<Value>>,
pub top_p: Option<f32>,
}
Fields
description: Option<String>
The description of the assistant. The maximum length is 512 characters.
instructions: Option<String>
The system instructions that the assistant uses. The maximum length is 256,000 characters.
metadata: Option<Metadata>
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
model: Option<Value>
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
name: Option<String>
The name of the assistant. The maximum length is 256 characters.
reasoning_effort: Option<ReasoningEffort>
Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
response_format: Option<AssistantsApiResponseFormatOption>
Specifies the format that the model must output.
temperature: Option<f32>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
tool_resources: Option<ModifyAssistantRequestToolResources>
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
tools: Option<Vec<Value>>
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
top_p: Option<f32>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.