pub struct ChatBody<N, M> {
    pub model: N,
    pub messages: Vec<M>,
    pub request_id: Option<String>,
    pub thinking: Option<ThinkingType>,
    pub do_sample: Option<bool>,
    pub stream: Option<bool>,
    pub tool_stream: Option<bool>,
    pub temperature: Option<f32>,
    pub top_p: Option<f32>,
    pub max_tokens: Option<u32>,
    pub tools: Option<Vec<Tools>>,
    pub user_id: Option<String>,
    pub stop: Option<Vec<String>>,
    pub response_format: Option<ResponseFormat>,
}
Main request body structure for chat API calls.
This structure represents a complete chat request with all possible configuration options. It uses generic types to support different model names and message types while maintaining type safety through trait bounds.
§Type Parameters
N - The model name type; must implement ModelName.
M - The message type; must form a Bounded pair with N.
§Examples
use crate::model::base::{ChatBody, TextMessage};
// Create a basic chat request
let chat_body = ChatBody {
model: "gpt-4".to_string(),
messages: vec![
TextMessage::user("Hello, how are you?"),
TextMessage::assistant("I'm doing well, thank you!")
],
temperature: Some(0.7),
max_tokens: Some(1000),
..Default::default()
};Fields§
model: N - The model to use for the chat completion.
messages: Vec<M> - A list of messages comprising the conversation so far.
request_id: Option<String> - A unique identifier for the request. Optional field that is omitted from serialization if not provided.
thinking: Option<ThinkingType> - Optional thinking prompt or reasoning text that can guide the model's response. Only available for models that support thinking capabilities.
do_sample: Option<bool> - Whether to use sampling during generation. When true, the model uses probabilistic sampling; when false, generation is deterministic.
stream: Option<bool> - Whether to stream back partial message deltas as they are generated. When true, responses are sent as server-sent events.
tool_stream: Option<bool> - Whether to enable streaming of tool calls (streaming function call parameters). Only supported by GLM-4.6 models. Defaults to false when omitted.
temperature: Option<f32> - Controls randomness in the output. Higher values (closer to 1.0) make the output more random; lower values (closer to 0.0) make it more deterministic. Must be between 0.0 and 1.0.
top_p: Option<f32> - Controls diversity via nucleus sampling: only tokens with cumulative probability up to top_p are considered. Must be between 0.0 and 1.0.
max_tokens: Option<u32> - The maximum number of tokens to generate in the completion. Must be between 1 and 98304.
tools: Option<Vec<Tools>> - A list of tools the model may call. Currently supports function calling, web search, and retrieval tools. Note: the server expects an array; this is modeled as a vector of tool items.
user_id: Option<String> - A unique identifier representing your end user, which can help monitor and detect abuse. Must be between 6 and 128 characters long.
stop: Option<Vec<String>> - Up to one sequence at which the API will stop generating further tokens.
response_format: Option<ResponseFormat> - An object specifying the format the model must output, either text or JSON object.
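A std-only sketch of the wire behavior described for the optional fields above: when a field is None, its key is omitted from the request body entirely rather than sent as null. The body_json helper is hypothetical and covers only two fields for brevity; the real crate presumably achieves the same effect with serde attributes such as skip_serializing_if.

```rust
// Hypothetical stand-in showing how `None` optional fields are meant to
// be omitted from the serialized request body (not sent as `null`).
fn body_json(model: &str, temperature: Option<f32>, max_tokens: Option<u32>) -> String {
    let mut parts = vec![format!("\"model\":\"{}\"", model)];
    if let Some(t) = temperature {
        parts.push(format!("\"temperature\":{}", t));
    }
    if let Some(m) = max_tokens {
        parts.push(format!("\"max_tokens\":{}", m));
    }
    format!("{{{}}}", parts.join(","))
}

fn main() {
    // `max_tokens` is None, so the key does not appear at all.
    assert_eq!(
        body_json("glm-4", Some(0.7), None),
        "{\"model\":\"glm-4\",\"temperature\":0.7}"
    );
    println!("ok");
}
```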
§Implementations

impl<N, M> ChatBody<N, M>
pub fn new(model: N, messages: M) -> Self
pub fn add_messages(self, messages: M) -> Self
pub fn with_request_id(self, request_id: impl Into<String>) -> Self
pub fn with_do_sample(self, do_sample: bool) -> Self
pub fn with_stream(self, stream: bool) -> Self
pub fn with_temperature(self, temperature: f32) -> Self
pub fn with_top_p(self, top_p: f32) -> Self
pub fn with_max_tokens(self, max_tokens: u32) -> Self
pub fn with_tools(self, tools: impl Into<Vec<Tools>>) -> Self
Deprecated: use add_tools (single) or extend_tools (Vec) on ChatBody, or prefer ChatCompletion::add_tool / add_tools at the client layer.
pub fn add_tools(self, tools: Tools) -> Self
pub fn extend_tools(self, tools: Vec<Tools>) -> Self
pub fn with_user_id(self, user_id: impl Into<String>) -> Self
pub fn with_stop(self, stop: String) -> Self
impl<N, M> ChatBody<N, M>

pub fn with_thinking(self, thinking: ThinkingType) -> Self
Adds thinking text to the chat body for models that support thinking capabilities.
This method is only available for models that implement the ThinkEnable trait,
ensuring type safety for thinking-enabled models.
§Arguments
thinking - The thinking prompt or reasoning text to add.
§Returns
Returns self with the thinking field set, allowing for method chaining.
§Examples
let chat_body = ChatBody::new(model, messages)
.with_thinking("Let me think step by step about this problem...");