pub struct CreateChatCompletionStreamResponse {
pub choices: Vec<CreateChatCompletionStreamResponseChoices>,
pub created: i32,
pub id: String,
pub model: String,
pub object: String,
pub service_tier: Option<ServiceTier>,
pub system_fingerprint: Option<String>,
pub usage: Option<CompletionUsage>,
}
Fields
choices: Vec<CreateChatCompletionStreamResponseChoices>
A list of chat completion choices. Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {"include_usage": true}.
created: i32
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
id: String
A unique identifier for the chat completion. Each chunk has the same ID.
model: String
The model used to generate the completion.
object: String
The object type, which is always chat.completion.chunk.
service_tier: Option<ServiceTier>
The service tier used for processing the request. Only present if the service_tier parameter was specified in the request.
system_fingerprint: Option<String>
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed
request parameter to understand when backend changes have been made that might impact determinism.
usage: Option<CompletionUsage>
An optional field that will only be present when you set stream_options: {"include_usage": true} in your request. When present, it contains a null value except for the last chunk, which contains the token usage statistics for the entire request. NOTE: if the stream is interrupted or cancelled, you may not receive the final usage chunk with the total token usage for the request.
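The interaction between choices and usage during streaming can be sketched as follows. This uses simplified stand-in types rather than the crate's actual structs (the real types carry more fields, such as delta roles and finish reasons), and simulates an already-received chunk sequence instead of performing a network call:

```rust
// Simplified stand-ins for the library types (illustration only; the real
// structs in the crate have additional fields).
#[derive(Debug)]
struct CompletionUsage {
    total_tokens: u32,
}

#[derive(Debug)]
struct ChoiceDelta {
    content: Option<String>,
}

#[derive(Debug)]
struct Chunk {
    choices: Vec<ChoiceDelta>,
    usage: Option<CompletionUsage>,
}

/// Accumulate streamed content deltas and capture the final usage report.
fn collect(chunks: Vec<Chunk>) -> (String, Option<u32>) {
    let mut text = String::new();
    let mut total_tokens = None;
    for chunk in chunks {
        // Intermediate chunks carry content deltas; with include_usage set,
        // the final chunk has empty `choices` and a populated `usage`.
        for choice in &chunk.choices {
            if let Some(delta) = &choice.content {
                text.push_str(delta);
            }
        }
        if let Some(usage) = chunk.usage {
            total_tokens = Some(usage.total_tokens);
        }
    }
    (text, total_tokens)
}

fn main() {
    let chunks = vec![
        Chunk { choices: vec![ChoiceDelta { content: Some("Hello".into()) }], usage: None },
        Chunk { choices: vec![ChoiceDelta { content: Some(", world".into()) }], usage: None },
        // Final chunk when stream_options: {"include_usage": true} is set.
        Chunk { choices: vec![], usage: Some(CompletionUsage { total_tokens: 12 }) },
    ];
    let (text, total) = collect(chunks);
    println!("{} ({:?} total tokens)", text, total);
}
```

Because the usage chunk may never arrive if the stream is interrupted, the accumulated total stays None in that case, which is why it is modeled here as Option<u32> rather than a plain integer.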