pub struct Chat<'c, C: Config> { /* private fields */ }
Given a list of messages comprising a conversation, the model will return a response.
Related guide: Chat completions
Implementations

impl<'c, C: Config> Chat<'c, C>
pub fn new(client: &'c Client<C>) -> Self
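A `Chat` group is normally obtained through the client rather than constructed directly. A minimal sketch (assuming the crate's `Client::chat()` accessor, which returns the same value `Chat::new(&client)` would):

```rust
use async_openai::{config::OpenAIConfig, Client};

fn main() {
    // Client::new() reads OPENAI_API_KEY from the environment by default.
    let client: Client<OpenAIConfig> = Client::new();
    // chat() borrows the client and returns a Chat<'_, OpenAIConfig>.
    let _chat = client.chat();
}
```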
pub async fn create(
    &self,
    request: CreateChatCompletionRequest,
) -> Result<CreateChatCompletionResponse, OpenAIError>
Creates a model response for the given chat conversation. Learn more in the text generation, vision, and audio guides.

Parameter support can differ depending on the model used to generate the response, particularly for newer reasoning models. Parameters that are only supported for reasoning models are noted below. For the current state of unsupported parameters in reasoning models, refer to the reasoning guide.
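A hedged sketch of a non-streaming call, using the crate's builder types (`CreateChatCompletionRequestArgs`, `ChatCompletionRequestUserMessageArgs`); the model name is an assumption and requires a valid API key at runtime:

```rust
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();

    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini") // assumed model name; substitute your own
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Say hello")
            .build()?
            .into()])
        .build()?;

    // create() sends the request and awaits the full (non-streamed) response.
    let response = client.chat().create(request).await?;
    if let Some(choice) = response.choices.first() {
        println!("{:?}", choice.message.content);
    }
    Ok(())
}
```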
pub async fn create_stream(
    &self,
    request: CreateChatCompletionRequest,
) -> Result<ChatCompletionResponseStream, OpenAIError>
Creates a streaming completion for the chat messages; partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message.

ChatCompletionResponseStream is a parsed SSE stream; it yields items until a `[DONE]` message is received from the server.
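Consuming the stream is a matter of polling it with `StreamExt::next` from the `futures` crate; each item is an already-parsed chunk rather than a raw SSE line. A sketch under the same assumptions as above (model name, API key in the environment):

```rust
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};
use futures::StreamExt;
use std::io::Write;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini") // assumed model name
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Count to five")
            .build()?
            .into()])
        .build()?;

    // Each item is a parsed stream-response chunk; the stream ends
    // once the server sends `data: [DONE]`.
    let mut stream = client.chat().create_stream(request).await?;
    while let Some(result) = stream.next().await {
        let response = result?;
        for choice in &response.choices {
            if let Some(text) = &choice.delta.content {
                print!("{text}");
                std::io::stdout().flush()?;
            }
        }
    }
    Ok(())
}
```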