pub struct ChatCompletionRequest {
    pub model: String,
    pub messages: Vec<ChatMessage>,
    pub max_tokens: u32,
    pub temperature: f64,
    pub stop: Vec<String>,
}
OpenAI-compatible chat completion request.
Fields§
model: String
Model identifier.
messages: Vec<ChatMessage>
Messages in the conversation.
max_tokens: u32
Maximum tokens to generate.
temperature: f64
Temperature (0.0-2.0).
stop: Vec<String>
Stop sequences.
Implementations§
impl ChatCompletionRequest
pub fn from_llm_request(model: impl Into<String>, request: &LlmRequest) -> Self
Creates a request from an LlmRequest.
Trait Implementations§
impl Debug for ChatCompletionRequest
Auto Trait Implementations§
impl Freeze for ChatCompletionRequest
impl RefUnwindSafe for ChatCompletionRequest
impl Send for ChatCompletionRequest
impl Sync for ChatCompletionRequest
impl Unpin for ChatCompletionRequest
impl UnsafeUnpin for ChatCompletionRequest
impl UnwindSafe for ChatCompletionRequest
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.