This module provides functionality for working with completion models: traits, structs, and enums for generating completion requests, handling completion responses, and defining completion models.
The main traits defined in this module are:
- Prompt: Defines a high-level LLM one-shot prompt interface.
- Chat: Defines a high-level LLM chat interface with chat history.
- Completion: Defines a low-level LLM completion interface for generating completion requests.
- CompletionModel: Defines a completion model that can be used to generate completion responses from requests.
The Prompt and Chat traits are high-level traits that users are expected to use to interact with LLM models. Moreover, it is good practice to implement one of these traits for composite agents that use multiple LLM models to generate responses.
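The composite-agent pattern above can be sketched as follows. Note that this is a hypothetical, simplified stand-in: the real Prompt trait in this module is async and returns a library-specific error type, and the `SimplePrompt`, `Router`, and `Echo` names are invented for illustration. Only the delegation pattern carries over.

```rust
// Hypothetical, simplified stand-in for the module's `Prompt` trait
// (the real trait is async and uses PromptError).
trait SimplePrompt {
    fn prompt(&self, prompt: &str) -> Result<String, String>;
}

// A composite agent that routes each prompt to one of two underlying
// models, both represented as `SimplePrompt` implementors.
struct Router {
    coder: Box<dyn SimplePrompt>,
    writer: Box<dyn SimplePrompt>,
}

impl SimplePrompt for Router {
    fn prompt(&self, prompt: &str) -> Result<String, String> {
        // Naive keyword routing, for illustration only.
        if prompt.contains("code") {
            self.coder.prompt(prompt)
        } else {
            self.writer.prompt(prompt)
        }
    }
}

// A stub model that tags the prompt instead of calling an LLM provider.
struct Echo(&'static str);

impl SimplePrompt for Echo {
    fn prompt(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[{}] {}", self.0, prompt))
    }
}

fn main() {
    let agent = Router {
        coder: Box::new(Echo("coder")),
        writer: Box::new(Echo("writer")),
    };
    println!("{}", agent.prompt("write code for quicksort").unwrap());
    println!("{}", agent.prompt("draft a short poem").unwrap());
}
```

Because the composite agent itself implements the trait, callers can treat it exactly like a single model.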
The Completion trait defines a lower-level interface that is useful when the user wants to further customize the request before sending it to the completion model provider.
The CompletionModel trait is meant to act as the interface between providers and the library. It defines the methods that need to be implemented by the user to define a custom base completion model (i.e.: a private or third party LLM provider).
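A custom provider implementation might take the following shape. This is a hypothetical sketch, not the module's actual trait: the real CompletionModel is async and works with the module's CompletionRequest, CompletionResponse, and ModelChoice types, whereas the simplified types and the `MyProvider` name below are invented for illustration.

```rust
// Simplified stand-ins for the module's request/response types.
struct CompletionRequest {
    prompt: String,
    temperature: f64,
}

enum ModelChoice {
    Message(String),
    ToolCall(String, String),
}

// Hypothetical, synchronous stand-in for the module's `CompletionModel`
// trait (the real trait is async).
trait CompletionModel {
    fn completion(&self, request: CompletionRequest) -> Result<ModelChoice, String>;
}

// A stub provider; a real implementation would serialize the request,
// call a private or third-party LLM API, and map the response.
struct MyProvider;

impl CompletionModel for MyProvider {
    fn completion(&self, request: CompletionRequest) -> Result<ModelChoice, String> {
        let _ = request.temperature; // a real provider would forward this
        Ok(ModelChoice::Message(format!("echo: {}", request.prompt)))
    }
}

fn main() {
    let provider = MyProvider;
    let request = CompletionRequest {
        prompt: "Who are you?".to_string(),
        temperature: 0.5,
    };
    match provider.completion(request).unwrap() {
        ModelChoice::Message(message) => println!("Received message: {}", message),
        ModelChoice::ToolCall(name, params) => {
            println!("Received tool call: {} {}", name, params)
        }
    }
}
```

Implementing the trait for a new provider is all that is needed for the rest of the library to use it interchangeably with built-in models.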
The module also provides various structs and enums for representing generic completion requests, responses, and errors.
Example Usage:
use rig::providers::openai::{self, Client};
use rig::completion::*;

// Initialize the OpenAI client and a completion model
let openai = Client::new("your-openai-api-key");
let gpt_4 = openai.completion_model(openai::GPT_4);

// Create the completion request
let request = gpt_4.completion_request("Who are you?")
    .preamble("\
        You are Marvin, an extremely smart but depressed robot who is \
        nonetheless helpful towards humanity.\
    ")
    .temperature(0.5)
    .build();

// Send the completion request and get the completion response
let response = gpt_4.completion(request)
    .await
    .expect("Failed to get completion response");

// Handle the completion response
match response.choice {
    ModelChoice::Message(message) => {
        // Handle the completion response as a message
        println!("Received message: {}", message);
    }
    ModelChoice::ToolCall(tool_name, tool_params) => {
        // Handle the completion response as a tool call
        println!("Received tool call: {} {:?}", tool_name, tool_params);
    }
}

For more information on how to use the completion functionality, refer to the documentation of the individual traits, structs, and enums defined in this module.
Structs
- CompletionRequest - Struct representing a general completion request that can be sent to a completion model provider.
- CompletionRequestBuilder - Builder struct for constructing a completion request.
- CompletionResponse - General completion response struct that contains the high-level completion choice and the raw response.
- Document
- Message
- ToolDefinition
Enums
- CompletionError
- ModelChoice - Enum representing the high-level completion choice returned by the completion model provider.
- PromptError
Traits
- Chat - Trait defining a high-level LLM chat interface (i.e.: prompt and chat history in, response out).
- Completion - Trait defining a low-level LLM completion interface.
- CompletionModel - Trait defining a completion model that can be used to generate completion responses. This trait is meant to be implemented by the user to define a custom completion model, either from a third party provider (e.g.: OpenAI) or a local model.
- Prompt - Trait defining a high-level LLM simple prompt interface (i.e.: prompt in, response out).