Completion request, response, and provider trait definitions.
Most applications use Prompt or Chat through Agent. Provider integrations implement CompletionModel and translate a CompletionRequest into their native HTTP request format.
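For comparison with the low-level example below, the high-level path goes through Agent. The following is a minimal sketch, not a definitive example: the agent builder methods (agent, preamble, build) and the Prompt trait's prompt method are assumed from rig's high-level API, and the model constant follows the example below.

```rust
// Sketch only: assumes rig's high-level Agent API (agent builder + Prompt trait).
use rig_core::{
    client::{CompletionClient, ProviderClient},
    completion::Prompt,
    providers::openai,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = openai::Client::from_env()?;

    // The builder wraps a CompletionModel together with a preamble.
    let agent = client
        .agent(openai::GPT_5_2)
        .preamble("You are a concise assistant.")
        .build();

    // Prompt in, response text out; no manual request construction.
    let answer = agent.prompt("Who are you?").await?;
    println!("{answer}");
    Ok(())
}
```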
§Low-level request example
use rig_core::{
    client::{CompletionClient, ProviderClient},
    completion::{AssistantContent, CompletionModel},
    providers::openai,
};

let client = openai::Client::from_env()?;
let model = client.completion_model(openai::GPT_5_2);

let request = model
    .completion_request("Who are you?")
    .preamble("You are a concise assistant.".to_string())
    .temperature(0.5)
    .build();

let response = model.completion(request).await?;

for item in response.choice {
    if let AssistantContent::Text(text) = item {
        println!("{}", text.text);
    }
}

Structs§
- CompletionRequest - Struct representing a general completion request that can be sent to a completion model provider.
- CompletionRequestBuilder - Builder struct for constructing a completion request.
- CompletionResponse - General completion response struct that contains the high-level completion choice and the raw response. The completion choice contains one or more assistant content items.
- Document
- ProviderToolDefinition - Provider-native tool definition.
- ToolDefinition
- Usage - Struct representing the token usage for a completion request. If tokens used are 0, the provider failed to supply token usage metrics.
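To illustrate the zero-tokens convention described for Usage, a hedged fragment follows. It assumes a CompletionResponse value named response that carries a usage field with input_tokens, output_tokens, and total_tokens counts; those field names are assumptions and may differ by version.

```rust
// Sketch: inspect token usage after a completion call.
// Assumes `response` is a completion::CompletionResponse with a
// `usage: Usage` field (an assumption; check your rig version).
let usage = response.usage;
if usage.total_tokens == 0 {
    // Per the docs above: zero tokens means the provider did not
    // report usage metrics, not that the request consumed nothing.
    eprintln!("provider did not supply token usage");
} else {
    println!("input: {}, output: {}", usage.input_tokens, usage.output_tokens);
}
```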
Enums§
- CompletionError
- PromptError - Prompt errors.
- StructuredOutputError - Errors that can occur when using typed structured output via TypedPrompt::prompt_typed.
Traits§
- Chat - Trait defining a high-level LLM chat interface (i.e., prompt and chat history in, response out).
- Completion - Trait defining a low-level LLM completion interface.
- CompletionModel - Trait defining a completion model that can be used to generate completion responses. This trait is meant to be implemented by the user to define a custom completion model, either from a third-party provider (e.g., OpenAI) or a local model.
- GetTokenUsage - A trait for grabbing the token usage of a completion response.
- Prompt - Trait defining a high-level LLM simple prompt interface (i.e., prompt in, response out).
- TypedPrompt - Trait defining a high-level typed prompt interface for structured output.
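The Chat trait above pairs a new prompt with prior history. A minimal sketch, assuming a completion::Message type with user and assistant constructors and a chat method taking the prompt plus a history vector (all assumed from rig's API; signatures may differ by version):

```rust
// Sketch: the Chat trait takes the new prompt plus prior chat history.
use rig_core::completion::{Chat, Message};

async fn follow_up(agent: &impl Chat) -> Result<String, Box<dyn std::error::Error>> {
    let history = vec![
        Message::user("Who are you?"),              // assumed constructor
        Message::assistant("A concise assistant."), // assumed constructor
    ];
    // Prompt and chat history in, response out.
    let reply = agent.chat("And what can you do?", history).await?;
    Ok(reply)
}
```

Anything implementing Chat (such as an Agent) can be passed to a helper like this, which is the main reason to code against the trait rather than a concrete model type.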