
Module request

Completion request, response, and provider trait definitions.

Most applications use Prompt or Chat through Agent. Provider integrations implement CompletionModel and translate a CompletionRequest into the provider's native HTTP request format.

§Low-level request example

use rig::{
    client::{CompletionClient, ProviderClient},
    completion::{AssistantContent, CompletionModel},
    providers::openai,
};

// Reads the provider API key from the environment; run inside an async context.
let client = openai::Client::from_env()?;
let model = client.completion_model(openai::GPT_5_2);

// Build a raw completion request instead of going through an Agent.
let request = model
    .completion_request("Who are you?")
    .preamble("You are a concise assistant.".to_string())
    .temperature(0.5)
    .build();

let response = model.completion(request).await?;
// `choice` holds one or more items of assistant content.
for item in response.choice {
    if let AssistantContent::Text(text) = item {
        println!("{}", text.text);
    }
}

Structs§

CompletionRequest
Struct representing a general completion request that can be sent to a completion model provider.
CompletionRequestBuilder
Builder struct for constructing a completion request.
CompletionResponse
General completion response struct that contains the high-level completion choice and the raw response. The completion choice contains one or more items of assistant content.
Document
ProviderToolDefinition
Provider-native tool definition.
ToolDefinition
Usage
Struct representing the token usage for a completion request. If the token counts are 0, the provider did not supply token usage metrics.
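
CompletionRequestBuilder follows the standard consuming-builder pattern seen in the example above. As a minimal, self-contained sketch with hypothetical field names (rig's real CompletionRequest carries more state, such as documents and tool definitions):

```rust
// Simplified, hypothetical stand-ins for CompletionRequest and its builder;
// field names here are illustrative, not rig's actual definitions.
#[derive(Debug, PartialEq)]
struct CompletionRequest {
    prompt: String,
    preamble: Option<String>,
    temperature: Option<f64>,
}

struct CompletionRequestBuilder {
    prompt: String,
    preamble: Option<String>,
    temperature: Option<f64>,
}

impl CompletionRequestBuilder {
    fn new(prompt: impl Into<String>) -> Self {
        Self { prompt: prompt.into(), preamble: None, temperature: None }
    }

    // Each setter consumes the builder and returns it, so calls chain.
    fn preamble(mut self, preamble: String) -> Self {
        self.preamble = Some(preamble);
        self
    }

    fn temperature(mut self, temperature: f64) -> Self {
        self.temperature = Some(temperature);
        self
    }

    fn build(self) -> CompletionRequest {
        CompletionRequest {
            prompt: self.prompt,
            preamble: self.preamble,
            temperature: self.temperature,
        }
    }
}

fn build_example() -> CompletionRequest {
    CompletionRequestBuilder::new("Who are you?")
        .preamble("You are a concise assistant.".to_string())
        .temperature(0.5)
        .build()
}

fn main() {
    println!("{:?}", build_example());
}
```

Because each setter takes `self` by value, an unfinished builder cannot be reused after `build()`, which keeps request construction a single fluent expression.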

Enums§

CompletionError
PromptError
Prompt errors.
StructuredOutputError
Errors that can occur when using typed structured output via TypedPrompt::prompt_typed.

Traits§

Chat
Trait defining a high-level LLM chat interface (i.e.: prompt and chat history in, response out).
Completion
Trait defining a low-level LLM completion interface.
CompletionModel
Trait defining a completion model that can be used to generate completion responses. This trait is meant to be implemented by the user to define a custom completion model, either from a third-party provider (e.g.: OpenAI) or a local model.
GetTokenUsage
A trait for retrieving the token usage of a completion response.
Prompt
Trait defining a high-level LLM simple prompt interface (i.e.: prompt in, response out).
TypedPrompt
Trait defining a high-level typed prompt interface for structured output.
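
To illustrate how the low-level and high-level traits fit together, here is a minimal synchronous sketch using hypothetical local stand-ins. Rig's real CompletionModel is async and generic over the provider's raw response type, so this shows only the shape of the pattern: a provider implements the low-level completion contract, and a prompt-style convenience layer is derived on top of it.

```rust
// Hypothetical, simplified stand-ins for the types and traits listed above;
// rig's real definitions are async and carry provider-specific detail.
#[derive(Debug)]
enum AssistantContent {
    Text(String),
}

struct CompletionRequest {
    prompt: String,
}

struct CompletionResponse {
    choice: Vec<AssistantContent>,
}

// The low-level contract a provider integration implements.
trait CompletionModel {
    fn completion(&self, request: CompletionRequest) -> Result<CompletionResponse, String>;
}

// The high-level "prompt in, text out" interface.
trait Prompt {
    fn prompt(&self, prompt: &str) -> Result<String, String>;
}

// A blanket impl gives every completion model the high-level interface.
impl<M: CompletionModel> Prompt for M {
    fn prompt(&self, prompt: &str) -> Result<String, String> {
        let response = self.completion(CompletionRequest { prompt: prompt.to_string() })?;
        let mut out = String::new();
        for item in response.choice {
            let AssistantContent::Text(text) = item;
            out.push_str(&text);
        }
        Ok(out)
    }
}

// A toy "provider" that just echoes the prompt back.
struct EchoModel;

impl CompletionModel for EchoModel {
    fn completion(&self, request: CompletionRequest) -> Result<CompletionResponse, String> {
        Ok(CompletionResponse {
            choice: vec![AssistantContent::Text(format!("echo: {}", request.prompt))],
        })
    }
}

fn main() {
    let model = EchoModel;
    // High-level call; the blanket impl routes through the low-level trait.
    println!("{}", model.prompt("Who are you?").unwrap());
}
```

The point of the layering is that implementors only supply `completion`; callers get the simpler prompt-style interface for free.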