pub struct GenerateRequest {
    pub model: String,
    pub prompt: String,
    pub system: Option<String>,
    pub template: Option<String>,
    pub context: Option<Vec<u32>>,
    pub options: Option<GenerateOptions>,
    pub stream: bool,
    pub format: Option<String>,
}
Request for generating text with Ollama
This struct represents a request to the Ollama API for text generation. It includes the model to use, the input prompt, and various generation options.
Examples
use projets_indexer::ollama::GenerateRequest;

let request = GenerateRequest {
    model: "gemma3:1b".to_string(),
    prompt: "Generate a tag for this project".to_string(),
    system: Some("You are a technical project tagger.".to_string()),
    template: None,
    context: None,
    options: None,
    stream: false,
    format: None,
};
Fields
model: String
Name of the model to use
The identifier of the Ollama model to use for text generation.
prompt: String
Input prompt for generation
The text prompt that will be used to generate the response.
system: Option<String>
System prompt for the model
Optional system-level instructions that guide the model’s behavior.
template: Option<String>
Template for formatting the prompt
Optional template string for formatting the prompt with variables.
context: Option<Vec<u32>>
Context from previous interactions
Optional context from previous interactions to maintain conversation history or state.
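A minimal sketch of carrying context across turns. The local struct copies and the `follow_up` helper are stand-ins for illustration (real code would import `GenerateRequest` and `GenerateOptions` from `projets_indexer::ollama`), and in practice the context tokens would come from the previous response rather than being hard-coded:

```rust
// Local stand-ins so the sketch compiles on its own; real code would
// `use projets_indexer::ollama::{GenerateRequest, GenerateOptions};`.
pub struct GenerateOptions;

pub struct GenerateRequest {
    pub model: String,
    pub prompt: String,
    pub system: Option<String>,
    pub template: Option<String>,
    pub context: Option<Vec<u32>>,
    pub options: Option<GenerateOptions>,
    pub stream: bool,
    pub format: Option<String>,
}

/// Build a follow-up request that carries forward the context tokens
/// returned by an earlier generation (hypothetical helper for this sketch).
pub fn follow_up(prev_context: Vec<u32>, prompt: &str) -> GenerateRequest {
    GenerateRequest {
        model: "gemma3:1b".to_string(),
        prompt: prompt.to_string(),
        system: None,
        template: None,
        // Passing the previous tokens back lets the model continue
        // from the earlier exchange instead of starting fresh.
        context: Some(prev_context),
        options: None,
        stream: false,
        format: None,
    }
}

fn main() {
    // In practice these tokens come from the previous response's context field.
    let prev = vec![101, 2048, 7];
    let req = follow_up(prev, "Refine the tag you suggested");
    assert_eq!(req.context.as_deref(), Some(&[101u32, 2048, 7][..]));
}
```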
options: Option<GenerateOptions>
Generation options
Optional parameters that control how the model generates text.
stream: bool
Whether to stream the response
If true, the response will be streamed token by token.
format: Option<String>
Response format
Optional format specification for the response; the Ollama API accepts "json" here to request structured JSON output.
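A sketch of requesting structured output. As above, the local struct copies and the `json_request` helper are placeholders for illustration; "json" is the format value the Ollama generate endpoint accepts for JSON-mode responses:

```rust
// Local stand-ins so the sketch compiles on its own; real code would
// `use projets_indexer::ollama::{GenerateRequest, GenerateOptions};`.
pub struct GenerateOptions;

pub struct GenerateRequest {
    pub model: String,
    pub prompt: String,
    pub system: Option<String>,
    pub template: Option<String>,
    pub context: Option<Vec<u32>>,
    pub options: Option<GenerateOptions>,
    pub stream: bool,
    pub format: Option<String>,
}

/// Build a request that asks the model for a JSON-formatted response
/// (hypothetical helper for this sketch).
pub fn json_request(model: &str, prompt: &str) -> GenerateRequest {
    GenerateRequest {
        model: model.to_string(),
        prompt: prompt.to_string(),
        // Reinforcing the format in the system prompt tends to help
        // the model stay in JSON mode.
        system: Some("Reply with a single JSON object only.".to_string()),
        template: None,
        context: None,
        options: None,
        stream: false,
        format: Some("json".to_string()),
    }
}

fn main() {
    let req = json_request("gemma3:1b", "Tag this project as JSON");
    assert_eq!(req.format.as_deref(), Some("json"));
    assert_eq!(req.model, "gemma3:1b");
}
```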