Struct GenerativeModel

pub struct GenerativeModel<'c> {
    pub system_instruction: Option<Content>,
    pub tools: Option<Vec<Tool>>,
    pub tool_config: Option<ToolConfig>,
    pub safety_settings: Option<Vec<SafetySetting>>,
    pub generation_config: Option<GenerationConfig>,
    pub cached_content: Option<Box<str>>,
    /* private fields */
}

Configured interface for a specific generative AI model

§Example

use google_ai_rs::{Client, GenerativeModel};

let client = Client::new(auth).await?;
let model = client.generative_model("gemini-pro")
    .with_system_instruction("You are a helpful assistant")
    .with_response_format("application/json");

Fields§

§system_instruction: Option<Content>

System prompt guiding model behavior

§tools: Option<Vec<Tool>>

Available functions/tools the model can use

§tool_config: Option<ToolConfig>

Configuration for tool usage

§safety_settings: Option<Vec<SafetySetting>>

Content safety filters

§generation_config: Option<GenerationConfig>

Generation parameters (temperature, top-k, etc.)

§cached_content: Option<Box<str>>

Full name of the cached content to use as context (e.g., “cachedContents/NAME”)

Implementations§

impl GenerativeModel<'_>

pub fn start_chat(&self) -> Session<'_>

Starts a new chat session with empty history
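
A minimal sketch of a chat loop. The `send_message` and `text()` calls here are assumptions about the `Session` API, not confirmed by this page; check the `Session` docs for the exact methods:

```rust
// Sketch only: `send_message` and `text()` are assumed, not documented here.
let mut chat = model.start_chat();
let first = chat.send_message("Hi, can you help me plan a trip?").await?;
println!("{}", first.text());

// The session keeps history, so follow-ups have context.
let second = chat.send_message("Make it three days long.").await?;
println!("{}", second.text());
```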

impl<'c> GenerativeModel<'c>

pub fn new(client: &'c Client, name: &str) -> Self

Creates a new model interface with default configuration

§Arguments
  • client - Authenticated API client
  • name - Model identifier (e.g., “gemini-pro”)

To access a tuned model named NAME, pass “tunedModels/NAME”.

pub fn to_typed<T: AsSchema>(self) -> TypedModel<'c, T>

Converts this GenerativeModel into a TypedModel.

This prepares the model to return responses that are automatically parsed into the specified type T.
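
For instance, a type deriving AsSchema can serve as the target. Whether additional derives are required for parsing depends on the crate's machinery; `Recipe` below is a hypothetical type for illustration:

```rust
// Hypothetical target type for illustration.
#[derive(Debug, AsSchema)]
struct Recipe {
    name: String,
    minutes: u32,
}

// The typed model now parses responses into `Recipe`.
let typed = client.generative_model("gemini-pro").to_typed::<Recipe>();
```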

pub async fn generate_content<T>(&self, contents: T) -> Result<GenerateContentResponse, Error>
where T: TryIntoContents,

Generates content from flexible input types.

This method clones the model’s configuration for the request, allowing the original GenerativeModel instance to be reused.

§Example
use google_ai_rs::Part;

// Simple text generation
let response = model.generate_content("Hello world!").await?;

// Multi-part content
model.generate_content((
    "What's in this image?",
    Part::blob("image/jpeg", image_data)
)).await?;
§Errors

Returns Error::Service for model errors or Error::Net for transport failures.

pub async fn generate_content_consuming<T>(self, contents: T) -> Result<GenerateContentResponse, Error>
where T: TryIntoContents,

Generates content by consuming the model instance.

This is an efficient alternative to generate_content if you don’t need to reuse the model instance, as it avoids cloning the model’s configuration. This is useful for one-shot requests where the model is built, used, and then discarded.
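
A one-shot request where the model is built, used, and discarded might look like:

```rust
let response = client
    .generative_model("gemini-pro")
    .with_system_instruction("Answer in one sentence")
    .generate_content_consuming("What is Rust?")
    .await?;
```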

pub async fn typed_generate_content<I, T>(&self, contents: I) -> Result<T, Error>

A convenience method to generate a structured response of type T.

This method internally converts the GenerativeModel to a TypedModel<T>, makes the request, and returns the parsed result directly. It clones the model configuration for the request.

For repeated calls with the same target type, it may be more efficient to create a TypedModel instance directly.
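
A sketch of a single structured call, using a hypothetical `Summary` type that derives AsSchema:

```rust
// Hypothetical target type for illustration.
#[derive(Debug, AsSchema)]
struct Summary {
    title: String,
    bullet_points: Vec<String>,
}

let summary: Summary = model
    .typed_generate_content("Summarize the Rust ownership model")
    .await?;
```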

pub async fn generate_typed_content<I, T>(&self, contents: I) -> Result<TypedResponse<T>, Error>

A convenience method to generate a structured response with metadata.

Similar to typed_generate_content, but returns a TypedResponse<T> which includes both the parsed data and the raw API response metadata.

pub async fn stream_generate_content<T>(&self, contents: T) -> Result<ResponseStream, Error>
where T: TryIntoContents,

Generates a streaming response from flexible input.

This method clones the model’s configuration for the request, allowing the original GenerativeModel instance to be reused.

§Example
let mut stream = model.stream_generate_content("Tell me a story.").await?;
while let Some(chunk) = stream.next().await? {
    // Process streaming response
}
§Errors

Returns Error::Service for model errors or Error::Net for transport failures.

pub async fn stream_generate_content_consuming<T>(self, contents: T) -> Result<ResponseStream, Error>
where T: TryIntoContents,

Generates a streaming response by consuming the model instance.

This is an efficient alternative to stream_generate_content if you don’t need to reuse the model instance, as it avoids cloning the model’s configuration.

pub async fn count_tokens<T>(&self, contents: T) -> Result<CountTokensResponse, Error>
where T: TryIntoContents,

Estimates token usage for given content

Useful for cost estimation and validation before full generation

§Arguments
  • contents - Content input that can be converted to parts
§Example
let token_count = model.count_tokens(content).await?;
println!("Estimated cost: ${}", token_count.total() * COST_PER_TOKEN);

pub async fn info(&self) -> Result<Info, Error>

Returns information about the model: Info::Tuned if the current model is a fine-tuned one, otherwise Info::Model.
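
A sketch of branching on the result. The exact payload of each Info variant is not shown on this page, so the bindings below are assumptions:

```rust
match model.info().await? {
    // Variant payloads are assumed; check the `Info` docs.
    Info::Model(m) => println!("base model: {m:?}"),
    Info::Tuned(t) => println!("tuned model: {t:?}"),
}
```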

pub fn change_model(&mut self, to: &str)

Changes the model identifier for this instance in place.

pub fn full_name(&self) -> &str

Returns the full identifier of the model, including any models/ prefix.

pub fn with_system_instruction<I: IntoContent>(self, instruction: I) -> Self

Sets system-level behavior instructions

pub fn to_model(self, to: &str) -> Self

Changes the model identifier, returning the modified instance.

pub fn with_cached_content(self, c: &CachedContent) -> Result<Self, Error>

Sets cached content for persisted context

§Example
use google_ai_rs::content::IntoContents as _;

let content = "You are a helpful assistant".into_cached_content_for("gemini-1.0-pro");

let cached_content = client.create_cached_content(content).await?;
let model = client.generative_model("gemini-pro")
    .with_cached_content(&cached_content)?;

pub fn with_response_format(self, mime_type: &str) -> Self

Specifies expected response format (e.g., “application/json”)

pub fn as_response_schema<T: AsSchema>(self) -> Self

Configures the model to respond with a schema matching the type T.

This is a convenient way to get structured JSON output.

§Example
use google_ai_rs::AsSchema;

#[derive(Debug, AsSchema)]
#[schema(description = "A primary colour")]
struct PrimaryColor {
    #[schema(description = "The name of the colour")]
    name: String,

    #[schema(description = "The RGB value of the color, in hex")]
    #[schema(rename = "RGB")]
    rgb: String
}

let model = client.generative_model("gemini-pro")
    .as_response_schema::<Vec<PrimaryColor>>();

pub fn with_response_schema(self, schema: Schema) -> Self

Sets the response schema from an explicit Schema object

Use when you need full control over schema details. Automatically sets response format to JSON if not specified.

§Example
use google_ai_rs::Schema;
use google_ai_rs::SchemaType;

let model = client.generative_model("gemini-pro")
    .with_response_schema(Schema {
        r#type: SchemaType::String as i32,
        format: "enum".into(),
        ..Default::default()
    });

pub fn tools<I>(self, tools: I) -> Self
where I: IntoIterator<Item = Tool>,

Adds a collection of tools to the model.

Tools define external functions that the model can call.

§Arguments
  • tools - An iterator of Tool instances.

pub fn tool_config(self, tool_config: impl Into<ToolConfig>) -> Self

Configures how the model uses tools.

§Arguments
  • tool_config - The configuration for tool usage.

pub fn safety_settings<I>(self, safety_settings: I) -> Self
where I: IntoIterator<Item = SafetySetting>,

Applies content safety filters to the model.

Safety settings control the probability thresholds for filtering potentially harmful content.

§Arguments
  • safety_settings - An iterator of SafetySetting instances.

pub fn generation_config(self, generation_config: impl Into<GenerationConfig>) -> Self

Sets the generation parameters for the model.

This includes settings like temperature, top_k, and top_p to control the creativity and randomness of the model’s output.

§Arguments
  • generation_config - The configuration for generation.
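
For common fields, the convenience setters on the model (temperature, top_p, max_output_tokens, and so on) cover the same ground without constructing a GenerationConfig by hand:

```rust
let model = client.generative_model("gemini-pro")
    .temperature(0.2)        // mostly deterministic output
    .top_p(0.9)
    .max_output_tokens(256); // cap response length
```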

pub fn with_cloned_instruction<I: IntoContent>(&self, instruction: I) -> Self

Creates a copy with new system instructions
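
Because it borrows rather than consumes, one base model can seed several variants:

```rust
let base = client.generative_model("gemini-pro");

// `base` is untouched; each call returns an independent copy.
let pirate = base.with_cloned_instruction("Answer like a pirate");
let formal = base.with_cloned_instruction("Answer in formal prose");
```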

pub fn candidate_count(self, x: i32) -> Self

Sets the number of candidates to generate.

This parameter specifies how many different response candidates the model should generate for a given prompt. All candidates are returned in the response for you to choose among.

pub fn max_output_tokens(self, x: i32) -> Self

Sets the maximum number of output tokens.

This parameter caps the length of the generated response, measured in tokens. It’s useful for controlling response size and preventing excessively long outputs.

pub fn temperature(self, x: f32) -> Self

Sets the temperature for generation.

Temperature controls the randomness of the output. Higher values, like 1.0, make the output more creative and unpredictable, while lower values, like 0.1, make it more deterministic and focused.

pub fn top_p(self, x: f32) -> Self

Sets the top-p sampling parameter.

Top-p (also known as nucleus sampling) chooses the smallest set of most likely tokens whose cumulative probability exceeds the value of x. This technique helps to prevent low-probability, nonsensical tokens from being chosen.

pub fn top_k(self, x: i32) -> Self

Sets the top-k sampling parameter.

Top-k restricts the model’s token selection to the k most likely tokens at each step. It’s a method for controlling the model’s creativity and focus.

pub fn set_candidate_count(&mut self, x: i32)

Sets the number of candidates to generate.

This parameter specifies how many different response candidates the model should generate for a given prompt. All candidates are returned in the response for you to choose among.

pub fn set_max_output_tokens(&mut self, x: i32)

Sets the maximum number of output tokens.

This parameter caps the length of the generated response, measured in tokens. It’s useful for controlling response size and preventing excessively long outputs.

pub fn set_temperature(&mut self, x: f32)

Sets the temperature for generation.

Temperature controls the randomness of the output. Higher values, like 1.0, make the output more creative and unpredictable, while lower values, like 0.1, make it more deterministic and focused.

pub fn set_top_p(&mut self, x: f32)

Sets the top-p sampling parameter.

Top-p (also known as nucleus sampling) chooses the smallest set of most likely tokens whose cumulative probability exceeds the value of x. This technique helps to prevent low-probability, nonsensical tokens from being chosen.

pub fn set_top_k(&mut self, x: i32)

Sets the top-k sampling parameter.

Top-k restricts the model’s token selection to the k most likely tokens at each step. It’s a method for controlling the model’s creativity and focus.

Trait Implementations§

impl<'c> Clone for GenerativeModel<'c>

fn clone(&self) -> GenerativeModel<'c>

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl<'c> Debug for GenerativeModel<'c>

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<'c, T> From<GenerativeModel<'c>> for TypedModel<'c, T>
where T: AsSchema,

fn from(value: GenerativeModel<'c>) -> Self

Converts to this type from the input type.

Auto Trait Implementations§

impl<'c> Freeze for GenerativeModel<'c>

impl<'c> !RefUnwindSafe for GenerativeModel<'c>

impl<'c> Send for GenerativeModel<'c>

impl<'c> Sync for GenerativeModel<'c>

impl<'c> Unpin for GenerativeModel<'c>

impl<'c> !UnwindSafe for GenerativeModel<'c>

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬 This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper.

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self). That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>

Wraps the input message T in a tonic_veecore::Request.

impl<L> LayerExt<L> for L

fn named_layer<S>(&self, service: S) -> Layered<<L as Layer<S>>::Service, S>
where L: Layer<S>,

Applies the layer to a service and wraps it in Layered.

impl<T> Same for T

type Output = T

Should always be Self.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V

impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper.

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper.