pub struct TypedModel<'c, T> { /* private fields */ }
Type-safe wrapper for GenerativeModel guaranteeing response type T.
This type enforces schema contracts through Rust’s type system while maintaining compatibility with Google’s Generative AI API. Use when:
- You need structured output from the model
- Response schema stability is critical
- You want compile-time validation of response handling
§Example
use google_ai_rs::{Client, GenerativeModel, AsSchema};

#[derive(AsSchema)]
struct Recipe {
    name: String,
    ingredients: Vec<String>,
}

// `auth` is an authentication credential constructed elsewhere.
let client = Client::new(auth).await?;
let model = client.typed_model::<Recipe>("gemini-pro");
Implementations§
impl<'c, T> TypedModel<'c, T>
where
    T: AsSchema,
pub fn new(client: &'c Client, name: &str) -> Self
Creates a new typed model configured to return responses of type T.
§Arguments
- client: Authenticated API client.
- name: Model name (e.g., “gemini-pro”).
pub async fn generate_typed_content<I>(&self, contents: I) -> Result<TypedResponse<T>, Error>
Generates content with full response metadata.
This method clones the model configuration and returns a TypedResponse containing both the parsed T and the raw API response.
§Example
// StockAnalysis is a caller-defined type deriving AsSchema (and Deserialize
// when the serde feature is enabled).
let model = TypedModel::<StockAnalysis>::new(&client, "gemini-pro");
let analysis: TypedResponse<StockAnalysis> = model.generate_typed_content((
    "Analyze NVDA stock performance",
    "Consider PE ratio and recent earnings",
)).await?;

println!("Analysis: {:?}", analysis.t);
println!("Token Usage: {:?}", analysis.raw.usage_metadata);
pub async fn generate_typed_content_consuming<I>(self, contents: I) -> Result<TypedResponse<T>, Error>
Generates content with metadata, consuming the model instance.
An efficient alternative to generate_typed_content that avoids cloning the model configuration; useful for one-shot requests.
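§Example
A minimal sketch of a one-shot request. Report is a hypothetical type used only for illustration; it is assumed to derive AsSchema (and Deserialize when the serde feature is enabled).
#[derive(AsSchema, Deserialize)]
struct Report {
    summary: String,
}

let model = TypedModel::<Report>::new(&client, "gemini-pro");
// The model is moved into the call, so its configuration is not cloned.
let response = model.generate_typed_content_consuming("Summarize today's AI news").await?;
println!("{}", response.t.summary);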
pub async fn generate_content<I>(&self, contents: I) -> Result<T, Error>
Generates content and parses it directly into type T.
This is the primary method for most users wanting type-safe responses. It handles all the details of requesting structured JSON and deserializing it into your specified Rust type. It clones the model configuration to allow reuse.
§Serde Integration
When the serde feature is enabled, any type implementing serde::Deserialize automatically works with this method. Just define your response structure and let the library handle parsing.
§Example: Simple JSON Response
#[derive(AsSchema, Deserialize)]
struct StoryResponse {
    title: String,
    length: usize,
    tags: Vec<String>,
}

let model = TypedModel::<StoryResponse>::new(&client, "gemini-pro");
let story = model.generate_content("Write a short story about a robot astronaut").await?;
println!("{} ({} words)", story.title, story.length);
§Example: Multi-part Input
#[derive(AsSchema, Deserialize)]
struct Analysis { safety_rating: u8 }

// `image_data` is raw JPEG bytes loaded elsewhere.
let model = TypedModel::<Analysis>::new(&client, "gemini-pro-vision");
let analysis = model.generate_content((
    "Analyze this scene for safety:",
    Part::blob("image/jpeg", image_data),
    "Consider vehicles, pedestrians, and weather",
)).await?;
§Errors
Returns an error if API communication fails or if the response cannot be parsed into type T.
pub async fn generate_content_consuming<I>(self, contents: I) -> Result<T, Error>
Generates content and parses it into type T, consuming the model instance.
Like generate_typed_content_consuming, it avoids cloning the model configuration, which is useful for one-shot requests.
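§Example
A minimal sketch reusing the Recipe type from the first example on this page (assumed to also derive Deserialize when the serde feature is enabled); the model is consumed by the call.
let model = TypedModel::<Recipe>::new(&client, "gemini-pro");
// One-shot request: no configuration clone, and `model` cannot be reused afterwards.
let recipe: Recipe = model.generate_content_consuming("Suggest a simple pasta recipe").await?;
println!("{}", recipe.name);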
pub fn into_inner(self) -> GenerativeModel<'c>
Consumes the TypedModel, returning the underlying GenerativeModel.
The returned GenerativeModel will retain the response schema configuration that was set for type T.
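§Example
A minimal sketch: unwrap the typed wrapper when only the untyped GenerativeModel API is needed, keeping the schema configuration set for the Recipe type from the first example.
let typed = TypedModel::<Recipe>::new(&client, "gemini-pro");
let model = typed.into_inner();
// The response schema for Recipe is still configured on `model`.
let raw = model.generate_content("Suggest a pasta recipe").await?;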
pub unsafe fn from_inner_unchecked(inner: GenerativeModel<'c>) -> Self
Creates a TypedModel from a GenerativeModel without validation.
This is an advanced-use method that assumes the provided GenerativeModel has already been configured with a response schema that is compatible with T.
§Safety
The caller must ensure that inner.generation_config.response_schema is Some and that its schema corresponds exactly to the schema of type T. Failure to uphold this invariant will likely result in API errors or deserialization failures.
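§Example
A hedged sketch of the intended round trip: a GenerativeModel obtained from into_inner retains the schema for T, which satisfies the invariant required here.
let typed = TypedModel::<Recipe>::new(&client, "gemini-pro");
let inner = typed.into_inner();
// SAFETY: `inner` came from a TypedModel::<Recipe>, so its response schema
// matches Recipe exactly.
let typed_again = unsafe { TypedModel::<Recipe>::from_inner_unchecked(inner) };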
Methods from Deref<Target = GenerativeModel<'c>>§
pub fn start_chat(&self) -> Session<'_>
Starts a new chat session with empty history.
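§Example
A hedged sketch; the Session API is not documented on this page, so send_message is an assumed method name shown only for illustration.
let mut chat = model.start_chat();
// Assumption: Session exposes a send_message-style async method returning a response.
let reply = chat.send_message("Hello!").await?;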
pub async fn generate_content<T>(&self, contents: T) -> Result<GenerateContentResponse, Error>
where
    T: TryIntoContents,
Generates content from flexible input types.
This method clones the model’s configuration for the request, allowing the original GenerativeModel instance to be reused.
§Example
use google_ai_rs::Part;

// Simple text generation
let response = model.generate_content("Hello world!").await?;

// Multi-part content
model.generate_content((
    "What's in this image?",
    Part::blob("image/jpeg", image_data),
)).await?;
§Errors
Returns Error::Service for model errors or Error::Net for transport failures.
pub async fn typed_generate_content<I, T>(&self, contents: I) -> Result<T, Error>
A convenience method to generate a structured response of type T.
This method internally converts the GenerativeModel to a TypedModel<T>, makes the request, and returns the parsed result directly. It clones the model configuration for the request.
For repeated calls with the same target type, it may be more efficient to create a TypedModel instance directly.
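§Example
A minimal sketch; DinnerIdea is a hypothetical type, assumed to derive AsSchema (and Deserialize when the serde feature is enabled). The target type T is selected by the binding’s annotation.
#[derive(AsSchema, Deserialize)]
struct DinnerIdea {
    name: String,
}

let idea: DinnerIdea = model.typed_generate_content("Suggest a quick dinner").await?;
println!("{}", idea.name);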
pub async fn generate_typed_content<I, T>(&self, contents: I) -> Result<TypedResponse<T>, Error>
A convenience method to generate a structured response with metadata.
Similar to typed_generate_content, but returns a TypedResponse<T>, which includes both the parsed data and the raw API response metadata.
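§Example
A short sketch reusing the hypothetical DinnerIdea type from the previous example; both the parsed value and the raw response metadata are available.
let response: TypedResponse<DinnerIdea> = model.generate_typed_content("Suggest a quick dinner").await?;
println!("Parsed: {}", response.t.name);
println!("Usage: {:?}", response.raw.usage_metadata);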
pub async fn stream_generate_content<T>(&self, contents: T) -> Result<ResponseStream, Error>
where
    T: TryIntoContents,
Generates a streaming response from flexible input.
This method clones the model’s configuration for the request, allowing the original GenerativeModel instance to be reused.
§Example
let mut stream = model.stream_generate_content("Tell me a story.").await?;
while let Some(chunk) = stream.next().await? {
    // Process streaming response
}
§Errors
Returns Error::Service for model errors or Error::Net for transport failures.
pub async fn count_tokens<T>(&self, contents: T) -> Result<CountTokensResponse, Error>
where
    T: TryIntoContents,
Estimates token usage for the given content.
Useful for cost estimation and validation before full generation.
§Arguments
- contents: Content input that can be converted to parts.
§Example
let token_count = model.count_tokens(content).await?;
println!("Estimated cost: ${}", token_count.total() * COST_PER_TOKEN);
pub async fn info(&self) -> Result<Info, Error>
Returns information about the model: Info::Tuned if the current model is a fine-tuned one, otherwise Info::Model.
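§Example
A minimal sketch, assuming Info implements Debug.
let info = model.info().await?;
println!("{info:?}");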
pub fn full_name(&self) -> &str
Returns the full identifier of the model, including any models/ prefix.
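§Example
A one-line usage sketch.
println!("Using model {}", model.full_name());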
pub fn with_cloned_instruction<I: IntoContent>(&self, instruction: I) -> Self
Creates a copy of the model with new system instructions.
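§Example
A minimal sketch; the string instruction is converted through IntoContent.
let tutor = model.with_cloned_instruction("You are a patient math tutor.");
let answer = tutor.generate_content("Explain the chain rule").await?;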
Trait Implementations§
impl<T> Clone for TypedModel<'_, T>
impl<T> Debug for TypedModel<'_, T>
impl<'c, T> Deref for TypedModel<'c, T>
impl<'c, T> From<GenerativeModel<'c>> for TypedModel<'c, T>
where
    T: AsSchema,
fn from(value: GenerativeModel<'c>) -> Self
Auto Trait Implementations§
impl<'c, T> Freeze for TypedModel<'c, T>
impl<'c, T> !RefUnwindSafe for TypedModel<'c, T>
impl<'c, T> Send for TypedModel<'c, T>
impl<'c, T> Sync for TypedModel<'c, T>
impl<'c, T> Unpin for TypedModel<'c, T>
impl<'c, T> !UnwindSafe for TypedModel<'c, T>
Blanket Implementations§
impl<T> BorrowMut<T> for T
where
    T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where
    T: Clone,
impl<T> Instrument for T
fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoRequest<T> for T
fn into_request(self) -> Request<T>
Wraps T in a tonic_veecore::Request.