pub struct LanguageModel {
    pub name: String,
    pub aliases: Vec<String>,
    pub version: String,
    pub input_modalities: Vec<Modality>,
    pub output_modalities: Vec<Modality>,
    pub prompt_text_token_price: i64,
    pub prompt_image_token_price: i64,
    pub cached_prompt_token_price: i64,
    pub completion_text_token_price: i64,
    pub search_price: i64,
    pub max_prompt_length: i32,
    pub system_fingerprint: String,
}
Information about a language model.
This struct contains comprehensive metadata about an xAI language model, including its capabilities, pricing, and technical specifications.
Pricing Units
The pricing fields use specific units to represent fractional cents:
- prompt_text_token_price: 1/100 USD cents per 1M tokens (e.g., 500 = $0.05 per 1M tokens)
- prompt_image_token_price: 1/100 USD cents per 1M tokens
- completion_text_token_price: 1/100 USD cents per 1M tokens
- cached_prompt_token_price: USD cents per 100M tokens (e.g., 50 = $0.50 per 100M tokens)
- search_price: 1/100 USD cents per 1M searches
Use calculate_cost to convert these to USD amounts.
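As a sanity check on the units, a small helper (illustrative only, not part of this crate's API) can convert a raw price field into USD per million tokens:

```rust
// Convert a price expressed in 1/100 USD cents per 1M tokens into
// USD per 1M tokens. 1/100 of a cent is $0.0001.
fn price_to_usd_per_million(price_in_hundredth_cents: i64) -> f64 {
    price_in_hundredth_cents as f64 * 0.0001
}
```

With this, a field value of 500 maps to $0.05 per 1M tokens, matching the example above.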
Examples

let model = client.get_model("grok-2-1212").await?;

// Check capabilities
if model.supports_multimodal() {
    println!("{} supports images!", model.name);
}

// Calculate costs
let cost = model.calculate_cost(50_000, 5_000, 0);
println!("50K prompt + 5K completion costs: ${:.4}", cost);

Fields
name: String
The model name used in API requests (e.g., “grok-2-1212”).
aliases: Vec<String>
Alternative names that can be used for this model (e.g., [“grok-2-latest”]).
Aliases provide convenient shortcuts for referring to models without needing to know the specific version number.
version: String
Version number of the model (e.g., “2.0”).
input_modalities: Vec<Modality>
Supported input modalities.
Common combinations:
- [Text] - Text-only model
- [Text, Image] - Multimodal model supporting vision
output_modalities: Vec<Modality>
Supported output modalities.
Most models output [Text], but some specialized models may
support image generation or embeddings.
prompt_text_token_price: i64
Price per million prompt text tokens in 1/100 USD cents.
Example: 500 = $0.05 per 1M tokens = $0.00005 per 1K tokens
prompt_image_token_price: i64
Price per million prompt image tokens in 1/100 USD cents.
Only applicable for multimodal models that accept images.
cached_prompt_token_price: i64
Price per 100 million cached prompt tokens in USD cents.
Example: 50 = $0.50 per 100M tokens
Cached tokens are significantly cheaper as they’re reused from previous requests with the same prefix.
completion_text_token_price: i64
Price per million completion text tokens in 1/100 USD cents.
Example: 1500 = $0.15 per 1M tokens = $0.00015 per 1K tokens
search_price: i64
Price per million searches in 1/100 USD cents.
Only applicable when using web search or X search tools.
max_prompt_length: i32
Maximum context length in tokens (prompt + completion).
This represents the total number of tokens the model can process in a single request, including both input and output.
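For illustration, a caller could verify that a planned request fits within this limit; `fits_context` below is a hypothetical helper, not a method of this struct:

```rust
// Sketch: does a planned request fit the model's context window?
// Assumes max_prompt_length covers prompt + completion, as documented above.
fn fits_context(max_prompt_length: i32, prompt_tokens: u32, max_completion_tokens: u32) -> bool {
    prompt_tokens as i64 + max_completion_tokens as i64 <= max_prompt_length as i64
}
```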
system_fingerprint: String
Backend configuration fingerprint.
This identifier tracks the specific backend configuration used by the model, useful for debugging and reproducibility.
Implementations

impl LanguageModel

pub fn calculate_cost(
    &self,
    prompt_tokens: u32,
    completion_tokens: u32,
    cached_tokens: u32,
) -> f64
Calculate the cost (in USD) for a given number of prompt and completion tokens.
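The computation presumably combines the pricing fields documented above. A standalone sketch under that assumption follows; the actual method may round differently, and whether cached tokens are billed as a separate bucket or subtracted from prompt tokens is an assumption here:

```rust
// Illustrative mirror of the pricing fields (not the crate's struct).
struct Prices {
    prompt_text_token_price: i64,     // 1/100 USD cents per 1M tokens
    completion_text_token_price: i64, // 1/100 USD cents per 1M tokens
    cached_prompt_token_price: i64,   // USD cents per 100M tokens
}

fn cost_usd(p: &Prices, prompt_tokens: u32, completion_tokens: u32, cached_tokens: u32) -> f64 {
    // 1/100 cent = 1e-4 USD; token prices are per 1M tokens.
    let prompt = prompt_tokens as f64 * p.prompt_text_token_price as f64 * 1e-4 / 1e6;
    let completion = completion_tokens as f64 * p.completion_text_token_price as f64 * 1e-4 / 1e6;
    // Cached price: cents per 100M tokens -> USD per token = cents / 100 / 1e8.
    let cached = cached_tokens as f64 * p.cached_prompt_token_price as f64 / 100.0 / 1e8;
    prompt + completion + cached
}
```

With the example field values from the pricing section, 1M prompt tokens cost $0.05 and 100M cached tokens cost $0.50.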
Examples

let cost = model.calculate_cost(1000, 500, 0);
println!("Cost: ${:.4}", cost);

pub fn supports_multimodal(&self) -> bool
Check if the model supports multimodal input (text + images).
Returns true if the model accepts both text and image inputs,
allowing you to send image URLs alongside text prompts.
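A plausible equivalent check in terms of input_modalities (a sketch only; the simplified Modality enum here is an assumption, and the crate's actual definition may carry more variants):

```rust
#[derive(PartialEq)]
enum Modality {
    Text,
    Image,
}

// Multimodal means both text and image appear among the input modalities.
fn supports_multimodal(input_modalities: &[Modality]) -> bool {
    input_modalities.contains(&Modality::Text)
        && input_modalities.contains(&Modality::Image)
}
```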
Examples

let model = client.get_model("grok-2-vision-1212").await?;
if model.supports_multimodal() {
    println!("{} can process images!", model.name);
} else {
    println!("{} is text-only", model.name);
}

Trait Implementations
impl Clone for LanguageModel

fn clone(&self) -> LanguageModel
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

Auto Trait Implementations
impl Freeze for LanguageModel
impl RefUnwindSafe for LanguageModel
impl Send for LanguageModel
impl Sync for LanguageModel
impl Unpin for LanguageModel
impl UnwindSafe for LanguageModel
Blanket Implementations

impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T
where
    T: Clone,

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
Wraps the input message T in a tonic::Request.