pub const CACHING_MODELS: &[&str];
Models that support context caching (a minimum of 2048 tokens is required). Context caching reduces costs for repeated API calls that share a common context. Reference: https://ai.google.dev/gemini-api/docs/caching
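A slice constant like this is typically used as an allowlist to check whether a given model supports caching before attempting to create a cached context. The sketch below is a minimal illustration of that pattern; the model names and the `supports_caching` helper are assumptions for demonstration, not the crate's actual contents.

```rust
// Hypothetical stand-in for the crate's constant; the real list lives in
// the library and may contain different model names.
const CACHING_MODELS: &[&str] = &["gemini-1.5-flash-001", "gemini-1.5-pro-001"];

/// Returns true if the named model appears in the caching allowlist.
/// (Illustrative helper, not part of the crate's public API.)
fn supports_caching(model: &str) -> bool {
    CACHING_MODELS.contains(&model)
}

fn main() {
    // A listed model is eligible for context caching.
    assert!(supports_caching("gemini-1.5-pro-001"));
    // An unlisted model is not.
    assert!(!supports_caching("some-other-model"));
    println!("ok");
}
```

Because `&[&str]` is an ordinary slice, `contains`, `iter().any(...)`, or a `match` all work for membership checks; `contains` is the most direct for exact string matches.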