pub struct TokenUsage {
pub input_tokens: u64,
pub cached_input_tokens: Option<u64>,
pub output_tokens: u64,
pub reasoning_output_tokens: Option<u64>,
pub total_tokens: u64,
}
Fields

input_tokens: u64
cached_input_tokens: Option<u64>
output_tokens: u64
reasoning_output_tokens: Option<u64>
total_tokens: u64
Implementations

impl TokenUsage

pub const fn is_zero(&self) -> bool
pub fn cached_input(&self) -> u64
pub fn non_cached_input(&self) -> u64
pub fn blended_total(&self) -> u64
Primary count for display as a single absolute value: non-cached input + output.
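A minimal sketch of how this count could be derived from the fields above. The struct is mirrored locally for illustration, and the method bodies are assumptions based on the doc text, not the crate's actual implementation:

```rust
// Local mirror of TokenUsage for illustration only; field meanings per the docs above.
#[allow(dead_code)]
struct TokenUsage {
    input_tokens: u64,
    cached_input_tokens: Option<u64>,
    output_tokens: u64,
    reasoning_output_tokens: Option<u64>,
    total_tokens: u64,
}

impl TokenUsage {
    // Cached input defaults to 0 when the provider did not report it.
    fn cached_input(&self) -> u64 {
        self.cached_input_tokens.unwrap_or(0)
    }

    // Input tokens that were actually processed fresh (not served from cache).
    fn non_cached_input(&self) -> u64 {
        self.input_tokens.saturating_sub(self.cached_input())
    }

    // Primary display count: non-cached input + output.
    fn blended_total(&self) -> u64 {
        self.non_cached_input() + self.output_tokens
    }
}

fn main() {
    let usage = TokenUsage {
        input_tokens: 1_000,
        cached_input_tokens: Some(600),
        output_tokens: 200,
        reasoning_output_tokens: None,
        total_tokens: 1_200,
    };
    // (1_000 - 600) non-cached input + 200 output = 600
    assert_eq!(usage.blended_total(), 600);
}
```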
pub fn tokens_in_context_window(&self) -> u64
For estimating what % of the model’s context window is used, we need to account for reasoning output tokens from prior turns being dropped from the context window. We approximate this here by subtracting reasoning output tokens from the total. This estimate will be off for the current turn and any pending function calls.
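The approximation above can be sketched as a free function over the two relevant fields (an assumed body, written from the doc text rather than the crate's source):

```rust
// Approximation: reasoning output from prior turns no longer occupies the
// context window, so subtract it from the running total. Off by the current
// turn's reasoning tokens and any pending function calls.
fn tokens_in_context_window(total_tokens: u64, reasoning_output_tokens: Option<u64>) -> u64 {
    total_tokens.saturating_sub(reasoning_output_tokens.unwrap_or(0))
}

fn main() {
    // 5_000 total tokens, of which 1_200 were reasoning output now dropped.
    assert_eq!(tokens_in_context_window(5_000, Some(1_200)), 3_800);
    // No reasoning tokens reported: the total stands as-is.
    assert_eq!(tokens_in_context_window(5_000, None), 5_000);
}
```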
pub fn percent_of_context_window_remaining(
    &self,
    context_window: u64,
    baseline_used_tokens: u64,
) -> u8
Estimate the remaining user-controllable percentage of the model’s context window.
context_window is the total size of the model’s context window. baseline_used_tokens should capture tokens that are always present in the context (e.g., system prompt and fixed tool instructions) so that the percentage reflects the portion the user can influence.
This normalizes both the numerator and denominator by subtracting the baseline, so immediately after the first prompt the UI shows 100% left and trends toward 0% as the user fills the effective window.
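The normalization described above can be sketched as follows. This is an assumed implementation of the documented behavior (the real method may clamp or round differently), with the token count passed in rather than read from `&self` so the example stays self-contained:

```rust
// Baseline-normalized "percent of effective window remaining":
// both the used-token count and the window size have the always-present
// baseline subtracted, so the percentage tracks only what the user controls.
fn percent_of_context_window_remaining(
    tokens_in_context_window: u64,
    context_window: u64,
    baseline_used_tokens: u64,
) -> u8 {
    let effective_window = context_window.saturating_sub(baseline_used_tokens);
    if effective_window == 0 {
        return 0; // Baseline alone fills (or exceeds) the window.
    }
    let used = tokens_in_context_window.saturating_sub(baseline_used_tokens);
    let remaining = effective_window.saturating_sub(used);
    ((remaining * 100) / effective_window) as u8
}

fn main() {
    // 128k window with an 8k fixed baseline: right after the first prompt,
    // only the baseline is used, so 100% of the effective window is left.
    assert_eq!(percent_of_context_window_remaining(8_000, 128_000, 8_000), 100);
    // 68k used total = 60k user tokens out of a 120k effective window: 50% left.
    assert_eq!(percent_of_context_window_remaining(68_000, 128_000, 8_000), 50);
}
```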
Trait Implementations

impl Clone for TokenUsage

fn clone(&self) -> TokenUsage

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.