# ccmetrics
Honest token usage metrics for Claude Code.
## What it does
Parses Claude Code JSONL session files, correctly deduplicates streaming chunks, disaggregates 5 token types with per-tier pricing, and calculates accurate API-equivalent costs.
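The per-type cost calculation described above can be sketched as follows. The five token types come straight from this README; the struct and function names are invented here, and the rates are placeholders, not ccmetrics' embedded pricing table:

```rust
/// Deduplicated token counts for one model, split into the five types
/// this README describes.
pub struct Usage {
    pub input: u64,
    pub output: u64,
    pub cache_read: u64,
    pub cache_write_5m: u64,
    pub cache_write_1h: u64,
}

/// USD per million tokens for each type (placeholder values; the real
/// table is embedded in the binary).
pub struct Pricing {
    pub input: f64,
    pub output: f64,
    pub cache_read: f64,
    pub cache_write_5m: f64,
    pub cache_write_1h: f64,
}

/// API-equivalent cost: each token type billed at its own rate.
pub fn cost_usd(u: &Usage, p: &Pricing) -> f64 {
    let per = |tokens: u64, rate: f64| tokens as f64 / 1_000_000.0 * rate;
    per(u.input, p.input)
        + per(u.output, p.output)
        + per(u.cache_read, p.cache_read)
        + per(u.cache_write_5m, p.cache_write_5m)
        + per(u.cache_write_1h, p.cache_write_1h)
}
```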
## Why
Every Claude Code usage tool gets the math wrong. We researched why and built the correct implementation:
| Tool | Output Tokens | Total Cost | Problem |
|---|---|---|---|
| ccmetrics | 8,625,351 | $2,376 | Correct (final chunk, 5-type split) |
| ccusage | 2,975,552 | $2,032 | First-seen-wins keeps placeholder tokens |
| claudelytics | 12,750,257 | $17,703 | No dedup, counts every streaming chunk |
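The disagreement in the table comes down to how each tool handles streamed responses: one request is logged as several JSONL events sharing a `requestId`, where intermediate chunks carry placeholder token counts and only the final chunk (with `stop_reason` set) has the real total. A toy illustration (chunk values are made up; the three functions model each tool's strategy, not their actual code):

```rust
use std::collections::HashSet;

pub struct Chunk {
    pub request_id: &'static str,
    pub output_tokens: u64,
    pub stop_reason: Option<&'static str>,
}

/// claudelytics-style: counts every streaming chunk (overcounts).
pub fn no_dedup(chunks: &[Chunk]) -> u64 {
    chunks.iter().map(|c| c.output_tokens).sum()
}

/// ccusage-style: first-seen-wins keeps the placeholder counts
/// from the earliest chunk of each request (undercounts).
pub fn first_seen_wins(chunks: &[Chunk]) -> u64 {
    let mut seen = HashSet::new();
    chunks
        .iter()
        .filter(|c| seen.insert(c.request_id))
        .map(|c| c.output_tokens)
        .sum()
}

/// ccmetrics-style: keep only the final chunk per request, which
/// carries the real token totals.
pub fn final_chunk(chunks: &[Chunk]) -> u64 {
    chunks
        .iter()
        .filter(|c| c.stop_reason.is_some())
        .map(|c| c.output_tokens)
        .sum()
}
```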
## Install
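The install command under this heading did not survive extraction; assuming the crate is published to crates.io under the name `ccmetrics`, the standard install would be:

```shell
cargo install ccmetrics
```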
Or build from source:
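The build commands are also missing; assuming a standard Cargo layout in a local checkout (the repository URL is not given in this README), the usual steps would be:

```shell
# from a local checkout of the repository
cargo build --release
# resulting binary: target/release/ccmetrics
```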
## Usage
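The example invocations under this heading were lost in extraction; a minimal sketch (`ccmetrics explain` appears verbatim in the feature list below; the bare invocation as the default report is an assumption):

```shell
# summary report over all sessions (assumed default behavior)
ccmetrics

# walk through dedup, pricing, and cache tiers on your own data
ccmetrics explain
```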
### Filters
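The filter examples here did not survive either; these flag names are HYPOTHETICAL stand-ins for the date, model, and project filters the feature list describes:

```shell
# hypothetical flag names -- the real ones may differ
ccmetrics --since 2025-01-01 --until 2025-01-31
ccmetrics --model claude-sonnet-4
ccmetrics --project my-app
```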
### Output options
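Likewise the output-option examples are gone; a HYPOTHETICAL sketch of the daily and session views the feature list mentions (subcommand names invented here):

```shell
# hypothetical subcommands for the daily and per-session views
ccmetrics daily
ccmetrics sessions
```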
## What makes it different
- Correct dedup -- groups by `requestId`, keeps the final chunk (`stop_reason != null`) with real token counts
- 5-type token split -- input, output, cache read, cache write 5m, cache write 1h (each at different pricing)
- Per-model breakdown -- token and cost split by model for independent verification
- Per-project breakdown -- usage grouped by project (shown when 2+ projects)
- Main vs subagent -- separates main-thread from subagent usage
- Daily and session views -- track usage over time, drill into individual sessions
- Streaming pipeline -- real-time progress with step summaries (scan, parse, dedup, filter, calculate)
- Date, model, and project filters -- slice data by time range, model, or project
- Pricing modifiers -- fast mode (6x), data residency (1.1x), long context (2x/1.5x)
- Explain mode -- `ccmetrics explain` walks through dedup, pricing, and cache tiers using your own data
- Abbreviated numbers -- large token counts display as 2.86B, 6.1M, 260K for readability
- No runtime -- single Rust binary, no network, no database
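The pricing modifiers in the list above can be read as multipliers on the base per-token rate. A minimal sketch under that assumption (the multiplier values are from the list; the names, and the combination-by-multiplication, are assumptions of this sketch, not ccmetrics' actual code):

```rust
/// Pricing modifiers from the feature list: fast mode 6x, data
/// residency 1.1x, long context 2x or 1.5x depending on tier.
pub struct Modifiers {
    pub fast_mode: bool,           // 6x when enabled
    pub data_residency: bool,      // 1.1x when enabled
    pub long_context: Option<f64>, // e.g. Some(2.0) or Some(1.5)
}

/// Apply the active modifiers to a base USD-per-million-token rate.
/// Combining them by plain multiplication is an assumption here.
pub fn effective_rate(base: f64, m: &Modifiers) -> f64 {
    let mut rate = base;
    if m.fast_mode {
        rate *= 6.0;
    }
    if m.data_residency {
        rate *= 1.1;
    }
    if let Some(mult) = m.long_context {
        rate *= mult;
    }
    rate
}
```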
## Docs
- PRD -- product requirements (v1.3)
- Architecture -- module layout, data flow
- Pricing -- embedded pricing table reference
- Research blog -- full analysis of why tools disagree
## License
MIT