Contatori
High-performance sharded atomic counters for Rust.
A library providing thread-safe, high-performance counters optimized for highly concurrent workloads. This library implements a sharded counter pattern that dramatically reduces contention compared to traditional single atomic counters.
The Problem
In multi-threaded applications, a naive approach to counting uses a single atomic variable shared across all threads. While this is correct, it creates a severe performance bottleneck: every increment operation causes cache line bouncing between CPU cores, as each core must acquire exclusive access to the cache line containing the counter.
This contention grows worse with more threads and higher update frequencies, turning what should be a simple operation into a major scalability bottleneck.
The Solution: Sharded Counters
This library solves the contention problem by sharding counters across multiple slots (64 by default). Each thread is assigned to a specific slot, so threads updating the counter typically operate on different memory locations, eliminating contention.
Design Principles
- Per-Thread Sharding: Each thread is assigned a slot index via `thread_local!`, so concurrent updates from different threads don't compete for the same cache line.
- Cache Line Padding: Each slot is wrapped in `CachePadded`, which adds padding so that each atomic value occupies its own cache line (typically 64 bytes). This prevents false sharing, where unrelated data on the same cache line causes unnecessary invalidations.
- Relaxed Ordering: All atomic operations use `Ordering::Relaxed`, since counters don't need to establish happens-before relationships with other memory operations. This allows maximum optimization by the CPU.
- Aggregation on Read: The global counter value is computed by summing all slots. This makes reads slightly more expensive but keeps writes extremely fast, which is the right trade-off for counters (many writes, few reads).
Performance Benchmark
Single Counter: Sharded vs AtomicUsize
Benchmarked on Apple M2 (8 cores) with 8 threads, each performing 1,000,000 increments (8 million total operations):
┌─────────────────────────────────────────────────────────────────────────────┐
│ Counter Performance Comparison │
│ (8 threads × 1,000,000 iterations) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ AtomicUsize (single) ████████████████████████████████████████ 162.53 ms │
│ │
│ Unsigned (sharded) █ 2.27 ms │
│ │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Speedup: 71.6x faster │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
The sharded counter is ~72x faster than a naive atomic counter under high contention. This difference grows with more threads and higher contention.
Contatori vs OpenTelemetry
Benchmarked on Apple M2 (8 cores) with 8 threads, each performing 100,000 increments:
┌─────────────────────────────────────────────────────────────────────────────┐
│ Counter Performance: Contatori vs OpenTelemetry │
│ (8 threads × 100,000 iterations) │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Simple counter (no labels): │
│ │
│ OpenTelemetry Counter ████████████████████████████████████████ 25.81 ms │
│ │
│ contatori Monotone █ 0.33 ms │
│ │
│ Speedup: 79x faster │
│ │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ Labeled counters (rotating GET/POST/PUT/DELETE): │
│ │
│ OpenTelemetry Counter ████████████████████████████████████████ 356.46 ms │
│ │
│ cont. labeled_group! ▏ 0.21 ms │
│ │
│ Speedup: 1665x faster │
│ │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ High contention (all threads same label): │
│ │
│ OpenTelemetry Counter ████████████████████████████████████████ 350.45 ms │
│ │
│ cont. labeled_group! ▏ 0.32 ms │
│ │
│ Speedup: 1093x faster │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Contatori is 79x to ~1600x faster than OpenTelemetry counters depending on usage pattern. This massive difference comes from:
- Zero runtime overhead: Labels are resolved at compile time
- Sharded storage: Each sub-counter uses the same sharding strategy
- No dynamic dispatch: Direct field access instead of hash lookups
Available Counter Types
| Type | Description | Use Case | MetricKind |
|---|---|---|---|
| `Monotone` | Monotonically increasing counter (never resets) | Prometheus counters, total requests | Counter |
| `Unsigned` | Unsigned integer counter | Event counts, request totals | Gauge |
| `Signed` | Signed integer counter | Gauges, balance tracking | Gauge |
| `Minimum` | Tracks minimum observed value | Latency minimums | Gauge |
| `Maximum` | Tracks maximum observed value | Latency maximums, peak values | Gauge |
| `Average` | Computes running average | Average latency, mean values | Gauge |
| `Rate` | Calculates rate of change (units/second) | Request rates, throughput | Gauge |
Quick Start
Add to your Cargo.toml:
```toml
[dependencies]
contatori = "0.7"
```
Basic Usage
```rust
use contatori::Unsigned;
use contatori::Observable;

// Create a counter (can be shared across threads via Arc)
let counter = Unsigned::new().with_name("my_counter");

// Increment from any thread - extremely fast! (example amounts)
counter.add(2);
counter.add(4);

// Read the total value (aggregates all shards)
println!("{}", counter.value()); // 6

// value() does NOT reset the counter - it just reads
println!("{}", counter.value()); // Still 6
```
Resettable Counters
To reset a counter when reading (useful for per-period metrics), wrap it with Resettable:
```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::Resettable;

// Create a resettable counter for per-period metrics
let requests_per_second = Resettable::new(Unsigned::new());

requests_per_second.add(100);

// value() returns the value AND resets the counter
let count = requests_per_second.value();
println!("{}", count); // 100
println!("{}", requests_per_second.value()); // 0
```
Multi-threaded Usage
```rust
use contatori::Unsigned;
use contatori::Observable;
use std::sync::Arc;
use std::thread;

let counter = Arc::new(Unsigned::new());
let mut handles = vec![];
for _ in 0..8 {
    let c = Arc::clone(&counter);
    handles.push(thread::spawn(move || for _ in 0..1_000 { c.add(1) }));
}
for h in handles { h.join().unwrap(); }
assert_eq!(counter.value(), 8_000);
```
Tracking Statistics
```rust
use contatori::Minimum;
use contatori::Maximum;
use contatori::Average;
use contatori::Observable;

let min_latency = Minimum::new().with_name("min_latency");
let max_latency = Maximum::new().with_name("max_latency");
let avg_latency = Average::new().with_name("avg_latency");

// Record some latencies (illustrative values)
for latency in [120, 85, 200, 95, 150] {
    min_latency.add(latency);
    max_latency.add(latency);
    avg_latency.add(latency);
}

println!("{}", min_latency.value()); // 85
println!("{}", max_latency.value()); // 200
println!("{}", avg_latency.value()); // 130
```
Thread Safety
All counter types are Send + Sync and can be safely shared across threads using Arc<Counter>. The sharding ensures that concurrent updates are efficient.
Memory Usage
Each counter uses approximately 4KB of memory (64 slots × 64 bytes per cache line). This is a trade-off: more memory for dramatically better performance under contention.
Serialization & Observers
The library provides optional modules for serializing and exporting counter values in various formats. Each module is gated behind a feature flag:
| Feature | Module | Description |
|---|---|---|
| `serde` | `snapshot` | Serializable snapshot types (use with any serde format) |
| `table` | `observers::table` | Renders counters as ASCII tables |
| `json` | `observers::json` | Serializes counters to JSON (includes `serde`) |
| `prometheus` | `observers::prometheus` | Exports in Prometheus exposition format |
| `full` | All modules | Enables all observer modules |
Snapshot Module
The snapshot module provides serializable types that work with any serde-compatible format (JSON, YAML, TOML, bincode, etc.).
```toml
[dependencies]
contatori = { version = "0.7", features = ["serde"] }
```

```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::snapshot::collect;

let requests = Unsigned::new().with_name("requests");
let errors = Unsigned::new().with_name("errors");
requests.add(1000);
errors.add(5);

let counters: Vec<&dyn Observable> = vec![&requests, &errors];

// Collect snapshots
let snapshot = collect(&counters);

// Serialize with any serde-compatible format
let json = serde_json::to_string(&snapshot).unwrap();
let yaml = serde_yaml::to_string(&snapshot).unwrap();
let bytes = bincode::serialize(&snapshot).unwrap();
```
TableObserver
Renders counters as formatted ASCII tables using the tabled crate.
```toml
[dependencies]
contatori = { version = "0.7", features = ["table"] }
```

```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::observers::table::{TableObserver, TableStyle};

let requests = Unsigned::new().with_name("requests");
let errors = Unsigned::new().with_name("errors");
requests.add(1000);
errors.add(5);

let counters: Vec<&dyn Observable> = vec![&requests, &errors];

// Standard format (vertical list)
let observer = TableObserver::new().with_style(TableStyle::Rounded);
println!("{}", observer.render(&counters));
// ╭──────────┬───────╮
// │ Name     │ Value │
// ├──────────┼───────┤
// │ requests │ 1000  │
// │ errors   │ 5     │
// ╰──────────┴───────╯

// Compact format (multiple columns)
let latency = Unsigned::new().with_name("latency");
latency.add(120);
let observer = TableObserver::new()
    .compact(true)
    .columns(3);
println!("{}", observer.render(&[&requests, &errors, &latency]));
// ╭────────────────┬───────────┬──────────────╮
// │ requests: 1000 │ errors: 5 │ latency: 120 │
// ╰────────────────┴───────────┴──────────────╯
```
Available styles: Ascii, Rounded, Sharp, Modern, Extended, Markdown, ReStructuredText, Dots, Blank, Double
Compact separators: Colon (:), Equals (=), Arrow (→), Pipe (|), Space
TableObserver Configuration
| Method | Description |
|---|---|
| `with_style(TableStyle)` | Sets the table border style |
| `with_header(bool)` | Shows or hides the header row |
| `with_title(String)` | Adds a title above the table |
| `compact(bool)` | Enables compact horizontal layout |
| `columns(usize)` | Number of columns in compact mode |
| `separator(CompactSeparator)` | Separator between name and value in compact mode |
| `render(iter)` | Renders the counters to a string |
Note: To reset counters when rendering, wrap them with Resettable.
JsonObserver
Serializes counters to JSON format using serde.
```toml
[dependencies]
contatori = { version = "0.7", features = ["json"] }
```

```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::observers::json::JsonObserver;

let requests = Unsigned::new().with_name("requests");
let errors = Unsigned::new().with_name("errors");
requests.add(1000);
errors.add(5);

let counters: Vec<&dyn Observable> = vec![&requests, &errors];

// Simple array output
let json = JsonObserver::new()
    .to_json(&counters)
    .unwrap();

// Pretty-printed output with timestamp wrapper
let json = JsonObserver::new()
    .pretty(true)
    .wrap_in_snapshot(true)
    .include_timestamp(true)
    .to_json(&counters)
    .unwrap();
```
JsonObserver Configuration
| Method | Description |
|---|---|
| `pretty(bool)` | Enables pretty-printed JSON output |
| `wrap_in_snapshot(bool)` | Wraps output in a `MetricsSnapshot` object |
| `include_timestamp(bool)` | Includes a timestamp in the snapshot (requires `wrap_in_snapshot`) |
| `to_json(iter)` | Serializes counters to a JSON string |
| `collect(iter)` | Returns a `Vec<CounterSnapshot>` for custom processing |
Note: To reset counters when serializing, wrap them with Resettable.
PrometheusObserver
Exports counters in Prometheus exposition format using the official prometheus crate.
```toml
[dependencies]
contatori = { version = "0.7", features = ["prometheus"] }
```
Automatic Metric Type Detection
The observer automatically determines the correct Prometheus metric type based on the counter's metric_kind() method:
| Counter Type | MetricKind | Prometheus Type |
|---|---|---|
| `Monotone` | Counter | Counter |
| `Unsigned` | Gauge | Gauge |
| `Signed` | Gauge | Gauge |
| `Minimum` | Gauge | Gauge |
| `Maximum` | Gauge | Gauge |
| `Average` | Gauge | Gauge |
This means you don't need to manually specify types for most use cases:
```rust
use contatori::Monotone;
use contatori::Signed;
use contatori::{MetricKind, Observable};
use contatori::observers::prometheus::PrometheusObserver;

// Monotone returns MetricKind::Counter, auto-detected as Prometheus Counter
let requests = Monotone::new().with_name("http_requests_total");
assert_eq!(requests.metric_kind(), MetricKind::Counter);
requests.add(1);

// Signed returns MetricKind::Gauge, auto-detected as Prometheus Gauge
let connections = Signed::new().with_name("active_connections");
assert_eq!(connections.metric_kind(), MetricKind::Gauge);
connections.add(1);

let counters: Vec<&dyn Observable> = vec![&requests, &connections];

let observer = PrometheusObserver::new()
    .with_namespace("myapp")
    .with_help("http_requests_total", "Total HTTP requests")
    .with_help("active_connections", "Currently active connections");

let output = observer.render(&counters).unwrap();
// Output will have:
// # TYPE myapp_http_requests_total counter
// # TYPE myapp_active_connections gauge
```
Manual Type Override
You can override the auto-detected type using with_type():
```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::observers::prometheus::{MetricType, PrometheusObserver};

let requests = Unsigned::new().with_name("requests_total");
requests.add(1);

let counters: Vec<&dyn Observable> = vec![&requests];

// Force Unsigned to be exported as Counter instead of Gauge
let observer = PrometheusObserver::new()
    .with_type("requests_total", MetricType::Counter);

let output = observer.render(&counters).unwrap();
```
PrometheusObserver Configuration
| Method | Description |
|---|---|
| `with_namespace(str)` | Sets a prefix for all metric names (e.g., `myapp_`) |
| `with_subsystem(str)` | Sets a subsystem between the namespace and the metric name |
| `with_const_label(name, value)` | Adds a constant label to all metrics |
| `with_type(name, MetricType)` | Overrides the auto-detected metric type (Counter or Gauge) |
| `with_help(name, text)` | Sets the help text for a specific metric |
| `render(iter)` | Renders counters to Prometheus exposition format |
Note: To reset counters when rendering, wrap them with Resettable.
Metric Types
| Prometheus Type | Description | Auto-detected from MetricKind |
|---|---|---|
| `MetricType::Counter` | Cumulative metric that only goes up | `MetricKind::Counter` (Monotone) |
| `MetricType::Gauge` | Value that can go up and down | `MetricKind::Gauge` (all other counters) |
Adapters
The library provides adapter types that add additional behavior to counters while maintaining compatibility with the Observable trait.
| Wrapper/Macro | Description |
|---|---|
| `Resettable` | Resets the counter when `value()` is called - for periodic metrics |
| `labeled_group!` | Creates a struct of counters with a shared metric name and different labels |
Resettable
Wraps a counter to reset it when value() is called. Useful for evaluating metrics over observation periods (e.g., requests per second, errors per minute).
```rust
use contatori::Unsigned;
use contatori::Observable;
use contatori::Resettable;

let requests = Resettable::new(Unsigned::new());
requests.add(100);

// value() returns the value AND resets the counter
assert_eq!(requests.value(), 100);
assert_eq!(requests.value(), 0); // Reset to 0!

requests.add(25);
assert_eq!(requests.value(), 25); // Just this period
```
Regular counters (without Resettable) keep their value across reads:
```rust
use contatori::Unsigned;
use contatori::Observable;

let total = Unsigned::new().with_name("total_requests");
total.add(100);

// value() just reads, does NOT reset
assert_eq!(total.value(), 100);
assert_eq!(total.value(), 100); // Still 100!
```
Rate Counter
The Rate counter calculates the rate of change (units per second) over time. It's useful for tracking throughput, request rates, or any metric where you need to know "how fast" something is happening.
```rust
use contatori::Rate;
use contatori::Observable;
use std::thread::sleep;
use std::time::Duration;

// Can be used as a static
static REQUESTS: Rate = Rate::new().with_name("requests");

// Increment like a normal counter (example amounts)
REQUESTS.add(2);
REQUESTS.add(4);

// Get the absolute count
println!("{}", REQUESTS.value()); // 6

// Get the rate (units per second)
// First call returns 0.0 and establishes the baseline
let rate1 = REQUESTS.rate(); // 0.0

// Add more and wait
REQUESTS.add(1000);
sleep(Duration::from_secs(1));

// Now rate() returns the actual rate
let rate2 = REQUESTS.rate(); // ~1000.0 per second
```
The Rate counter:
- Uses sharded storage like all other counters (high performance)
- Can be initialized in const context (`static RATE: Rate = Rate::new()`)
- Returns `MetricKind::Gauge` (rates can go up or down)
- Exports as float values in Prometheus
Labeled Group
The labeled_group! macro creates a struct containing multiple counters that share a metric name but have different label values. This is the recommended way to track metrics with labels (e.g., HTTP requests by method).
```rust
use contatori::labeled_group;
use contatori::Unsigned;
use contatori::Observable;

// Define a labeled group (sketch - the exact macro syntax was lost here;
// it declares a HttpRequests struct with metric name "http_requests",
// label "method", a `total` field, and one field per label value)
labeled_group! {
    HttpRequests, "http_requests", "method" {
        total,
        get: "GET",
        post: "POST",
        put: "PUT",
        delete: "DELETE",
    }
}

// Can be used as a static
static HTTP: HttpRequests = HttpRequests::new();

// Direct field access for incrementing
HTTP.total.add(1);
HTTP.get.add(1);

// Observers automatically expand the group:
// http_requests 1 (no label - the total)
// http_requests{method="GET"} 1
// http_requests{method="POST"} 0
// http_requests{method="PUT"} 0
// http_requests{method="DELETE"} 0
```
The expand() method on Observable returns all sub-counters with their labels, which observers use automatically.
When to Use Sharded Counters
Sharded counters are ideal when:
- Multiple threads frequently update the same counter
- Write performance is more important than read performance
- You're tracking metrics, statistics, or telemetry data
For single-threaded scenarios or rarely-updated counters, a simple AtomicUsize may be more appropriate due to lower memory overhead.
Running Benchmarks
Assuming the standard Cargo layout, run `cargo bench` to reproduce the numbers above.
Running Tests
Run `cargo test --all-features` to exercise the library and all observer modules.
License
MIT
Architecture Diagram
┌─────────────────────────────────────┐
│ Counter Structure │
├─────────────────────────────────────┤
Thread 0 ──writes──► │ [Slot 0] ████████ (CachePadded) │
Thread 1 ──writes──► │ [Slot 1] ████████ (CachePadded) │
Thread 2 ──writes──► │ [Slot 2] ████████ (CachePadded) │
... │ ... │
Thread 63 ─writes──► │ [Slot 63] ███████ (CachePadded) │
└─────────────────────────────────────┘
│
▼
value() aggregates
all slots on read