# Contatori

High-performance sharded atomic counters for Rust.
A library providing thread-safe, high-performance counters optimized for highly concurrent workloads. This library implements a sharded counter pattern that dramatically reduces contention compared to traditional single atomic counters.
## The Problem
In multi-threaded applications, a naive approach to counting uses a single atomic variable shared across all threads. While this is correct, it creates a severe performance bottleneck: every increment operation causes cache line bouncing between CPU cores, as each core must acquire exclusive access to the cache line containing the counter.
This contention grows worse with more threads and higher update frequencies, turning what should be a simple operation into a major scalability bottleneck.
## The Solution: Sharded Counters
This library solves the contention problem by sharding counters across multiple slots (64 by default). Each thread is assigned to a specific slot, so threads updating the counter typically operate on different memory locations, eliminating contention.
## Design Principles

- **Per-Thread Sharding**: Each thread is assigned a slot index via `thread_local!`, ensuring that concurrent updates from different threads don't compete for the same cache line.
- **Cache Line Padding**: Each slot is wrapped in `CachePadded`, which adds padding so that each atomic value occupies its own cache line (typically 64 bytes). This prevents false sharing, where unrelated data on the same cache line causes unnecessary invalidations.
- **Relaxed Ordering**: All atomic operations use `Ordering::Relaxed`, since counters don't need to establish happens-before relationships with other memory operations. This leaves the CPU maximum room for optimization.
- **Aggregation on Read**: The global counter value is computed by summing all slots. This makes reads slightly more expensive but keeps writes extremely fast, which is the right trade-off for counters (many writes, few reads).
## Performance Benchmark

Benchmarked on an Apple M2 (8 cores) with 8 threads, each performing 1,000,000 increments (8 million operations in total):

```text
┌─────────────────────────────────────────────────────────────────────────────┐
│                        Counter Performance Comparison                       │
│                       (8 threads × 1,000,000 iterations)                    │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  AtomicUsize (single)  ████████████████████████████████████████  162.53 ms  │
│                                                                             │
│  Unsigned (sharded)    █  2.27 ms                                           │
│                                                                             │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  Speedup: 71.6x faster                                                      │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
```
The sharded counter is ~72x faster than a naive atomic counter under high contention. This difference grows with more threads and higher contention.
## Available Counter Types

| Type | Description | Use Case |
|---|---|---|
| `Unsigned` | Unsigned integer counter | Event counts, request totals |
| `Signed` | Signed integer counter | Gauges, balance tracking |
| `Minimum` | Tracks minimum observed value | Latency minimums |
| `Maximum` | Tracks maximum observed value | Latency maximums, peak values |
| `Average` | Computes running average | Average latency, mean values |
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
contatori = "0.3.0"
```
### Basic Usage

```rust
use contatori::Unsigned;
use contatori::Observable;

// Create a counter (can be shared across threads via Arc)
let counter = Unsigned::new().with_name("requests");

// Increment from any thread - extremely fast!
counter.add(1);
counter.add(5);

// Read the total value (aggregates all shards)
println!("total = {}", counter.value());

// Read and reset atomically
let total = counter.value_and_reset();
```
### Multi-threaded Usage

```rust
use contatori::Unsigned;
use contatori::Observable;
use std::sync::Arc;
use std::thread;

let counter = Arc::new(Unsigned::new());
let mut handles = vec![];

for _ in 0..8 {
    let counter = Arc::clone(&counter);
    handles.push(thread::spawn(move || {
        for _ in 0..1_000_000 {
            counter.add(1);
        }
    }));
}

for h in handles {
    h.join().unwrap();
}

assert_eq!(counter.value(), 8_000_000);
```
### Tracking Statistics

```rust
use contatori::Minimum;
use contatori::Maximum;
use contatori::Average;
use contatori::Observable;

let min_latency = Minimum::new().with_name("min_latency");
let max_latency = Maximum::new().with_name("max_latency");
let avg_latency = Average::new().with_name("avg_latency");

// Record some latencies (recording method and sample values are
// illustrative; values chosen to match the outputs below)
for latency in [85, 100, 130, 135, 200] {
    min_latency.observe(latency);
    max_latency.observe(latency);
    avg_latency.observe(latency);
}

println!("{}", min_latency.value()); // 85
println!("{}", max_latency.value()); // 200
println!("{}", avg_latency.value()); // 130
```
## Thread Safety

All counter types are `Send + Sync` and can be safely shared across threads using `Arc<Counter>`. The sharding ensures that concurrent updates remain efficient.
## Memory Usage

Each counter uses approximately 4 KB of memory (64 slots × 64 bytes per cache line). This is a trade-off: more memory for dramatically better performance under contention.
## When to Use

Use these counters when:
- Multiple threads frequently update the same counter
- Write performance is more important than read performance
- You're tracking metrics, statistics, or telemetry data
For single-threaded scenarios or rarely updated counters, a plain `AtomicUsize` may be more appropriate due to its lower memory overhead.
## Running Benchmarks
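The command snippet is missing from this copy of the README; assuming a standard Cargo benchmark setup, the usual invocation is:

```shell
cargo bench
```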
## Running Tests
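Likewise assuming a standard Cargo layout:

```shell
cargo test
```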
## License
MIT
## Architecture Diagram

```text
                      ┌─────────────────────────────────────┐
                      │          Counter Structure          │
                      ├─────────────────────────────────────┤
Thread 0  ──writes──► │ [Slot 0]  ████████  (CachePadded)   │
Thread 1  ──writes──► │ [Slot 1]  ████████  (CachePadded)   │
Thread 2  ──writes──► │ [Slot 2]  ████████  (CachePadded)   │
   ...                │   ...                               │
Thread 63 ──writes──► │ [Slot 63] ███████   (CachePadded)   │
                      └─────────────────────────────────────┘
                                        │
                                        ▼
                              value() aggregates
                              all slots on read
```