# ferroid

`ferroid` is a Rust crate for generating and parsing Snowflake-style unique IDs. It supports pre-built layouts for platforms like Twitter, Discord, Instagram, and Mastodon. These IDs are 64-bit integers that encode timestamps, machine/shard IDs, and sequence numbers, making them lexicographically sortable, scalable, and ideal for distributed systems.
## Features

- 📌 Bit-level layout compatibility with major Snowflake formats
- 🧩 Pluggable time sources via the `TimeSource` trait
- 🧵 Lock-based and lock-free thread-safe ID generation
- 📐 Customizable layouts via the `Snowflake` trait
- 🔢 Lexicographically sortable string encoding
## 📦 Supported Layouts

| Platform  | Timestamp Bits | Machine ID Bits | Sequence Bits | Epoch                   |
| --------- | -------------- | --------------- | ------------- | ----------------------- |
| Twitter   | 41             | 10              | 12            | 2010-11-04 01:42:54.657 |
| Discord   | 42             | 10              | 12            | 2015-01-01 00:00:00.000 |
| Instagram | 41             | 13              | 10            | 2011-01-01 00:00:00.000 |
| Mastodon  | 48             | 0               | 16            | 1970-01-01 00:00:00.000 |
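To make the table concrete, here is how a Twitter-style layout packs into 64 bits with plain shifts and masks. This is an illustrative standalone sketch (the `pack_twitter`/`unpack_twitter` helpers are not part of `ferroid`'s API):

```rust
/// Pack a Twitter-layout Snowflake by hand: 1 reserved bit (zero),
/// 41 timestamp bits, 10 machine-ID bits, 12 sequence bits.
/// Illustrative only; `ferroid` manages these fields for you.
fn pack_twitter(timestamp_ms: u64, machine_id: u64, sequence: u64) -> u64 {
    assert!(timestamp_ms < (1 << 41));
    assert!(machine_id < (1 << 10));
    assert!(sequence < (1 << 12));
    (timestamp_ms << 22) | (machine_id << 12) | sequence
}

/// Recover (timestamp, machine ID, sequence) from a packed ID.
fn unpack_twitter(id: u64) -> (u64, u64, u64) {
    (id >> 22, (id >> 12) & 0x3FF, id & 0xFFF)
}

fn main() {
    let id = pack_twitter(123_456, 1, 42);
    assert_eq!(unpack_twitter(id), (123_456, 1, 42));
    // The timestamp occupies the most significant bits, so IDs with later
    // timestamps always compare greater: this is what makes them sortable.
    assert!(pack_twitter(123_457, 0, 0) > id);
    println!("{id}");
}
```

Because the timestamp sits in the high bits, plain integer comparison orders IDs by creation time first, then by machine and sequence.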
## 🔧 Generator Comparison

| Generator                  | Thread-Safe | Lock-Free | Throughput | Use Case                                                                        |
| -------------------------- | ----------- | --------- | ---------- | ------------------------------------------------------------------------------- |
| `BasicSnowflakeGenerator`  | ❌          | ❌        | Highest    | Single-threaded, zero contention; ideal for sharded/core-local generators       |
| `LockSnowflakeGenerator`   | ✅          | ❌        | Medium     | Multi-threaded workloads where fair access across threads is important          |
| `AtomicSnowflakeGenerator` | ✅          | ✅        | High       | Multi-threaded workloads where fair access is sacrificed for higher throughput  |

All generators produce monotonically increasing, time-ordered, unique IDs.
## 🚀 Usage

### Generate an ID

#### Synchronous
Calling `next_id()` may yield `Pending` if the current sequence is exhausted. In
that case, you can spin, yield, or sleep depending on your environment. A
typical loop looks like this (item names such as `MonotonicClock` and
`TWITTER_EPOCH` follow the crate's public API; check the docs for your version):

```rust
use ferroid::{
    BasicSnowflakeGenerator, IdGenStatus, MonotonicClock, SnowflakeTwitterId, TWITTER_EPOCH,
};

let clock = MonotonicClock::with_epoch(TWITTER_EPOCH);
let mut generator = BasicSnowflakeGenerator::new(0, clock);

let id: SnowflakeTwitterId = loop {
    match generator.next_id() {
        IdGenStatus::Ready { id } => break id,
        // Sequence exhausted for this tick: spin, yield, or sleep
        IdGenStatus::Pending { .. } => std::thread::yield_now(),
    }
};
println!("Generated ID: {}", id);
```
#### Asynchronous

If you're in an async context (e.g., using Tokio or Smol), you can enable one of the following features:

- `async-tokio` - for integration with the Tokio runtime
- `async-smol` - for integration with the Smol runtime

Then, import the corresponding `SnowflakeGeneratorAsyncTokioExt` or
`SnowflakeGeneratorAsyncSmolExt` trait to asynchronously request a new ID
without blocking or spinning.
##### Tokio Example

A sketch of the pattern (the async method is shown here as `try_next_id_async`; confirm the exact name against the crate docs):

```rust
use ferroid::{
    AtomicSnowflakeGenerator, MonotonicClock, SnowflakeGeneratorAsyncTokioExt,
    SnowflakeTwitterId, TWITTER_EPOCH,
};

#[tokio::main]
async fn main() {
    let clock = MonotonicClock::with_epoch(TWITTER_EPOCH);
    let generator = AtomicSnowflakeGenerator::new(0, clock);

    // Awaits instead of spinning when the sequence is exhausted
    let id: SnowflakeTwitterId = generator.try_next_id_async().await.expect("generation failed");
    println!("Generated ID: {}", id);
}
```
##### Smol Example

The same pattern on the Smol runtime (again a sketch; verify trait and method names in the crate docs):

```rust
use ferroid::{
    AtomicSnowflakeGenerator, MonotonicClock, SnowflakeGeneratorAsyncSmolExt,
    SnowflakeTwitterId, TWITTER_EPOCH,
};

fn main() {
    smol::block_on(async {
        let clock = MonotonicClock::with_epoch(TWITTER_EPOCH);
        let generator = AtomicSnowflakeGenerator::new(0, clock);

        let id: SnowflakeTwitterId = generator.try_next_id_async().await.expect("generation failed");
        println!("Generated ID: {}", id);
    });
}
```
### Custom Layouts

To define a custom Snowflake layout, use the `define_snowflake_id` macro (the argument syntax below is a sketch; see the macro's documentation for the exact form):

```rust
use ferroid::define_snowflake_id;

// Example: a 64-bit Twitter-like ID layout
//
// Bit Index:  63 63 62          22 21            12 11            0
//            +--------------+----------------+-----------------+---------------+
// Field:     | reserved (1) | timestamp (41) | machine ID (10) | sequence (12) |
//            +--------------+----------------+-----------------+---------------+
//            |<----------- MSB ---------- 64 bits ----------- LSB ------------>|
define_snowflake_id!(
    MyCustomId, u64,
    reserved: 1,
    timestamp: 41,
    machine_id: 10,
    sequence: 12
);

// Example: a 128-bit extended ID layout
//
// Bit Index:  127              88 87             40 39            20 19           0
//            +--------------------+----------------+-----------------+---------------+
// Field:     | reserved (40 bits) | timestamp (48) | machine ID (20) | sequence (20) |
//            +--------------------+----------------+-----------------+---------------+
//            |<--- HI 64 bits --->|<------------------- LO 64 bits ----------------->|
//            |<- MSB ------ LSB ->|<----- MSB ---------- 64 bits --------- LSB ----->|
define_snowflake_id!(
    MyExtendedId, u128,
    reserved: 40,
    timestamp: 48,
    machine_id: 20,
    sequence: 20
);
```
**Note**: All four sections (`reserved`, `timestamp`, `machine_id`, and `sequence`) must be specified in the macro, even if a section uses 0 bits. `reserved` bits are always stored as zero and can be used for future expansion.
### Behavior

- If the clock advances: reset sequence to 0 → `IdGenStatus::Ready`
- If the clock is unchanged: increment sequence → `IdGenStatus::Ready`
- If the clock goes backward: return `IdGenStatus::Pending`
- If the sequence overflows: return `IdGenStatus::Pending`
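These rules form a small state machine. The following is a standalone model of that logic (not `ferroid`'s actual implementation; the `Status` enum and `State` struct are illustrative):

```rust
#[derive(Debug, PartialEq)]
enum Status {
    Ready { timestamp: u64, sequence: u64 },
    Pending,
}

struct State {
    last_timestamp: u64,
    sequence: u64,
    max_sequence: u64,
}

impl State {
    /// Apply the behavior rules for one ID request at time `now`.
    fn tick(&mut self, now: u64) -> Status {
        if now > self.last_timestamp {
            // Clock advanced: reset sequence to 0
            self.last_timestamp = now;
            self.sequence = 0;
            Status::Ready { timestamp: now, sequence: 0 }
        } else if now == self.last_timestamp && self.sequence < self.max_sequence {
            // Clock unchanged: increment sequence
            self.sequence += 1;
            Status::Ready { timestamp: now, sequence: self.sequence }
        } else {
            // Clock went backward, or sequence overflowed: caller must wait
            Status::Pending
        }
    }
}

fn main() {
    let mut s = State { last_timestamp: 0, sequence: 0, max_sequence: 2 };
    assert_eq!(s.tick(1), Status::Ready { timestamp: 1, sequence: 0 });
    assert_eq!(s.tick(1), Status::Ready { timestamp: 1, sequence: 1 });
    assert_eq!(s.tick(1), Status::Ready { timestamp: 1, sequence: 2 });
    assert_eq!(s.tick(1), Status::Pending); // sequence exhausted
    assert_eq!(s.tick(0), Status::Pending); // clock went backward
    assert_eq!(s.tick(2), Status::Ready { timestamp: 2, sequence: 0 });
    println!("ok");
}
```

Never handing out an ID when the clock runs backward is what preserves the monotonic, time-ordered guarantee.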
### Serialize as a padded string

Use `.to_padded_string()` or `.encode()` (enabled with the `base32` feature) for
sortable representations (the `Base32SnowExt` import below reflects the crate's base32 extension trait; check the docs for the exact name in your version):

```rust
use ferroid::{Base32SnowExt, SnowflakeTwitterId};

let id = SnowflakeTwitterId::from(123456, 1, 42);
println!("default: {}", id);
// > default: 517811998762

println!("padded: {}", id.to_padded_string());
// > padded: 00000000517811998762

let encoded = id.encode();
println!("base32: {}", encoded);
// > base32: 00000Y4G0082M

let decoded = SnowflakeTwitterId::decode(&encoded).expect("decode failed");
assert_eq!(id, decoded);
```
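Why the base32 form stays sortable: every 64-bit ID encodes to the same number of characters, and the alphabet is in ascending order, so lexicographic string comparison matches numeric comparison. The standalone sketch below assumes a Crockford base32 alphabet with MSB-first 5-bit grouping (13 characters for 64 bits), which reproduces the example value above; `ferroid`'s actual encoder may differ in detail:

```rust
// Crockford base32 alphabet: ascending, excludes I, L, O, U.
const ALPHABET: &[u8; 32] = b"0123456789ABCDEFGHJKMNPQRSTVWXYZ";

/// Encode a u64 as 13 fixed-width base32 characters (13 * 5 = 65 bits).
/// The value is padded with one zero bit on the LSB side, equivalent to
/// grouping the 64 bits into 5-bit chunks starting from the MSB.
fn encode_base32(n: u64) -> String {
    let mut m = (n as u128) << 1;
    let mut buf = [b'0'; 13];
    for slot in buf.iter_mut().rev() {
        *slot = ALPHABET[(m & 0x1F) as usize];
        m >>= 5;
    }
    String::from_utf8(buf.to_vec()).unwrap()
}

fn main() {
    // Matches the README example value.
    assert_eq!(encode_base32(517_811_998_762), "00000Y4G0082M");
    // Fixed width + ordered alphabet => string order == numeric order.
    assert!(encode_base32(1) < encode_base32(2));
    assert!(encode_base32(999) < encode_base32(1_000));
    assert!(encode_base32(u64::MAX - 1) < encode_base32(u64::MAX));
    println!("{}", encode_base32(517_811_998_762));
}
```

Variable-width encodings (like plain decimal without padding) lose this property, which is why the padded and base32 forms exist.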
## 📈 Benchmarks

`ferroid` ships with Criterion benchmarks to measure ID generation performance. The published results include a snapshot of peak single-core throughput on a MacBook Pro 14" M1 (8 performance + 2 efficiency cores), measured under ideal conditions where the generator never yields; these numbers reflect the upper bound of real-clock performance. Equivalent theoretical maximum throughput is also measured in an async context using the Tokio and Smol runtimes.

To run all benchmarks, use Criterion's standard entry point: `cargo bench`.
**NOTE**: Shared generators (like `LockSnowflakeGenerator` and
`AtomicSnowflakeGenerator`) can slow down under high thread contention. This
happens because threads must coordinate access, either through mutex locks or
atomic compare-and-swap (CAS) loops, which introduces overhead.

For maximum throughput, avoid sharing. Instead, give each thread its own generator instance. This eliminates contention and lets every thread issue IDs independently at full speed.

The thread-safe generators are primarily for convenience, or for use cases where ID generation is not expected to be the performance bottleneck.
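The per-thread pattern can be sketched without the crate at all. Here each thread owns its state and a distinct machine ID, so the ID spaces stay disjoint with no coordination (the `pack` helper is a hypothetical stand-in; real code would give each thread its own `BasicSnowflakeGenerator`):

```rust
use std::collections::HashSet;
use std::thread;

/// Hypothetical stand-in for a generator: packs a Twitter-style ID
/// (41 timestamp / 10 machine / 12 sequence bits) from local state.
fn pack(timestamp: u64, machine_id: u64, sequence: u64) -> u64 {
    (timestamp << 22) | (machine_id << 12) | sequence
}

fn main() {
    // One "generator" per thread, each with a unique machine ID.
    let handles: Vec<_> = (0..4u64)
        .map(|machine_id| {
            thread::spawn(move || {
                // Thread-local sequence counter: no locks, no CAS loops.
                (0..1000u64)
                    .map(|seq| pack(1, machine_id, seq))
                    .collect::<Vec<u64>>()
            })
        })
        .collect();

    let mut all = HashSet::new();
    for handle in handles {
        for id in handle.join().unwrap() {
            // Distinct machine IDs keep each thread's ID space disjoint.
            assert!(all.insert(id), "duplicate ID generated");
        }
    }
    assert_eq!(all.len(), 4000);
    println!("{} unique ids", all.len());
}
```

The same idea applies per shard or per core: as long as machine IDs are unique, independent generators can never collide.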
## 🧪 Testing

Run all tests with `cargo test`.
## 📄 License

Licensed under either of:

- Apache License, Version 2.0
- MIT License

at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.