# ferroid

`ferroid` is a Rust crate for generating and parsing Snowflake and ULID identifiers.
## Features

- Bit-level compatibility with major Snowflake and ULID formats
- Pluggable clocks and RNGs via `TimeSource` and `RandSource`
- Lock-free, lock-based, and single-threaded generators
- Custom layouts via the `define_snowflake_id!` and `define_ulid!` macros
- Crockford Base32 support with the `base32` feature flag
## Supported Layouts

### Snowflake

| Platform | Timestamp Bits | Machine ID Bits | Sequence Bits | Epoch |
|---|---|---|---|---|
| Twitter | 41 | 10 | 12 | 2010-11-04 01:42:54.657 |
| Discord | 42 | 10 | 12 | 2015-01-01 00:00:00.000 |
| Instagram | 41 | 13 | 10 | 2011-01-01 00:00:00.000 |
| Mastodon | 48 | 0 | 16 | 1970-01-01 00:00:00.000 |
### Ulid

| Platform | Timestamp Bits | Random Bits | Epoch |
|---|---|---|---|
| ULID | 48 | 80 | 1970-01-01 00:00:00.000 |
## Generator Comparison

| Snowflake Generator | Monotonic | Thread-Safe | Lock-Free | Throughput | Use Case |
|---|---|---|---|---|---|
| `BasicSnowflakeGenerator` | ✅ | ❌ | ❌ | Highest | Single-threaded or generator per thread |
| `LockSnowflakeGenerator` | ✅ | ✅ | ❌ | Medium | Fair multithreaded access |
| `AtomicSnowflakeGenerator` | ✅ | ✅ | ✅ | High | Fast concurrent generation (less fair) |

| Ulid Generator | Monotonic | Thread-Safe | Lock-Free | Throughput | Use Case |
|---|---|---|---|---|---|
| `BasicUlidGenerator` | ❌ | ✅ | ✅ | Slow | Thread-safe, always random, but slow |
| `BasicMonoUlidGenerator` | ✅ | ❌ | ❌ | Highest | Single-threaded or generator per thread |
| `LockMonoUlidGenerator` | ✅ | ✅ | ❌ | High | Fair multithreaded access |
## Usage

### Thread Locals

The simplest way to generate a ULID is via `Ulid`, which provides a thread-local
generator that can produce both non-monotonic and monotonic ULIDs:
Thread-local generators are not currently available for `SnowflakeId`-style IDs
because they rely on a valid `machine_id` to avoid collisions. Mapping unique
`machine_id`s across threads requires coordination beyond what `thread_local!`
alone can guarantee.
### Crockford Base32

Enable the `base32` feature to support Crockford Base32 encoding and decoding
of IDs.
By default, printing an ID returns its raw integer representation. If you need
fixed-width, URL-safe, and lexicographically sortable strings (e.g. for use in
databases, logs, or URLs), use `.encode()` to obtain a lightweight formatter. It
can be passed freely without committing to any specific string primitive,
letting the consumer choose how and when to render it.

The formatter design avoids heap allocation by default and supports both owned
and borrowed encoding buffers. For full `String` support, enable the `alloc`
feature.
#### ⚠️ Decoding and Overflow: ULID Spec vs. Ferroid
Base32 encodes in 5-bit chunks. That means:

- A `u32` (32 bits) maps to 7 Base32 characters (7 × 5 = 35 bits)
- A `u64` (64 bits) maps to 13 Base32 characters (13 × 5 = 65 bits)
- A `u128` (128 bits) maps to 26 Base32 characters (26 × 5 = 130 bits)
This creates a mismatch: an encoded string can carry more bits than the target type can hold.
The ULID specification is strict:

> Technically, a 26-character Base32 encoded string can contain 130 bits of information, whereas a ULID must only contain 128 bits. Therefore, the largest valid ULID encoded in Base32 is 7ZZZZZZZZZZZZZZZZZZZZZZZZZ, which corresponds to an epoch time of 281474976710655 or 2^48 - 1.
>
> Any attempt to decode or encode a ULID larger than this should be rejected by all implementations, to prevent overflow bugs.
Ferroid takes a more flexible stance:

- Strings like `"ZZZZZZZZZZZZZZZZZZZZZZZZZZ"` (which technically overflow) are accepted and decoded without error.
- However, if any of the overflowed bits fall into reserved regions, which must remain zero, decoding will fail with `Base32Error::DecodeOverflow`.

This allows any 13-character Base32 string to decode into a `u64`, or any
26-character string into a `u128`, as long as reserved layout constraints
aren't violated. If the layout defines no reserved bits, decoding is always
considered valid.
For example:

- A `ULID` has no reserved bits, so decoding will never fail due to overflow.
- A `SnowflakeTwitterId` reserves the highest bit, so decoding must ensure that bit remains unset.
If reserved bits are set during decoding, Ferroid returns a
`Base32Error::DecodeOverflow { id }` containing the full (invalid) ID. You can
recover by calling `.into_valid()` to mask off the reserved bits, allowing
either explicit error handling or silent correction.
### Generate an ID

#### Clocks
In `std` environments, you can use the default `MonotonicClock` implementation.
It is thread-safe, lightweight to clone, and intended to be shared across the
application. If you're using multiple generators, clone and reuse the same clock
instance.

By default, `MonotonicClock::default()` sets the offset to `UNIX_EPOCH`. You
should override this depending on the ID specification. For example, Twitter IDs
use `TWITTER_EPOCH`, which begins at Thursday, November 4, 2010, 01:42:54.657
UTC (millisecond zero).
#### Synchronous Generators

Calling `next_id()` may yield `Pending` if the current sequence is exhausted.
This behavior is exposed for maximum flexibility, but note that you will only
hit the `Pending` path if you generate enough IDs within a single millisecond
to exhaust the sequence. You may spin, yield, or sleep depending on your
environment:
#### Asynchronous Generators

If you're in an async context (e.g., using Tokio or Smol), enable one of the
following features to avoid blocking behavior:

- `async-tokio`
- `async-smol`
These features extend the generator to yield cooperatively when it returns
`Pending`, causing the current task to sleep for the specified `yield_for`
duration (typically ~1ms). While this is fully non-blocking, it may oversleep
slightly due to OS or executor timing precision, potentially reducing peak
throughput.
Custom Layouts
To gain more control or optimize for different performance characteristics, you can define a custom layout.
Use the define_* macros below to create a new struct with your chosen name.
The resulting type behaves just like built-in types such as SnowflakeTwitterId
or ULID, with no extra setup required and full compatibility with the existing
API.
⚠️ Note: When using the snowflake macro, you must specify all four sections (in
order): `reserved`, `timestamp`, `machine_id`, and `sequence`, even if a section
uses 0 bits. The `reserved` bits are always set to zero and can be held back for
future use. Similarly, the ulid macro requires all three fields: `reserved`,
`timestamp`, and `random`.
## Behavior

### Snowflake

- If the clock advances: reset sequence to 0 → `IdGenStatus::Ready`
- If the clock is unchanged: increment sequence → `IdGenStatus::Ready`
- If the clock goes backward: return `IdGenStatus::Pending`
- If the sequence increment overflows: return `IdGenStatus::Pending`
### Ulid

This implementation respects monotonicity within the same millisecond in a
single generator by incrementing the random portion of the ID and guarding
against overflow.

- If the clock advances: generate new random → `IdGenStatus::Ready`
- If the clock is unchanged: increment random → `IdGenStatus::Ready`
- If the clock goes backward: return `IdGenStatus::Pending`
- If the random increment overflows: return `IdGenStatus::Pending`
## Probability of ID Collisions

When generating time-sortable IDs that use random bits, it's important to
estimate the probability of collisions (i.e., two IDs being the same within the
same millisecond), given your ID layout and system throughput.

### Monotonic IDs with Multiple ULID Generators
If you have $g$ generators (e.g., distributed nodes), and each generator produces $k$ sequential (monotonic) IDs per millisecond by incrementing from a random starting point, the probability that any two generators produce overlapping IDs in the same millisecond is approximately:
$$P_\text{collision} \approx \frac{g(g-1)(2k-1)}{2 \cdot 2^r}$$
Where:
- $g$ = number of generators
- $k$ = number of monotonic IDs per generator per millisecond
- $r$ = number of random bits per ID
- $P_\text{collision}$ = probability of at least one collision
Note: The formula above uses the approximate (birthday bound) model, which assumes that:

- $k \ll 2^r$ and $g \ll 2^r$
- Each generator's range of $k$ IDs starts at a uniformly random position within the $r$-bit space
### Estimating Time Until a Collision Occurs
While collisions only happen within a single millisecond, we often want to know how long it takes before any collision happens, given continuous generation over time.
The expected time in milliseconds to reach a 50% chance of collision is:
$T_{\text{50%}} \approx \frac{\ln 2}{P_\text{collision}} = \frac{0.6931 \cdot 2 \cdot 2^r}{g(g - 1)(2k - 1)}$
This is derived from the cumulative probability formula:
$P_\text{collision}(T) = 1 - (1 - P_\text{collision})^T$
Solving for $T$ when $P_\text{collision}(T) = 0.5$:
$(1 - P_\text{collision})^T = 0.5$
$\Rightarrow T \approx \frac{\ln(0.5)}{\ln(1 - P_\text{collision})}$
Using the approximation $\ln(1 - x) \approx -x$ for small $x$, this simplifies to:
$\Rightarrow T \approx \frac{\ln 2}{P_\text{collision}}$
The $\ln 2$ term arises because $\ln(0.5) = -\ln 2$. After $T_\text{50%}$ milliseconds, there's a 50% chance that at least one collision has occurred.
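The formulas above can be checked numerically. This standalone sketch computes
$P_\text{collision}$ and $T_{50\%}$ for the 1,000-generator, 65,536 IDs/ms,
80-random-bit case and reproduces the figures in the table:

```rust
// Birthday-bound collision estimate: P ≈ g(g-1)(2k-1) / (2 · 2^r)
fn p_collision(g: f64, k: f64, r: i32) -> f64 {
    g * (g - 1.0) * (2.0 * k - 1.0) / (2.0 * 2f64.powi(r))
}

// Expected milliseconds until a 50% collision chance: T ≈ ln 2 / P
fn t50_ms(p: f64) -> f64 {
    std::f64::consts::LN_2 / p
}

fn main() {
    // 1,000 generators, 65,536 monotonic IDs per ms, 80 random bits
    let p = p_collision(1000.0, 65536.0, 80);
    let t = t50_ms(p);
    let years = t / (1000.0 * 60.0 * 60.0 * 24.0 * 365.25);

    assert!((p - 5.42e-14).abs() < 1e-14); // ≈ 5.42e-14
    assert!((years - 406.0).abs() < 1.0);  // ≈ 406 years
    println!("P = {p:.3e}, T50 = {t:.3e} ms ≈ {years:.0} years");
}
```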
| Generators ($g$) | IDs per generator per ms ($k$) | $P_\text{collision}$ | Estimated Time to 50% Collision ($T_{\text{50\%}}$) |
|---|---|---|---|
| 1 | 1 | $0$ (single generator; no collision possible) | ∞ (no collision possible) |
| 1 | 65,536 | $0$ (single generator; no collision possible) | ∞ (no collision possible) |
| 2 | 1 | $\displaystyle \frac{2 \times 1 \times 1}{2 \cdot 2^{80}} \approx 8.27 \times 10^{-25}$ | $\approx 8.38 \times 10^{23} \text{ ms}$ |
| 2 | 65,536 | $\displaystyle \frac{2 \times 1 \times 131{,}071}{2 \cdot 2^{80}} \approx 1.08 \times 10^{-19}$ | $\approx 6.41 \times 10^{18} \text{ ms}$ |
| 1,000 | 1 | $\displaystyle \frac{1{,}000 \times 999 \times 1}{2 \cdot 2^{80}} \approx 4.13 \times 10^{-19}$ | $\approx 1.68 \times 10^{18} \text{ ms}$ |
| 1,000 | 65,536 | $\displaystyle \frac{1{,}000 \times 999 \times 131{,}071}{2 \cdot 2^{80}} \approx 5.42 \times 10^{-14}$ | $\approx 1.28 \times 10^{13} \text{ ms} \approx 406 \text{ years}$ |
## Benchmarks

Snowflake ID generation is theoretically capped by:

`max IDs/sec = 2^sequence_bits × 1000`

For example, Twitter-style IDs (12 sequence bits) allow:

`4096 IDs/ms × 1000 ms/sec = ~4M IDs/sec`
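The cap is simple arithmetic over the sequence width, as this standalone
snippet shows:

```rust
fn main() {
    // Twitter-style layout: 12 sequence bits
    let sequence_bits = 12u32;

    // Per-millisecond cap, then scaled to one second
    let ids_per_ms: u64 = 1 << sequence_bits; // 4096
    let ids_per_sec = ids_per_ms * 1000;      // 4,096,000

    assert_eq!(ids_per_ms, 4096);
    assert_eq!(ids_per_sec, 4_096_000);
    println!("theoretical cap: ~{}M IDs/sec", ids_per_sec / 1_000_000);
}
```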
To benchmark this, we generate IDs in chunks of 4096, which aligns with the
per-millisecond sequence limit in Snowflake layouts. For ULIDs, we use the same
chunk size for consistency, but it does not represent a hard throughput cap:
ULID generation is probabilistic, and monotonicity within the same millisecond
is maintained by incrementing the random bits. Chunking here primarily serves
to keep the benchmark code consistent.

Async benchmarks are tricky because a single generator's performance is
affected by task scheduling, which is unpredictable and typically has a timer
resolution of about 1 millisecond. By the time a task is scheduled to execute
(i.e., generate an ID), a millisecond may have already passed, potentially
resetting any sequence counter or monotonic increment and thus never truly
testing the hot path. To mitigate this, the async tests measure maximum
throughput: each task generates a batch of IDs and may await on any of them.
This offsets idle time on one generator with active work on another, yielding
more representative throughput numbers.
Snowflake:

- Sync: benchmarks the hot path without yielding to the clock.
- Async: also uses 4096-ID batches, but may yield (sequence exhaustion or CAS failure) or await due to task scheduling, reducing throughput.

ULID:

- Sync & Async: uses the same 4096-ID batches. Due to random number generation, monotonic increments may overflow at random, reflecting real-world behavior. In general, it is rare for ULIDs to overflow.
Tests were run on an M1 MacBook Pro 14", 32GB, 10 cores (8 performance, 2 efficiency).
### Synchronous Generators

| Generator | Time per ID | Throughput |
|---|---|---|
| `BasicSnowflakeGenerator` | ~2.8 ns | ~353M IDs/sec |
| `LockSnowflakeGenerator` | ~8.9 ns | ~111M IDs/sec |
| `AtomicSnowflakeGenerator` | ~3.1 ns | ~320M IDs/sec |
| `BasicUlidGenerator` | ~20.4 ns | ~44M IDs/sec |
| `BasicMonoUlidGenerator` | ~3.4 ns | ~288M IDs/sec |
| `LockMonoUlidGenerator` | ~9.2 ns | ~109M IDs/sec |
### Thread Local Generators

| Generator | Time per ID | Throughput |
|---|---|---|
| `Ulid::new_ulid` | ~24 ns | ~41.7M IDs/sec |
| `Ulid::new_mono_ulid` | ~5.6 ns | ~178M IDs/sec |
### Async (Tokio Runtime) - Peak Throughput

| Generator | Generators | Time per ID | Throughput |
|---|---|---|---|
| `LockSnowflakeGenerator` | 1024 | ~1.46 ns | ~687M IDs/sec |
| `AtomicSnowflakeGenerator` | 1024 | ~0.86 ns | ~1.17B IDs/sec |
| `LockMonoUlidGenerator` | 1024 | ~1.57 ns | ~635M IDs/sec |
### Async (Smol Runtime) - Peak Throughput

| Generator | Generators | Time per ID | Throughput |
|---|---|---|---|
| `LockSnowflakeGenerator` | 1024 | ~1.40 ns | ~710M IDs/sec |
| `AtomicSnowflakeGenerator` | 1024 | ~0.62 ns | ~1.61B IDs/sec |
| `LockMonoUlidGenerator` | 1024 | ~1.32 ns | ~756M IDs/sec |
To run all benchmarks:
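Presumably via Cargo's built-in bench runner (the exact invocation may differ
if the crate uses a custom harness):

```shell
cargo bench
```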
## Testing
Run all tests with:
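For example (the `--all-features` flag is an assumption, intended to exercise
optional features such as `base32`, `alloc`, and the async runtimes):

```shell
cargo test --all-features
```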
## License
Licensed under either of:

- Apache License, Version 2.0
- MIT License

at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.