pub struct UniqueIdAllocatorAtomic<T: IntegerIdCounter> { /* private fields */ }
Allocates unique integer ids atomically, in a way safe to use from multiple threads.
§Thread Safety
This type makes the following guarantees about allocations:
- Absent a call to Self::reset, each call to Self::alloc returns a new value. Consequently, if only a single allocator is used, all the ids will be unique.
- All available ids will be used before an IdExhaustedError is returned. Equivalently, allocation will never skip over ids (although it may appear to from the perspective of a single thread).
- Once an IdExhaustedError is returned, all future allocations will fail unless Self::reset is called. This is similar to the guarantees of Iterator::fuse.
This type only makes guarantees about atomicity, not about synchronization with other operations.
In other words, without a core::sync::atomic::fence,
there are no guarantees about the relative-ordering between this counter and other memory locations.
It is not meant to be used as a synchronization primitive; it is only meant to allocate unique ids.
An incorrect implementation of IntegerId or IntegerIdCounter can break some or all of these guarantees,
but will not be able to trigger undefined behavior.
Implementations§
impl<T: IntegerIdCounter> UniqueIdAllocatorAtomic<T>
pub const fn new() -> Self
Create a new allocator,
using T::START as the first id (usually zero).
pub fn with_start(start: T) -> Self
Create a new allocator, using the specified value as the first id.
Use Self::with_start_const if you need a const fn.
pub const fn with_start_const(start: T) -> Self
where
    T: NoUninit,
Create a new allocator, using the specified value as the first id.
In order to be usable from a const function,
this requires that T implement the bytemuck::NoUninit trait
and have the same size and representation as T::Int.
If these requirements are not met, this method will fail to compile with a const panic.
§Safety
This function cannot cause undefined behavior.
pub fn approx_max_used_id(&self) -> Option<T>
Estimate the maximum currently used id,
or None if no ids have been allocated yet.
Unlike UniqueIdAllocator::max_used_id, this is only an approximation.
This is because other threads may be concurrently allocating a new id.
pub fn try_alloc(&self) -> Result<T, IdExhaustedError<T>>
Attempt to allocate a new id, returning an error if exhausted.
This operation is guaranteed to be atomic,
and will never reuse ids unless Self::reset is called.
However, it should not be used as a tool for synchronization.
See type-level docs for more details.
§Errors
Once the number of allocated ids exceeds the range of the underlying IntegerIdCounter, this function will return an error. This function will never skip over valid ids, so the error can only occur if all ids have been used.
pub fn alloc(&self) -> T
Allocate a new id, panicking if exhausted.
This operation is guaranteed to be atomic,
and will never reuse ids unless Self::reset is called.
However, it should not be used as a tool for synchronization.
See type-level docs for more details.
§Panics
Panics if ids are exhausted, when Self::try_alloc would have returned an error.
pub fn reset(&self)
Reset the allocator to a pristine state, beginning allocations all over again.
This is equivalent to running *allocator = UniqueIdAllocatorAtomic::new(),
but is done atomically and does not require a &mut Self reference.
This may cause unexpected behavior if ids are expected to be monotonically increasing, or if the new ids conflict with ones still in use. To avoid this, keep the id allocator private.
There is no counterpart UniqueIdAllocator::set_next_id,
because the ability to force the counter to jump forwards
could prevent future optimizations.