vlock/
lib.rs

//! Similar to [`std::sync::RwLock`], yet it locks only on writes, and reads are
//! wait-free.
//!
//! [`VLock`] is for read-heavy scenarios. Updates are strictly serialized and may
//! have to wait until readers of older data are finished. It works by keeping
//! multiple versions of the shared data and counting references to each of
//! the versions. Named `VLock` because it uses versioning and behaves sort of
//! like a lock. Naming is hard.
//!
//! # Why bother?
//!
//! [`VLock`] is a fast[^1] and scalable lock with stable read performance for
//! non-copy types, at the expense of slower writes.
//!
//! It's a common pattern among various sorts of systems: a number of processing
//! threads, each making decisions via shared state, and some other code that
//! updates that shared state once in a while. In network infrastructure, for
//! example, this can be thought of as the data and control planes, be it routers
//! or various levels of load balancers. In data-heavy applications, it can be a
//! hot view of data stored in a database.
//!
//! If that roughly describes your use case, you may benefit from this library
//! at the expense of keeping one or more extra copies of the data in memory,
//! slightly more state, and slower write performance. The extra copies are
//! usually necessary even when [`RwLock`][std::sync::RwLock] is used to
//! minimize time under lock, and [`VLock`] actually reuses old versions, which
//! may save you some time by avoiding allocations. The extra state is one
//! `usize` plus a `usize` for every version... and there's not much you can do
//! about slower writes.
//!
//! If read performance is critical and readers can't afford to wait for writes,
//! you may benefit significantly.
//!
//! As a bonus, the implementation is simple, with about 200 lines of actual
//! code and no external dependencies.
//!
//! [^1]: Based on synthetic benchmarks on `x86_64` laptops, read performance was
//! 1.3-2.0 times faster than `ArcSwap`, and may be an order of magnitude faster
//! than the fastest `RwLock` implementation in certain cases. Writes of `VLock`
//! are more efficient than those of `ArcSwap`. Compared to `RwLock`, writes are
//! generally 1 to 10 times slower than the `parking_lot` implementation, but an
//! improvement over the `std` implementation. With `SeqLock`, results were
//! mixed: in some scenarios reads of `VLock` were 4 times slower, in some about
//! 1:1, and in others 2 times quicker. Although write performance of `VLock`
//! is significantly worse than that of `SeqLock`, `VLock` can be used for
//! non-copy types. Note that write performance of `VLock` may degrade
//! significantly when readers are not progressing and `N` is small; in other
//! words, `VLock` is susceptible to write starvation by prioritizing reads.
//!
//! # How does it work?
//!
//! As mentioned above, it uses a combination of versioning and reference
//! counting, with a lock to serialize writes.
//!
//! [`VLock<T, N>`] has a fixed size `N`, which is the maximum number of
//! *versions* to allocate. A version is identified by an *offset* into the
//! allocated space. A *state* packs both an offset and a *counter* into the
//! same atomic. [`VLock`] keeps the current state and a state for every version.
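//!
//! For illustration, here is a rough sketch of how a single `usize` can pack
//! both an offset and a counter (masks shown for `N = 4`; they mirror how the
//! internal `STEP`, `OFFSET` and `COUNTER` constants are derived):
//!
//! ```
//! let step = 4usize.next_power_of_two(); // with N = 4, adding this bumps the counter
//! let offset_mask = step - 1;            // low bits select the version
//! let counter_mask = !offset_mask;       // high bits count acquired reads
//! let state = 2 + 3 * step;              // offset 2, counter 3
//! assert_eq!(state & offset_mask, 2);
//! assert_eq!((state & counter_mask) / step, 3);
//! ```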
//!
//! Every time [`read`][VLock::read] is called, the current state counter is
//! incremented and the associated offset is used to return the current version
//! to the reader. After the returned pointer is dropped, the per-version state
//! counter is decremented.
//!
//! During [`update`][VLock::update], the first unused version is picked by
//! checking whether its per-version counter is 0, or, if nothing is available,
//! a new version is initialized if size allows. This is where waiting for
//! readers happens. Once a free offset is found and its data is updated, the
//! current state counter is reset and the new offset is written (which is one
//! atomic operation). The counter returned by that operation is then added to
//! the per-version counter of the previous version, after which it will reach 0
//! once all readers are done with it.
//!
//! In other words, tracking access can be thought of as balancing two counters:
//! one incrementing in the current state, marking the start of read access to
//! the data, and the other decrementing in the per-version state, marking the
//! end of that access. Then, both are added together. If the resulting counter
//! is greater than 0, there are still some readers; if it's 0, the version is
//! definitely unused.
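//!
//! For example, a version handed out five times while current becomes free
//! again only after five matching drops, no matter in which order the two
//! counters meet. A rough sketch, with counts in units of one rather than
//! the internal step:
//!
//! ```
//! let handed_out: usize = 5;                           // reads taken while current
//! let mut per_version: usize = 0;
//! per_version = per_version.wrapping_sub(3);           // three readers drop early...
//! per_version = per_version.wrapping_add(handed_out);  // ...update() adds the total...
//! per_version = per_version.wrapping_sub(2);           // ...the last two drop later
//! assert_eq!(per_version, 0);                          // the version can now be reused
//! ```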
//!
//! Old versions are not dropped, so it acts like a pool. That may be handy
//! when dealing with heap-allocated data structures that have to be updated
//! or rebuilt, but may also lead to some old garbage lying around in memory.
//!
//! # Usage
//!
//! ```
//! use std::sync::Arc;
//! use std::thread;
//! use std::time;
//!
//! use vlock::VLock;
//!
//! let lock = Arc::new(VLock::<String, 4>::new(String::from("hi there!")));
//! let lock_clone = Arc::clone(&lock);
//! let t = thread::spawn(move || {
//!     for _ in 0..5 {
//!         println!("{}", *lock_clone.read());
//!         thread::sleep(time::Duration::from_millis(1));
//!     }
//!     lock_clone.update(
//!         |_, value| {
//!             value.clear();
//!             value.push_str("bye!");
//!         },
//!         String::new
//!     );
//! });
//! thread::sleep(time::Duration::from_millis(2));
//! lock.update(
//!     |_, value| {
//!         value.clear();
//!         value.push_str("here's some text for you");
//!     },
//!     String::new
//! );
//! if let Err(err) = t.join() {
//!     println!("thread has failed: {err:?}");
//! }
//! assert_eq!(*lock.read(), "bye!");
//! ```

#![warn(missing_docs)]
#![warn(clippy::pedantic)]
#![allow(clippy::inline_always)]

use std::{borrow, cell, fmt, hash, hint, marker, mem, ops, ptr, sync::atomic, thread};

// The number of attempts to acquire one of the old versions for updating,
// spinning between attempts for exponentially longer.
const ACQUIRE_ATTEMPTS: usize = 14;
// The number of times to yield to the scheduler before attempting to initialize
// a new version. This makes the number of initialized versions slowly adapt to
// the level of saturation.
const YIELDS_BEFORE_INIT: usize = 2;
const LOCKED: usize = 0;

/// A versioned lock.
///
/// Allows multiple readers and a single writer at a time. Reads are wait-free;
/// writes are protected by a lock.
///
/// The type parameter `T` describes the data that is protected by the lock.
/// Constant `N` is the maximum number of versions of the data that can be
/// allocated.
///
/// # Examples
///
/// ```
/// use vlock::VLock;
///
/// let lock: VLock<_, 2> = 10.into();
/// // there can be multiple reads
/// {
///     let read1 = lock.read();
///     let read2 = lock.read();
///     assert_eq!(*read1, 10);
///     assert_eq!(*read2, 10);
/// }
///
/// // old versions are still accessible after update
/// {
///     let read1 = lock.read();
///     // this triggers a new version to be initialized with 2
///     lock.update(|_, value| *value += 20, || 2);
///     let read2 = lock.read();
///     assert_eq!(*read1, 10);
///     assert_eq!(*read2, 22);
///     // attempt to update here will block until read1 is dropped
/// } // read1 is dropped, allowing more updates
///
/// // current version can be accessed directly when updating
/// {
///     lock.update(|curr, value| *value = *curr + 8, || 2);
///     let read1 = lock.read();
///     assert_eq!(*read1, 30);
/// }
/// ```
pub struct VLock<T, const N: usize> {
    // The state for the current version. Contains the offset in data of the
    // current version and a counter of acquired reads to it.
    state: atomic::AtomicUsize,

    // The initialized length of data. Since we need a lock for writes, this
    // value is exploited: LOCKED means the writes are locked.
    length: atomic::AtomicUsize,

    // Versions of data, initialized lazily. Each item in the array keeps its
    // own state to check whether this version is still in use.
    data: cell::UnsafeCell<[mem::MaybeUninit<Data<T>>; N]>,
}

impl<T, const N: usize> VLock<T, N> {
    const STEP: usize = {
        assert!(N > 1, "VLock requires at least 2 versions to work");
        N.next_power_of_two()
    };
    const OFFSET: usize = Self::STEP - 1;
    const COUNTER: usize = !Self::OFFSET;

    /// Creates a new unlocked instance of `VLock` with the initial version.
    #[inline(always)]
    pub fn new(value: T) -> Self {
        // SAFETY: The assume_init is for the array of MaybeUninits.
        let mut data: [mem::MaybeUninit<Data<T>>; N] =
            unsafe { mem::MaybeUninit::uninit().assume_init() };
        data[0].write(Data {
            state: atomic::AtomicUsize::new(0),
            value,
        });
        Self {
            state: atomic::AtomicUsize::new(0),
            length: atomic::AtomicUsize::new(1),
            data: cell::UnsafeCell::new(data),
        }
    }

    #[inline(always)]
    unsafe fn at(&self, offset: usize) -> &mem::MaybeUninit<Data<T>> {
        &(*self.data.get())[offset]
    }

    #[allow(clippy::mut_from_ref)]
    #[inline(always)]
    unsafe fn at_mut(&self, offset: usize) -> &mut mem::MaybeUninit<Data<T>> {
        &mut (*self.data.get())[offset]
    }

    /// Acquires a reference to the current version of `T`.
    ///
    /// This call never blocks. However, holding on to a version for too long
    /// may block concurrent updates, so aim to have frequent short reads if
    /// possible. Note that repeated calls may return a newer version.
    ///
    /// The returned reference is RAII-style, releasing the access once dropped.
    ///
    /// # Panics
    ///
    /// An attempt to acquire will panic if there is an overflow in the internal
    /// counter for the current version. The counter max is
    /// `usize::MAX >> N.next_power_of_two().ilog2()` and corresponds to the
    /// total number of `read` calls that can happen to a single version. Each
    /// subsequent [`update`][`VLock::update`] resets the counter.
    ///
    /// Normally, this should not happen, but one can imagine a system with a
    /// high frequency of reads running for too long without any data updates.
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// {
    ///     let value = lock.read();
    ///     assert_eq!(*value, 10);
    /// } // the access is released after value is dropped
    /// ```
    #[must_use]
    #[inline(always)]
    pub fn read(&self) -> ReadRef<'_, T, N> {
        // Taking Acquire to ensure that access to the new version happens
        // after all mutations complete. It's OK to store the counter with
        // Relaxed ordering.
        let state = self.state.fetch_add(Self::STEP, atomic::Ordering::Acquire);
        assert_ne!(
            state.wrapping_add(Self::STEP),
            state & Self::OFFSET,
            "counter overflow"
        );

        ReadRef {
            // SAFETY: Current offset always points to init data - see new() and
            // update(). No mutable borrow can happen: update() mutates only
            // non-current versions and never selects a version with a non-zero
            // counter, which is guaranteed until this ReadRef is dropped.
            ptr: unsafe { self.at(state & Self::OFFSET) }.as_ptr(),
            phantom: marker::PhantomData,
        }
    }

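    // Acquires the write lock by swapping LOCKED into `length`, retrying until
    // the previous value is not LOCKED. Returns the number of initialized
    // versions, which the caller later passes back to unlock().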
    #[inline(always)]
    fn lock(&self) -> usize {
        loop {
            let value = self.length.swap(LOCKED, atomic::Ordering::Acquire);
            if value != LOCKED {
                return value;
            }
            thread::yield_now();
        }
    }

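    // Picks the offset of a version that can be safely overwritten: an
    // initialized non-current version whose counter has dropped to 0, or,
    // after enough failed attempts, a freshly initialized version at `length`
    // if `N` allows. Must be called with the write lock held.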
    #[inline(always)]
    fn acquire<I>(&self, curr_offset: usize, length: usize, init: I) -> usize
    where
        I: FnOnce() -> T,
    {
        let mut remaining = ACQUIRE_ATTEMPTS;
        loop {
            'attempt: loop {
                if length == 1 {
                    break 'attempt;
                }

                // SAFETY: No concurrent mutations can happen because of the lock.
                if let Some(state) = unsafe { &*self.data.get() }
                    .iter()
                    .enumerate()
                    .take(length)
                    .filter(|(offset, _)| *offset != curr_offset)
                    // SAFETY: Length counts inits. It's safe to assume that
                    // the first length MaybeUninits are inits.
                    .map(|(_, init)| unsafe { init.assume_init_ref() })
                    // These versions are not "active", i.e. no new reads of them
                    // can start at this point; only some old readers may still be
                    // holding on to them.
                    //
                    // Taking Acquire here to ensure that all access to that version
                    // has completed before it can be reused.
                    .map(|version| version.state.load(atomic::Ordering::Acquire))
                    .find(|&state| state & Self::COUNTER == 0)
                {
                    return state & Self::OFFSET;
                }

                // That was the last attempt. Keep `remaining` at 1 so every
                // further pass makes a single attempt.
                if remaining == 1 {
                    break 'attempt;
                }

                // Spin with exponential waiting time and only then yield
                // YIELDS_BEFORE_INIT times.
                if remaining > YIELDS_BEFORE_INIT + 1 {
                    for _ in 0..1 << ACQUIRE_ATTEMPTS.saturating_sub(remaining) {
                        hint::spin_loop();
                    }
                } else {
                    thread::yield_now();
                }
                remaining -= 1;
            }

            // Try to initialize a new version, if there's room for that.
            if length < N {
                // SAFETY: The version at length is uninit. It is safe to init.
                // No concurrent mutations can happen because of the lock.
                unsafe { self.at_mut(length) }.write(Data {
                    state: atomic::AtomicUsize::new(length),
                    value: init(),
                });
                return length;
            }

            // No new versions can be initialized and previous versions are busy.
            // From this point on there will be just one attempt to acquire,
            // yielding in between.
            thread::yield_now();
        }
    }

    #[inline(always)]
    fn unlock(&self, value: usize) {
        assert_ne!(value, LOCKED, "locking unlock");
        assert_eq!(
            self.length.swap(value, atomic::Ordering::Release),
            LOCKED,
            "unlock without lock"
        );
    }

    /// Locks this `VLock` and calls `f` with the current version and either one
    /// of the previously used versions or a newly initialized one, blocking the
    /// current thread if the lock can't be acquired or if all `N` versions of
    /// data are in use.
    ///
    /// If a new version needs initialization, `init` will be called before
    /// `f`. This happens when all initialized versions are in use and the
    /// number of versions is less than `N`.
    ///
    /// Note that there is no guarantee which exact non-current version will
    /// be passed to `f`, because it depends on reader access patterns. Unless
    /// `N` equals 2, of course.
    ///
    /// The current implementation tries to avoid initializing new versions by
    /// attempting to access already-initialized non-current versions multiple
    /// times with exponential wait time in between, and then yielding a few
    /// times in the hope that readers will progress. This seems to be a good
    /// balance to avoid aggressive initialization when there is high contention,
    /// and while waiting longer initially, eventually the number of initialized
    /// versions will grow to match the saturation level if size allows.
    ///
    /// # Panics
    ///
    /// If the current version changes during `update` for any reason, or
    /// unlocking encounters unexpected state, the `update` will panic, leaving
    /// the state unrecoverable. Bit-flips are rare, but do happen nevertheless.
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// assert_eq!(*lock.read(), 10);
    /// lock.update(|_, value| *value += 20, || 13);
    /// assert_eq!(*lock.read(), 33);
    /// lock.update(|_, value| *value += 20, || 13);
    /// assert_eq!(*lock.read(), 30);
    /// ```
    #[inline(always)]
    pub fn update<F, I>(&self, f: F, init: I)
    where
        F: FnOnce(&T, &mut T),
        I: FnOnce() -> T,
    {
        self.compare_update(|_| true, f, init);
    }

    /// Same as [`VLock::update`], except that `pred` is called under a lock
    /// with the current version to determine whether the update should proceed
    /// or not. The return value indicates whether the update was completed.
    ///
    /// # Panics
    ///
    /// See [`VLock::update`].
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// assert_eq!(*lock.read(), 10);
    /// assert!(!lock.compare_update(|curr| *curr > 30, |_, value| *value += 20, || 13));
    /// assert_eq!(*lock.read(), 10);
    /// assert!(lock.compare_update(|curr| *curr < 30, |_, value| *value += 20, || 13));
    /// assert_eq!(*lock.read(), 33);
    /// ```
    pub fn compare_update<P, F, I>(&self, pred: P, f: F, init: I) -> bool
    where
        P: FnOnce(&T) -> bool,
        F: FnOnce(&T, &mut T),
        I: FnOnce() -> T,
    {
        let mut length = self.lock();
        // Relaxed is fine, because all changes to the offset are behind a
        // lock which was acquired just above.
        let offset = self.state.load(atomic::Ordering::Relaxed) & Self::OFFSET;

        // SAFETY: Current version is init, which happened either in new() or
        // in acquire() during a previous update(). The next mutable borrow at
        // this offset can happen only in a subsequent update(), which is serialized.
        let version = unsafe { self.at(offset).assume_init_ref() };

        if !pred(&version.value) {
            self.unlock(length);
            return false;
        }

        let new_offset = self.acquire(offset, length, init);
        if new_offset == length {
            length = length.saturating_add(1);
        }
        f(
            &version.value,
            // SAFETY: Data at new_offset is ensured to be init and new_offset
            // cannot overlap with the current offset. Reference counting tracks
            // that there are no active "borrows". See acquire() for details.
            // No concurrent mutations can happen because of the lock.
            &mut unsafe { self.at_mut(new_offset).assume_init_mut() }.value,
        );

        // Update the state to point to the new offset and reset the counter for
        // that version. This is the point from which new read() calls start to
        // refer to the new version.
        //
        // Changes to the offset are behind a lock. Regarding the read(), ensure
        // that the version at the new_offset has been written before subsequent
        // loads, so using Release here.
        let prev_state = self.state.swap(new_offset, atomic::Ordering::Release);

        // Who knows what is going on on a computer this is running on.
        assert_eq!(
            prev_state & Self::OFFSET,
            offset,
            "offset changed while writing"
        );

        // Add the total number of times the previous version was handed out to
        // the counter of that same version, so it can reach 0 when all borrows
        // are dropped.
        //
        // Relaxed should be OK. The next access to that version is in a subsequent
        // update() call. If the counter happens to go to 0 at this point, the
        // next access to that version is synchronized by the lock. Otherwise,
        // synchronization is ensured via the Acquire load.
        version
            .state
            .fetch_add(prev_state & Self::COUNTER, atomic::Ordering::Relaxed);

        self.unlock(length);
        true
    }

    /// Returns a mutable reference to the current version.
    ///
    /// Because of a mutable borrow, exclusive access is guaranteed by the compiler.
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let mut lock: VLock<_, 2> = 10.into();
    /// assert_eq!(*lock.get_mut(), 10);
    /// ```
    #[inline(always)]
    pub fn get_mut(&mut self) -> &mut T {
        let offset = *self.state.get_mut() & Self::OFFSET;
        // SAFETY: Exclusive mutable access is guaranteed by the compiler.
        // Current offset always points to init data - see new() and update().
        &mut unsafe { self.at_mut(offset).assume_init_mut() }.value
    }
}

impl<T: Default, const N: usize> VLock<T, N> {
    /// Same as [`VLock::update`], except that new versions are initialized
    /// with [`Default::default`].
    ///
    /// # Panics
    ///
    /// See [`VLock::update`].
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// assert_eq!(*lock.read(), 10);
    /// lock.update_default(|_, value| *value += 20);
    /// assert_eq!(*lock.read(), 20);
    /// lock.update_default(|_, value| *value += 20);
    /// assert_eq!(*lock.read(), 30);
    /// ```
    #[inline(always)]
    pub fn update_default<F>(&self, f: F)
    where
        F: FnOnce(&T, &mut T),
    {
        self.update(f, T::default);
    }

    /// Same as [`VLock::compare_update`], except that new versions are
    /// initialized with [`Default::default`].
    ///
    /// # Panics
    ///
    /// See [`VLock::update`].
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// assert_eq!(*lock.read(), 10);
    /// assert!(!lock.compare_update_default(|curr| *curr > 30, |_, value| *value += 20));
    /// assert_eq!(*lock.read(), 10);
    /// assert!(lock.compare_update_default(|curr| *curr < 30, |_, value| *value += 20));
    /// assert_eq!(*lock.read(), 20);
    /// ```
    #[inline(always)]
    pub fn compare_update_default<P, F>(&self, pred: P, f: F) -> bool
    where
        P: FnOnce(&T) -> bool,
        F: FnOnce(&T, &mut T),
    {
        self.compare_update(pred, f, T::default)
    }
}

impl<T, const N: usize> From<T> for VLock<T, N> {
    #[inline(always)]
    fn from(value: T) -> Self {
        VLock::new(value)
    }
}

impl<T: Default, const N: usize> Default for VLock<T, N> {
    #[inline(always)]
    fn default() -> Self {
        VLock::new(T::default())
    }
}

unsafe impl<T: Send + Sync, const N: usize> Send for VLock<T, N> {}
unsafe impl<T: Send + Sync, const N: usize> Sync for VLock<T, N> {}

impl<T: Clone, const N: usize> Clone for VLock<T, N> {
    #[inline(always)]
    fn clone(&self) -> Self {
        Self::new(self.read().clone())
    }

    fn clone_from(&mut self, source: &Self) {
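        // With exclusive access guaranteed by &mut self, collapse self down to
        // a single version: clone the source's current value into whichever
        // version is current here, move it into slot 0, reset the state, then
        // drop every other initialized version and shrink the length to 1.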
        let offset = *self.state.get_mut() & Self::OFFSET;
        // SAFETY: Exclusive mutable access is guaranteed by the compiler.
        let current = unsafe { self.at_mut(offset).assume_init_mut() };
        *current.state.get_mut() = 0;
        current.value.clone_from(&source.read());
        if offset != 0 {
            // SAFETY: Exclusive mutable access is guaranteed by the compiler.
            let first = unsafe { self.at_mut(0).assume_init_mut() };
            mem::swap(first, current);
        }
        *self.state.get_mut() = 0;

        // SAFETY: Exclusive mutable access is guaranteed by the compiler.
        for init in unsafe { &mut *self.data.get() }
            .iter_mut()
            .take(*self.length.get_mut())
            .skip(1)
        {
            // SAFETY: Length counts inits. It's safe to assume that the first
            // length MaybeUninits are inits.
            let state = &mut unsafe { init.assume_init_mut() }.state;
            assert_eq!(*state.get_mut() & Self::COUNTER, 0);
            unsafe { init.assume_init_drop() };
        }
        *self.length.get_mut() = 1;
    }
}

impl<T, const N: usize> fmt::Debug for VLock<T, N> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let length = self.length.load(atomic::Ordering::Relaxed);
        if length == LOCKED {
            f.debug_struct("VLock (locked)")
                .field("state", &self.state)
                .finish_non_exhaustive()
        } else {
            f.debug_struct("VLock")
                .field("state", &self.state)
                .field("length", &length)
                .finish_non_exhaustive()
        }
    }
}

impl<T, const N: usize> Drop for VLock<T, N> {
    #[inline(always)]
    fn drop(&mut self) {
        // SAFETY: Exclusive mutable access is guaranteed by the compiler.
        for init in unsafe { &mut *self.data.get() }
            .iter_mut()
            .take(*self.length.get_mut())
        {
            // SAFETY: Length counts inits. It's safe to assume that the first
            // length MaybeUninits are inits.
            unsafe { init.assume_init_drop() };
        }
    }
}

#[derive(Debug)]
struct Data<T> {
    // The counter is decremented every time a read reference to this data
    // is dropped, and incremented by the total number of read() calls to this
    // version when the new version is written. We also keep an offset here
    // to match the state of VLock.
    //
    // Note that overflow is expected. Values are not compared to anything
    // other than 0 anyway, and the counter will reach 0 regardless of the sign.
    state: atomic::AtomicUsize,
    value: T,
}

/// A barely smart pointer referencing data owned by [`VLock`]. Access to the
/// data is RAII-style and is released when this reference is dropped.
///
/// This type is created by [`VLock::read`].
///
/// This type can't be cloned. Holding on to a version for too long is no good.
/// Instead, call [`read`] to acquire more copies if necessary. The next [`read`]
/// may return a newer version, however.
///
/// [`read`]: VLock::read
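///
/// # Examples
///
/// A quick sketch of accessing the value through the reference (it is
/// reachable via `Deref`, `AsRef` and `Borrow`):
///
/// ```
/// use vlock::VLock;
///
/// let lock: VLock<String, 2> = String::from("hi").into();
/// let guard = lock.read();
/// // Methods of the inner value are reachable through Deref.
/// assert_eq!(guard.len(), 2);
/// assert_eq!(&*guard, "hi");
/// assert_eq!(guard.as_ref(), "hi");
/// ```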
pub struct ReadRef<'a, T: 'a, const N: usize> {
    ptr: *const Data<T>,
    phantom: marker::PhantomData<&'a Data<T>>,
}

impl<T, const N: usize> Eq for ReadRef<'_, T, N> {}

impl<T, const N: usize> PartialEq for ReadRef<'_, T, N> {
    /// Tests equality by comparing the two pointer addresses, which, if equal,
    /// guarantee that both references point to the very same version.
    ///
    /// Dereference to compare the inner values.
    ///
    /// # Examples
    ///
    /// ```
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// let read1 = lock.read();
    /// let read2 = lock.read();
    /// assert_eq!(read1, read2);
    ///
    /// lock.update(|curr, value| *value = *curr, || 0);
    /// let read3 = lock.read();
    /// assert_ne!(read2, read3);
    /// assert_eq!(*read2, *read3);
    /// ```
    #[inline(always)]
    fn eq(&self, other: &Self) -> bool {
        self.ptr == other.ptr
    }
}

impl<T, const N: usize> AsRef<T> for ReadRef<'_, T, N> {
    #[inline(always)]
    fn as_ref(&self) -> &T {
        self
    }
}

impl<T, const N: usize> borrow::Borrow<T> for ReadRef<'_, T, N> {
    #[inline(always)]
    fn borrow(&self) -> &T {
        self
    }
}

impl<T: fmt::Debug, const N: usize> fmt::Debug for ReadRef<'_, T, N> {
    #[inline(always)]
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&**self, f)
    }
}

impl<T: fmt::Display, const N: usize> fmt::Display for ReadRef<'_, T, N> {
    #[inline(always)]
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Display::fmt(&**self, f)
    }
}

impl<T, const N: usize> fmt::Pointer for ReadRef<'_, T, N> {
    #[inline(always)]
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Pointer::fmt(&ptr::addr_of!(**self), f)
    }
}

impl<T, const N: usize> hash::Hash for ReadRef<'_, T, N> {
    /// Feeds the pointer's address to the hasher, which makes the hash unique
    /// per version.
    ///
    /// Dereference to hash the value behind the pointer.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::collections::HashSet;
    ///
    /// use vlock::VLock;
    ///
    /// let lock: VLock<_, 2> = 10.into();
    /// let mut ptr_set = HashSet::new();
    /// let mut val_set = HashSet::new();
    /// assert!(ptr_set.insert(lock.read()));
    /// assert!(val_set.insert(*lock.read()));
    /// assert!(!ptr_set.insert(lock.read()));
    /// assert!(!val_set.insert(*lock.read()));
    ///
    /// lock.update(|curr, value| *value = *curr, || 0);
    /// assert!(ptr_set.insert(lock.read()));
    /// assert!(!val_set.insert(*lock.read()));
    /// ```
    #[inline(always)]
    fn hash<H: hash::Hasher>(&self, state: &mut H) {
        self.ptr.hash(state);
    }
}

impl<T, const N: usize> ops::Deref for ReadRef<'_, T, N> {
    type Target = T;

    #[inline(always)]
    fn deref(&self) -> &T {
        // SAFETY: Pointer is non-null based on the context where it is set.
        // No mutable borrow of the same address can happen until Self is
        // dropped, due to reference counting. See VLock::read() for details.
        &unsafe { &*self.ptr }.value
    }
}

unsafe impl<T: Sync, const N: usize> Send for ReadRef<'_, T, N> {}
unsafe impl<T: Sync, const N: usize> Sync for ReadRef<'_, T, N> {}

impl<T, const N: usize> Drop for ReadRef<'_, T, N> {
    #[inline(always)]
    fn drop(&mut self) {
        // Release to synchronize later reuse of the data this points to.
        //
        // SAFETY: Pointer is non-null based on the context where it is set.
        unsafe { &*self.ptr }.state.fetch_sub(
            /*STEP*/ N.next_power_of_two(),
            atomic::Ordering::Release,
        );
    }
}