arcshift/lib.rs
1#![no_std]
2#![cfg_attr(feature = "nightly", feature(ptr_metadata))]
3#![deny(warnings)]
4#![forbid(clippy::undocumented_unsafe_blocks)]
5#![deny(missing_docs)]
6
7//! # Introduction to ArcShift
8//!
9//! [`ArcShift`] is a data type similar to `std::sync::Arc`, except that it allows updating
10//! the value pointed to. It can be used as a replacement for
11//! `std::sync::Arc<std::sync::RwLock<T>>`, giving much faster read access.
12//!
13//! Updating the value in ArcShift is significantly more expensive than writing `std::sync::RwLock`,
14//! so ArcShift is most suited to cases where updates are infrequent.
15//!
16//! ## Example
17//! ```
18//! # #[cfg(any(feature="loom",feature="shuttle"))]
19//! # pub fn main() {}
20//! # #[cfg(not(any(feature="loom",feature="shuttle")))]
21//! # pub fn main() {
22//! # extern crate arcshift;
23//! # use arcshift::ArcShift;
24//! use std::thread;
25//!
26//!
27//! let mut arc = ArcShift::new("Hello".to_string());
28//! let mut arc2 = arc.clone();
29//!
30//!
31//! let j1 = thread::spawn(move||{
32//! println!("Value in thread 1: '{}'", arc.get()); //Prints 'Hello'
33//! arc.update("New value".to_string());
34//! println!("Updated value in thread 1: '{}'", arc.get()); //Prints 'New value'
35//! });
36//!
37//! let j2 = thread::spawn(move||{
38//! // Prints either 'Hello' or 'New value', depending on scheduling:
39//! println!("Value in thread 2: '{}'", arc2.get());
40//! });
41//!
42//! j1.join().unwrap();
43//! j2.join().unwrap();
44//! # }
45//! ```
46//!
47//! # Strong points
48//! * Easy to use (similar to Arc)
49//! * Extensively tested
50//! * All functions are lock free ( <https://en.wikipedia.org/wiki/Non-blocking_algorithm> )
51//! * For use cases where no updates occur, performance is very good (much
52//! better than RwLock or Mutex).
53//! * Updates are reasonably fast (think, 15-100 nanoseconds), but much slower than Mutex- or
54//! RwLock-writes.
55//! * The function [`ArcShift::shared_non_reloading_get`] allows access without any overhead
56//! compared to regular Arc (benchmarks show identical performance to Arc).
57//! * ArcShift does not rely on thread-local variables.
58//! * Supports unsized types (i.e, you can use `ArcShift<[u8]>`)
59//! * ArcShift is no_std compatible (though 'alloc' is required, since ArcShift is a heap
60//! data structure). Compile with "default-features=false" to enable no_std
61//! compatibility.
62//!
63//! # Limitations
64//!
65//! ArcShift achieves its performance at the expense of the following disadvantages:
66//!
67//! * ArcShift's performance relies on being able to update its pointer when
68//! new values are detected. This means that ArcShift is most efficient when each
69//! thread has a mutable ArcShift instance. This can often be achieved by cloning the ArcShift,
70//! and distributing one owned copy to every thread (these clones all point to the same
71//! inner value). ArcShift can still be used with only shared access ([`ArcShift::shared_get`]),
72//! and performance is still very good as long as the pointer is current. However, if
73//! the ArcShift instance is stale (needs reloading, because an update has occurred), reads will
74//! be approximately twice as costly as for RwLock. One mitigation for this is to use
75//! [`cell::ArcShiftCell`]. However, this type is not `Sync`, only `Send`.
76//! * When modifying the value, the old version of the value lingers in memory until
77//! the last ArcShift that uses it has reloaded. Such a reload only happens when the ArcShift
78//! is accessed using a unique (`&mut`) access (like [`ArcShift::get`] or [`ArcShift::reload`]).
79//! This can be partially mitigated by using the [`ArcShiftWeak`]-type for long-lived
80//! never-reloaded instances.
81//! * Modifying the value is approximately 10x more expensive than modifying an `Arc<Mutex<T>>`.
82//! That said, if you're storing anything significantly more complex than an integer, the overhead
83//! of ArcShift may be insignificant.
84//! * When the value is modified, the next subsequent reload is slower than an `Arc<RwLock<T>>`
85//! access.
86//! * ArcShift is its own datatype. It is in no way compatible with `Arc<T>`.
87//! * At most usize::MAX/8 instances of ArcShift or ArcShiftWeak can be created for each value.
88//! (this is because it uses some bits of its weak refcount to store metadata).
89//!
90//! # Detailed performance characteristics
91//!
92//!
93//! * [`ArcShift::get`] - Very good average performance.
94//! Checking for new values requires a single atomic operation, of the least expensive kind
95//! (Ordering::Relaxed). On x86_64, this is the exact same machine operation as a regular
96//! memory access, and also on arm it is not an expensive operation. The cost of such access
97//! is much smaller than a mutex access, even an uncontended one. In the case where a reload
98//! is actually necessary, there is a significant performance impact (but still typically
99//! below 150ns for modern machines (2025)).
100//!
101//! If other instances have made updates, subsequent accesses will have a penalty. This
102//! penalty can be significant, because previous values may have to be dropped. However,
103//! when any updates have been processed, subsequent accesses will be fast again. It is
104//! guaranteed that any update that completed before the execution of [`ArcShift::get`]
105//! started, will be visible.
106//!
107//! * [`ArcShift::shared_get`] - Good performance as long as the value is not stale.
108//! If self points to a previous value, each call to `shared_get` will traverse the
109//! memory structures to find the most recent value.
110//!
111//! There are three cases:
112//! * The value is up-to-date. In this case, execution is very fast.
113//! * The value is stale, but no write is in progress. Expect a penalty equal to
114//! twice the cost of an RwLock write, approximately.
115//! * The value is stale, and there is a write in progress. This is a rare race condition.
116//! Expect a severe performance penalty (~10-20x the cost of an RwLock write).
117//!
118//! `shared_get` also guarantees that any updates that completed before it was called,
119//! will be visible.
120//!
121//! * [`ArcShift::shared_non_reloading_get`] - No overhead compared to plain Arc.
122//! Will not reload, even if the ArcShift instance is stale. May thus return an old
123//! value. If shared_get has been used previously, this method may return an older
124//! value than what the shared_get returned.
125//!
126//! * [`ArcShift::reload`] - Similar cost to [`ArcShift::get`].
127//!
128//! * [`ArcShift::clone`] - Fast. Requires a single atomic increment, and an atomic read.
129//! If the current instance is stale, the cloned value will be reloaded, with identical
130//! cost to [`ArcShift::get`].
131//!
132//! * Drop - Can be slow. The last remaining owner of a value will drop said value.
133//!
134//!
135//! # Motivation
136//!
137//! The primary raison d'ĂȘtre for [`ArcShift`] is to be a version of Arc which allows
138//! updating the stored value, with very little overhead over regular Arc for read heavy
139//! loads.
140//!
141//! One such motivating use-case for ArcShift is hot-reloadable assets in computer games.
142//! During normal usage, assets do not change. All benchmarks and play experience will
143//! be dependent only on this baseline performance. Ideally, we therefore want to have
144//! a very small performance penalty for the case when assets are *not* updated, comparable
145//! to using regular `std::sync::Arc`.
146//!
147//! During game development, artists may update assets, and hot-reload is a very
148//! time-saving feature. A performance hit during asset-reload is acceptable though.
149//! ArcShift prioritizes base performance, while accepting a penalty when updates are made.
150//!
151//! ArcShift can, of course, be useful in other domains than computer games.
152//!
153//!
154//! # Panicking drop methods
155//! If a drop implementation panics, ArcShift will make sure that the internal data structures
156//! remain uncorrupted. When run without the std-library, some memory leakage will occur every
157//! time a drop method panics. With the std-library, only memory owned by the payload type
158//! might leak.
159//!
160//! # No_std
161//! By default, arcshift uses the rust standard library. This is enabled by the 'std' feature, which
162//! is enabled by default. ArcShift can work without the full rust std library. However, this
163//! comes at a slight performance cost. When the 'std' feature is enabled (which it is by default),
164//! `catch_unwind` is used to guard drop functions, to make sure memory structures are not corrupted
165//! if a user-supplied drop method panics. However, to ensure the same guarantee when running
166//! without std, arcshift presently moves allocations to temporary boxes to be able to run drop
167//! after all memory traversal is finished. This requires multiple allocations, which makes
168//! operation without 'std' slower. Panicking drop methods can also lead to memory leaks
169//! without the std. The memory structures remain intact, and no undefined behavior occurs.
170//! The performance penalty is only present during updates.
171//!
172//! If the overhead mentioned in the previous paragraph is unacceptable, and if the final binary
173//! is compiled with panic=abort, this extra cost can be mitigated. Enable the feature
174//! "nostd_unchecked_panics" to do this. This must never be done if the process will ever continue
175//! executing after a panic, since it can lead to memory reclamation essentially being disabled
176//! for any ArcShift-chain that has had a panicking drop. However, no UB will result, in any case.
177//!
178//! # Implementation
179//!
180//! The basic idea of ArcShift is that each ArcShift instance points to a small heap block,
181//! that contains the pointee value of type T, three reference counts, and 'prev'/'next'-pointers.
182//! The 'next'-pointer starts out as null, but when the value in an ArcShift is updated, the
183//! 'next'-pointer is set to point to the updated value.
184//!
185//! This means that each ArcShift-instance always points at a valid value of type T. No locking
186//! or synchronization is required to get at this value. This is why ArcShift instances are fast
187//! to use. There is the drawback that as long as an ArcShift-instance exists, whatever value
188//! it points to must be kept alive. Each time an ArcShift instance is accessed mutably, we have
189//! an opportunity to update its pointer to the 'next' value. The operation to update the pointer
190//! is called a 'reload'.
191//!
192//! When the last ArcShift-instance releases a particular value, it will be dropped.
193//!
194//! ArcShiftWeak-instances also keep pointers to the heap blocks mentioned above, but the value T
195//! in the block can be dropped while being held by an ArcShiftWeak. This means that an ArcShiftWeak-
196//! instance only consumes `std::mem::size_of::<T>()` bytes plus 5 words of memory, when the value
197//! it points to has been dropped. When the ArcShiftWeak-instance is reloaded, or dropped, that
198//! memory is also released.
199//!
200//! # Prior Art
201//!
202//! ArcShift is very much inspired by arc-swap. The two crates can be used for similar problems.
203//! They have slightly different APIs, one or the other may be a more natural fit depending on
204//! the problem. ArcShift may be faster for some problems, slower for others.
205//!
206//!
207//! # Pitfall #1 - lingering memory usage
208//!
209//! Be aware that ArcShift instances that are just "lying around" without ever being reloaded,
210//! will keep old values around, taking up memory. This is a fundamental drawback of the approach
211//! taken by ArcShift. One workaround is to replace any long-lived infrequently reloaded instances of
212//! [`ArcShift`] with [`ArcShiftWeak`]. This alleviates the problem, though heap storage of approx
213//! `size_of<T>` + 5 words is still expended.
214//!
215//! # Pitfall #2 - reference count limitations
216//!
217//! ArcShift uses usize data type for the reference counts. However, it reserves two bits for
218//! tracking some metadata. This leaves usize::MAX/4 as the maximum usable reference count. To
219//! avoid having to check the refcount twice (once before increasing the count), we set the limit
220//! at usize::MAX/8, and check the count after the atomic operation. This has the effect that if more
221//! than usize::MAX/8 threads clone the same ArcShift instance concurrently, the unsoundness will occur.
222//! However, this is considered acceptable, because this exceeds the possible number
223//! of concurrent threads by a huge safety margin. Also note that usize::MAX/8 ArcShift instances would
224//! take up usize::MAX bytes of memory, which is very much impossible in practice. By leaking
225//! ArcShift instances in a tight loop it is still possible to achieve a weak count of usize::MAX/8,
226//! in which case ArcShift will panic.
227//!
228//! # A larger example
229//!
230//! ```no_run
231//! # extern crate arcshift;
232//! # use arcshift::ArcShift;
233//!
234//! struct CharacterModel {
235//! /* 3D model, textures, etc*/
236//! }
237//!
238//! struct World {
239//! models: Vec<ArcShift<CharacterModel>>
240//! }
241//!
242//! /// Loads models. Regularly scans filesystem,
243//! /// updates models when their files change on disk.
244//! fn load_models() -> Vec<ArcShift<CharacterModel>> {
245//! let models: Vec<ArcShift<CharacterModel>> = vec![];
246//!
247//! /* Somehow load models */
248//!
249//! let mut models_for_reloader = models.clone();
250//! std::thread::spawn(move||{
251//! loop {
252//! /* detect file system changes*/
253//! let changed_model = 0usize;
254//!
255//! models_for_reloader[changed_model].update(CharacterModel{/* newly loaded*/});
256//! }
257//!
258//! });
259//!
260//! models
261//! }
262//!
263//! fn run_game() {
264//! let mut world = World {
265//! models: load_models()
266//! };
267//! loop {
268//! run_game_logic(&mut world);
269//! }
270//! }
271//!
272//! fn run_game_logic(world: &mut World) {
273//! /*
274//! Do game logic, possibly in multiple threads, accessing different parts of World,
275//! possibly cloning 'ArcShift' instances for use by other threads
276//! */
277//!
278//! for model in world.models.iter_mut() {
279//! // Accessing ArcShift using 'get' ensures
280//! // old versions do not linger in RAM.
281//! let model_ref : &CharacterModel = model.get();
282//! // Do stuff with 'model_ref'
283//! }
284//! }
285//!
286//!
287//! ```
288//!
289
290#[cfg(feature = "std")]
291extern crate std;
292
293extern crate alloc;
294
295use alloc::boxed::Box;
296
297use core::alloc::Layout;
298
299use crate::deferred_panics_helper::DropHandler;
300use crate::deferred_panics_helper::IDropHandler;
301use core::cell::UnsafeCell;
302use core::fmt::{Debug, Formatter};
303use core::marker::PhantomData;
304#[allow(unused)]
305use core::mem::align_of_val;
306use core::mem::size_of;
307#[allow(unused)]
308use core::mem::size_of_val;
309use core::mem::{transmute, ManuallyDrop};
310use core::panic::UnwindSafe;
311use core::ptr::{addr_of_mut, null, null_mut, NonNull};
312use core::sync::atomic::Ordering;
313// About unsafe code in this crate:
314// Some private functions contain unsafe code, and place limitations on their
315// callers, without these private functions being marked unsafe.
316// The rationale is that there are lots of operations that simply aren't unsafe, like
317// assigning null to a pointer, that could cause UB in unsafe code in this crate.
318// This crate is inherently dependent on all the code in it being correct. Therefore,
319// marking more functions unsafe buys us very little.
320// Note! The API of this crate is 100% safe and UB should be impossible to trigger through the API.
321// All public methods are 100% sound, this argument only concerns private methods.
322
323// All atomic primitives are reexported from a
324// local module called 'atomic', so we can easily change between using
325// types from 'std' (normal case) and types from shuttle/loom testing libraries.
326
327mod deferred_panics_helper;
328
329/// Declarations of atomic ops for using Arcshift in production
330#[cfg(all(not(loom), not(feature = "shuttle")))]
331mod atomic {
332 pub use core::sync::atomic::{fence, AtomicPtr, AtomicUsize, Ordering};
333 #[allow(unused)]
334 #[cfg(feature = "std")]
335 pub use std::sync::{Arc, Mutex};
336 #[allow(unused)]
337 #[cfg(feature = "std")]
338 pub use std::thread;
339}
340
341/// Declarations for verifying Arcshift using 'shuttle'
342#[cfg(feature = "shuttle")]
343mod atomic {
344 #[allow(unused)]
345 pub use shuttle::sync::atomic::{fence, AtomicPtr, AtomicUsize, Ordering};
346 #[allow(unused)]
347 pub use shuttle::sync::{Arc, Mutex};
348 #[allow(unused)]
349 pub use shuttle::thread;
350}
351
352/// Declarations for verifying Arcshift using 'loom'
353#[cfg(loom)]
354mod atomic {
355 pub use loom::sync::atomic::{fence, AtomicPtr, AtomicUsize, Ordering};
356 #[allow(unused)]
357 pub use loom::sync::{Arc, Mutex};
358 #[allow(unused)]
359 pub use loom::thread;
360}
361
362#[doc(hidden)]
363#[cfg(all(feature = "std", feature = "debug", not(loom)))]
364#[macro_export]
365macro_rules! debug_println {
366 ($($x:tt)*) => {
367 std::println!("{:?}: {}", $crate::atomic::thread::current().id(), std::format!($($x)*))
368 }
369}
370
371#[doc(hidden)]
372#[cfg(all(feature = "std", feature = "debug", loom))]
373#[macro_export]
374macro_rules! debug_println {
375 ($($x:tt)*) => { std::println!($($x)*) }
376}
377
378#[doc(hidden)]
379#[cfg(any(not(feature = "std"), not(feature = "debug")))]
380#[macro_export]
381macro_rules! debug_println {
382 ($($x:tt)*) => {{}};
383}
384
385/// Smart pointer with similar use case as std::sync::Arc, but with
386/// the added ability to atomically replace the contents of the Arc.
387/// See `crate` documentation for more information.
388///
389/// ```rust
390/// # extern crate arcshift;
391/// # #[cfg(any(feature="loom",feature="shuttle"))]
392/// # fn main() {}
393/// # #[cfg(not(any(feature="loom",feature="shuttle")))]
394/// # pub fn main() {
395/// # use arcshift::ArcShift;
396/// let mut instance = ArcShift::new("test");
397/// println!("Value: {:?}", instance.get());
398/// # }
399/// ```
400pub struct ArcShift<T: ?Sized> {
401 item: NonNull<ItemHolderDummy<T>>,
402
403 // Make sure ArcShift is invariant.
404 // ArcShift instances with different lifetimes of T should not be compatible,
405 // since could lead to a short-lived value T being insert into a chain which
406 // also has long-lived ArcShift instances with long-lived T.
407 pd: PhantomData<*mut T>,
408}
409impl<T> UnwindSafe for ArcShift<T> {}
410
411impl<T: ?Sized> Debug for ArcShift<T>
412where
413 T: Debug,
414{
415 fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
416 write!(f, "ArcShift({:?})", self.shared_non_reloading_get())
417 }
418}
419
420impl<T: Default> Default for ArcShift<T> {
421 fn default() -> Self {
422 ArcShift::new(Default::default())
423 }
424}
425
426impl<T: ?Sized> Debug for ArcShiftWeak<T>
427where
428 T: Debug,
429{
430 fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
431 write!(f, "ArcShiftWeak(..)")
432 }
433}
434
435/// ArcShiftWeak is a way to keep a pointer to an object without preventing said object
436/// from being deallocated. This can be useful when creating cyclic data structure, to avoid
437/// memory leaks.
438///
439/// ```no_run
440/// # use arcshift::ArcShift;
441/// # #[cfg(any(feature="loom",feature="shuttle"))]
442/// # fn main() {}
443/// # #[cfg(not(any(feature="loom",feature="shuttle")))]
444/// # fn main() {
445/// # extern crate arcshift;
446/// # use arcshift::ArcShiftWeak;
447/// let original_instance = ArcShift::new("test");
448/// let light_instance = ArcShift::downgrade(&original_instance);
449/// let mut instance = light_instance.upgrade().unwrap();
450/// println!("Value: {:?}", instance.get());
451/// # }
452/// ```
453///
454/// WARNING! Because of implementation reasons, each instance of ArcShiftWeak will claim
455/// a memory equal to `size_of::<T>` (plus 5 words), even if the value inside it has freed,
456/// and even if all other instances of ArcShift/ArcShiftWeak have been dropped.
457/// If this limitation is unacceptable, consider using `ArcShiftWeak<Box<T>>` as your datatype,
458/// or possibly using a different crate.
459pub struct ArcShiftWeak<T: ?Sized> {
460 item: NonNull<ItemHolderDummy<T>>,
461}
462
463/// Module with a convenient cell-like data structure for reloading ArcShift instances
464/// despite only having shared access.
465pub mod cell;
466
467#[inline(always)]
468const fn is_sized<T: ?Sized>() -> bool {
469 size_of::<&T>() == size_of::<&()>()
470}
471
472/// SAFETY:
473/// If `T` is `Sync` and `Send`, `ArcShift<T>` can also be `Sync`
474unsafe impl<T: Sync + Send + ?Sized> Sync for ArcShift<T> {}
475
476/// SAFETY:
477/// If `T` is `Sync` and `Send`, `ArcShift<T>` can also be `Send`
478unsafe impl<T: Sync + Send + ?Sized> Send for ArcShift<T> {}
479
480/// SAFETY:
481/// If `T` is `Sync` and `Send`, `ArcShiftWeak<T>` can also be `Sync`
482unsafe impl<T: Sync + Send + ?Sized> Sync for ArcShiftWeak<T> {}
483
484/// SAFETY:
485/// If `T` is `Sync` and `Send`, `ArcShiftWeak<T>` can also be `Send`
486unsafe impl<T: Sync + Send + ?Sized> Send for ArcShiftWeak<T> {}
487
488#[repr(transparent)]
489struct UnsizedMetadata<T: ?Sized> {
490 #[cfg(feature = "nightly")]
491 meta: <T as std::ptr::Pointee>::Metadata,
492 #[cfg(not(feature = "nightly"))]
493 meta: *const (),
494 phantom: PhantomData<T>,
495}
496impl<T: ?Sized> Clone for UnsizedMetadata<T> {
497 fn clone(&self) -> Self {
498 *self
499 }
500}
501impl<T: ?Sized> Copy for UnsizedMetadata<T> {}
502impl<T: ?Sized> core::fmt::Debug for UnsizedMetadata<T> {
503 fn fmt(&self, f: &mut Formatter<'_>) -> core::fmt::Result {
504 write!(f, "Metadata({:?})", self.meta)
505 }
506}
507
508#[allow(dead_code)]
509#[repr(C)]
510struct FatPtr<T: ?Sized> {
511 ptr: *mut u8,
512 meta: UnsizedMetadata<T>,
513}
514
515#[inline]
516fn arc_from_raw_parts_mut<T: ?Sized, M: IMetadata>(
517 data_ptr: *mut u8,
518 metadata: UnsizedMetadata<T>,
519) -> *mut ItemHolder<T, M> {
520 // SAFETY:
521 // This is the best I managed without using nightly-only features (August 2024).
522 // It is sound as long as the actual internal representation of fat pointers doesn't change.
523 #[cfg(not(feature = "nightly"))]
524 unsafe {
525 core::mem::transmute_copy(&FatPtr {
526 ptr: data_ptr,
527 meta: metadata,
528 })
529 }
530 #[cfg(feature = "nightly")]
531 {
532 std::ptr::from_raw_parts_mut(data_ptr, metadata.meta)
533 }
534}
535
536#[inline]
537#[cfg(not(any(feature = "std", feature = "nostd_unchecked_panics")))]
538#[cfg_attr(test, mutants::skip)]
539pub(crate) fn ptr_from_raw_parts_mut<T: ?Sized>(
540 data_ptr: *mut u8,
541 metadata: UnsizedMetadata<T>,
542) -> *mut T {
543 // SAFETY:
544 // This is the best I managed without using nightly-only features (August 2024).
545 // It is sound as long as the actual internal representation of fat pointers doesn't change.
546 #[cfg(not(feature = "nightly"))]
547 unsafe {
548 core::mem::transmute_copy(&FatPtr {
549 ptr: data_ptr,
550 meta: metadata,
551 })
552 }
553 #[cfg(feature = "nightly")]
554 {
555 std::ptr::from_raw_parts_mut(data_ptr, metadata.meta)
556 }
557}
558
559impl<T: ?Sized> UnsizedMetadata<T> {
560 #[inline]
561 #[cfg(not(feature = "nightly"))]
562 fn polyfill_metadata(cur_ptr: *const T) -> *const () {
563 if is_sized::<T>() {
564 unreachable!() //We're never using this function for sized data
565 } else {
566 let ptrptr = &cur_ptr as *const *const T as *const *const ();
567 // SAFETY:
568 // This is a trick to get at the 'metadata'-part of the fat-ptr 'cur_ptr'.
569 // It works in practice, as long as the internal representation of fat pointers doesn't
570 // change.
571 unsafe { *ptrptr.wrapping_offset(1) }
572 }
573 }
574
575 #[inline]
576 pub fn new(cur_ptr: *const T) -> UnsizedMetadata<T> {
577 UnsizedMetadata {
578 #[cfg(feature = "nightly")]
579 meta: std::ptr::metadata(cur_ptr),
580 #[cfg(not(feature = "nightly"))]
581 meta: Self::polyfill_metadata(cur_ptr),
582 phantom: PhantomData,
583 }
584 }
585}
586
587fn get_holder_layout<T: ?Sized>(ptr: *const T) -> Layout {
588 // SAFETY:
589 // The pointer 'ptr' is a valid pointer
590 let payload_layout = Layout::for_value(unsafe { &*ptr });
591 if is_sized::<T>() {
592 let layout = get_holder_base_layout::<ItemHolder<(), SizedMetadata>>();
593 let (layout, _) = layout.extend(payload_layout).unwrap();
594 layout.pad_to_align()
595 } else {
596 let layout = Layout::new::<UnsizedMetadata<T>>();
597 let (layout, _) = layout
598 .extend(get_holder_base_layout::<ItemHolder<(), UnsizedMetadata<T>>>())
599 .unwrap();
600 let (layout, _) = layout.extend(payload_layout).unwrap();
601 layout.pad_to_align()
602 }
603}
604
605#[inline(always)]
606fn to_dummy<T: ?Sized, M: IMetadata>(ptr: *const ItemHolder<T, M>) -> *const ItemHolderDummy<T> {
607 ptr as *mut ItemHolderDummy<T>
608}
609
610#[inline(always)]
611fn from_dummy<T: ?Sized, M: IMetadata>(ptr: *const ItemHolderDummy<T>) -> *const ItemHolder<T, M> {
612 get_full_ptr_raw::<T, M>(ptr)
613}
614
615macro_rules! with_holder {
616 ($p: expr, $t: ty, $item: ident, $f:expr) => {
617 if is_sized::<$t>() {
618 let $item = from_dummy::<$t, SizedMetadata>($p.as_ptr());
619 $f
620 } else {
621 let $item = from_dummy::<$t, UnsizedMetadata<$t>>($p.as_ptr());
622 $f
623 }
624 };
625}
626
627fn make_sized_or_unsized_holder_from_box<T: ?Sized>(
628 item: Box<T>,
629 prev: *mut ItemHolderDummy<T>,
630) -> *const ItemHolderDummy<T> {
631 if is_sized::<T>() {
632 to_dummy(make_sized_holder_from_box(item, prev))
633 } else {
634 to_dummy(make_unsized_holder_from_box(item, prev))
635 }
636}
637
638#[allow(clippy::let_and_return)]
639fn get_weak_count(count: usize) -> usize {
640 let count = count & ((1 << (usize::BITS - 2)) - 1);
641 #[cfg(feature = "validate")]
642 assert!(count < MAX_REF_COUNT / 2);
643 count
644}
645
646/// Node has next, or will soon have next. Note! It may seem this is unnecessary,
647/// since if there's a "next", then that next node will hold a weak ref, so a lonely
648/// reference could detect that there was a 'next' anyway.
649///
650/// Consider the following sequence:
651/// * Instance A and B point to Node N1
652/// * A starts dropping. It loads weak count '2'.
653/// * B updates, creating N2, and advanced to N2, and is then dropped.
654/// * Weak count of N1 is now still 2, so the compare_exchange for A *succeeds*
655///
656/// Having this flag makes it impossible for N1 to succeed
657const WEAK_HAVE_NEXT: usize = 1 << (usize::BITS - 1);
658const WEAK_HAVE_PREV: usize = 1 << (usize::BITS - 2);
659
660/// To avoid potential unsafety if refcounts overflow and wrap around,
661/// we have a maximum limit for ref count values. This limit is set with a large margin relative
662/// to the actual wraparound value. Note that this
663/// limit is enforced post fact, meaning that if there are more than usize::MAX/8 or so
664/// simultaneous threads, unsoundness can occur. This is deemed acceptable, because it is
665/// impossible to achieve in practice. Especially since it basically requires an extremely long
666/// running program with memory leaks, or leaking memory very fast in a tight loop. Without leaking,
667/// is impractical to achieve such high refcounts, since having usize::MAX/8 ArcShift instances
668/// alive on a 64-bit machine is impossible, since this would require usize::MAX/2 bytes of memory,
669/// orders of magnitude larger than any existing machine (in 2025).
670const MAX_REF_COUNT: usize = usize::MAX / 8;
671
672fn get_weak_prev(count: usize) -> bool {
673 (count & WEAK_HAVE_PREV) != 0
674}
675
676#[allow(unused)]
677fn get_weak_next(count: usize) -> bool {
678 (count & WEAK_HAVE_NEXT) != 0
679}
680
681#[cfg_attr(test, mutants::skip)]
682fn initial_weak_count<T: ?Sized>(prev: *const ItemHolderDummy<T>) -> usize {
683 if prev.is_null() {
684 1
685 } else {
686 1 | WEAK_HAVE_PREV
687 }
688}
689
690#[allow(unused)]
691#[cfg(all(feature = "std", feature = "debug"))]
692#[cfg_attr(test, mutants::skip)]
693fn format_weak(weak: usize) -> std::string::String {
694 let count = weak & ((1 << (usize::BITS - 2)) - 1);
695 let have_next = (weak & WEAK_HAVE_NEXT) != 0;
696 let have_prev = (weak & WEAK_HAVE_PREV) != 0;
697 std::format!("(weak: {count} prev: {have_prev} next: {have_next})")
698}
699
700#[allow(unused)]
701fn make_unsized_holder_from_box<T: ?Sized>(
702 item: Box<T>,
703 prev: *mut ItemHolderDummy<T>,
704) -> *const ItemHolder<T, UnsizedMetadata<T>> {
705 let cur_ptr = Box::into_raw(item);
706
707 debug_println!("thesize: {:?}", cur_ptr);
708 let item_holder_ptr: *mut ItemHolder<T, UnsizedMetadata<T>>;
709
710 // SAFETY:
711 // The pointer 'cur_ptr' is a valid pointer (it's just been created by 'Box::into_raw'
712 let payload_layout = Layout::for_value(unsafe { &*cur_ptr });
713 let the_size = payload_layout.size();
714
715 if is_sized::<T>() {
716 unreachable!()
717 } else {
718 let layout = get_holder_layout::<T>(cur_ptr);
719
720 let metadata = UnsizedMetadata::new(cur_ptr);
721 debug_println!("Layout: {:?}, meta: {:?}", layout, metadata);
722 item_holder_ptr =
723 // SAFETY:
724 // core::alloc::alloc requires the allocated layout to have a nonzero size. This
725 // is fulfilled, since ItemHolder is non-zero sized even if T is zero-sized.
726 // The returned memory is uninitialized, but we will initialize the required parts of it
727 // below.
728 unsafe { arc_from_raw_parts_mut(alloc::alloc::alloc(layout) as *mut _, metadata) };
729 debug_println!("Sized result ptr: {:?}", item_holder_ptr);
730 // SAFETY:
731 // result_ptr is a valid pointer
732 unsafe {
733 addr_of_mut!((*item_holder_ptr).the_meta).write(metadata);
734 }
735 }
736
737 // SAFETY:
738 // The copy is safe because MaybeUninit<ItemHolder<T>> is guaranteed to have the same
739 // memory layout as ItemHolder<T>, so we're just copying a value of T to a new location.
740 unsafe {
741 (addr_of_mut!((*item_holder_ptr).payload) as *mut u8)
742 .copy_from(cur_ptr as *mut u8, the_size);
743 }
744 // SAFETY:
745 // next is just an AtomicPtr-type, for which all bit patterns are valid.
746 unsafe {
747 addr_of_mut!((*item_holder_ptr).prev).write(atomic::AtomicPtr::new(prev as *mut _));
748 }
749 // SAFETY:
750 // refcount is just an AtomicPtr-type, for which all bit patterns are valid.
751 unsafe {
752 addr_of_mut!((*item_holder_ptr).next).write(atomic::AtomicPtr::default());
753 }
754 // SAFETY:
755 // strong_count is just an AtomicUsize-type, for which all bit patterns are valid.
756 unsafe {
757 addr_of_mut!((*item_holder_ptr).strong_count).write(atomic::AtomicUsize::new(1));
758 }
759 // SAFETY:
760 // weak_count is just an AtomicUsize-type, for which all bit patterns are valid.
761 unsafe {
762 addr_of_mut!((*item_holder_ptr).weak_count)
763 .write(atomic::AtomicUsize::new(initial_weak_count(prev)));
764 }
765 // SAFETY:
766 // advance_count is just an AtomicUsize-type, for which all bit patterns are valid.
767 unsafe {
768 addr_of_mut!((*item_holder_ptr).advance_count).write(atomic::AtomicUsize::new(0));
769 }
770
771 // SAFETY:
772 // result_ptr is a valid pointer
773 #[cfg(feature = "validate")]
774 unsafe {
775 addr_of_mut!((*item_holder_ptr).magic1)
776 .write(std::sync::atomic::AtomicUsize::new(0xbeef8111));
777 }
778
779 // SAFETY:
780 // result_ptr is a valid pointer
781 #[cfg(feature = "validate")]
782 unsafe {
783 addr_of_mut!((*item_holder_ptr).magic2)
784 .write(std::sync::atomic::AtomicUsize::new(0x12348111));
785 }
786
787 // SAFETY:
788 // input_ptr is a *mut T that has been created from a Box<T>.
789 // Converting it to a Box<MaybeUninit<T>> is safe, and will make sure that any drop-method
790 // of T is not run. We must not drop T here, since we've moved it to the 'result_ptr'.
791 let _t: Box<ManuallyDrop<T>> = unsafe { Box::from_raw(cur_ptr as *mut ManuallyDrop<T>) }; //Free the memory, but don't drop 'input'
792
793 item_holder_ptr
794}
795
796#[allow(unused)]
797fn make_sized_holder<T>(
798 payload: T,
799 prev: *const ItemHolderDummy<T>,
800) -> *mut ItemHolder<T, SizedMetadata> {
801 let holder = ItemHolder {
802 the_meta: SizedMetadata,
803 #[cfg(feature = "validate")]
804 magic1: std::sync::atomic::AtomicUsize::new(0xbeef8111),
805 next: Default::default(),
806 prev: atomic::AtomicPtr::new(prev as *mut _),
807 strong_count: atomic::AtomicUsize::new(1),
808 weak_count: atomic::AtomicUsize::new(initial_weak_count::<T>(prev)),
809 advance_count: atomic::AtomicUsize::new(0),
810 #[cfg(feature = "validate")]
811 magic2: std::sync::atomic::AtomicUsize::new(0x12348111),
812 payload: UnsafeCell::new(ManuallyDrop::new(payload)),
813 };
814
815 Box::into_raw(Box::new(holder))
816}
817
818#[allow(unused)]
819fn make_sized_holder_from_box<T: ?Sized>(
820 item: Box<T>,
821 prev: *mut ItemHolderDummy<T>,
822) -> *mut ItemHolder<T, SizedMetadata> {
823 let cur_ptr = Box::into_raw(item);
824
825 debug_println!("thesize: {:?}", cur_ptr);
826
827 // SAFETY:
828 // 'cur_ptr' is a valid pointer, it's just been created by Box::into_raw
829 let payload_layout = Layout::for_value(unsafe { &*cur_ptr });
830 let the_size = payload_layout.size();
831
832 let layout = get_holder_layout::<T>(cur_ptr);
833
834 // SAFETY:
835 // The type '*mut ItemHolder<T, NoMeta>' is not actually a fat pointer. But since T:?Sized,
836 // the compiler treats it like a fat pointer with a zero-size metadata, which is not
837 // the exact same thing as a thin pointer (is my guess).
838 // Using transmute_copy to trim off the metadata is sound.
839 let item_holder_ptr: *mut ItemHolder<T, SizedMetadata> =
840 unsafe { core::mem::transmute_copy(&alloc::alloc::alloc(layout)) };
841
842 // SAFETY:
843 // The copy is safe because MaybeUninit<ItemHolder<T>> is guaranteed to have the same
844 // memory layout as ItemHolder<T>, so we're just copying a value of T to a new location.
845 unsafe {
846 (addr_of_mut!((*item_holder_ptr).payload) as *mut u8)
847 .copy_from(cur_ptr as *mut u8, the_size);
848 }
849 // SAFETY:
850 // next is just an AtomicPtr-type, for which all bit patterns are valid.
851 unsafe {
852 addr_of_mut!((*item_holder_ptr).prev).write(atomic::AtomicPtr::new(prev as *mut _));
853 }
854 // SAFETY:
855 // refcount is just an AtomicPtr-type, for which all bit patterns are valid.
856 unsafe {
857 addr_of_mut!((*item_holder_ptr).next).write(atomic::AtomicPtr::default());
858 }
859 // SAFETY:
860 // strong_count is just an AtomicUsize-type, for which all bit patterns are valid.
861 unsafe {
862 addr_of_mut!((*item_holder_ptr).strong_count).write(atomic::AtomicUsize::new(1));
863 }
864 // SAFETY:
865 // weak_count is just an AtomicUsize-type, for which all bit patterns are valid.
866 unsafe {
867 addr_of_mut!((*item_holder_ptr).weak_count)
868 .write(atomic::AtomicUsize::new(initial_weak_count(prev)));
869 }
870 // SAFETY:
871 // advance_count is just an AtomicUsize-type, for which all bit patterns are valid.
872 unsafe {
873 addr_of_mut!((*item_holder_ptr).advance_count).write(atomic::AtomicUsize::new(0));
874 }
875
876 // SAFETY:
877 // result_ptr is a valid pointer
878 #[cfg(feature = "validate")]
879 unsafe {
880 addr_of_mut!((*item_holder_ptr).magic1)
881 .write(std::sync::atomic::AtomicUsize::new(0xbeef8111));
882 }
883
884 // SAFETY:
885 // result_ptr is a valid pointer
886 #[cfg(feature = "validate")]
887 unsafe {
888 addr_of_mut!((*item_holder_ptr).magic2)
889 .write(std::sync::atomic::AtomicUsize::new(0x12348111));
890 }
891
892 // SAFETY:
893 // input_ptr is a *mut T that has been created from a Box<T>.
894 // Converting it to a Box<MaybeUninit<T>> is safe, and will make sure that any drop-method
895 // of T is not run. We must not drop T here, since we've moved it to the 'result_ptr'.
896 let _t: Box<ManuallyDrop<T>> = unsafe { Box::from_raw(cur_ptr as *mut ManuallyDrop<T>) }; //Free the memory, but don't drop 'input'
897
898 item_holder_ptr
899}
900fn get_full_ptr_raw<T: ?Sized, M: IMetadata>(
901 dummy: *const ItemHolderDummy<T>,
902) -> *const ItemHolder<T, M> {
903 if is_sized::<T>() {
904 assert_eq!(
905 size_of::<*const ItemHolder<T, M>>(),
906 size_of::<*const ItemHolderDummy<T>>()
907 ); //<- Verify that *const T is not a fat ptr
908
909 // SAFETY:
910 // '*const ItemHolder<T, M>' is not _actually_ a fat pointer (it is just pointer sized),
911 // so transmuting from dummy to it is correct.
912 unsafe { core::mem::transmute_copy(&dummy) }
913 } else {
914 assert_eq!(size_of::<M>(), size_of::<usize>()); //<- Verify that *const ItemHolder<T> is a fat ptr
915
916 let ptr_data = dummy as *mut _;
917 debug_println!("Dummy data: {:?}", ptr_data);
918 let metadata_ptr = dummy as *const UnsizedMetadata<T>;
919 debug_println!(
920 "Unsized, meta: {:?}, val: {:?}",
921 metadata_ptr,
922 // SAFETY:
923 // metadata_ptr is a valid pointer
924 unsafe { *metadata_ptr }
925 );
926 // SAFETY:
927 // metadata_ptr is a valid pointer
928 let metadata = unsafe { *metadata_ptr };
929 arc_from_raw_parts_mut(ptr_data, metadata)
930 }
931}
932
933pub(crate) struct SizedMetadata;
934trait IMetadata {}
935impl IMetadata for SizedMetadata {}
936impl<T: ?Sized> IMetadata for UnsizedMetadata<T> {}
937
938#[doc(hidden)]
939#[repr(transparent)]
940pub struct ItemHolderDummy<T: ?Sized> {
941 // void pointers should point to u8
942 _dummy: u8,
943 _phantom_data: PhantomData<T>,
944}
945
946/*
947Invariants:
948 * ItemHolder cannot be deallocated if weak-count > 0
949 * When strong-count goes from 0->1, weak-count must be incremented by 1
950 * When strong-count goes from 1->0, weak-count must be decremented by 1
951 * The rightmost ItemHolder cannot be dropped if there are other items (to the left).
952 (But it may be possible to apply the janitor operation and drop them)
953 * An ItemHolder can have its strong-count increased even if it is dropped,
954 so just because strong-count is > 0 doesn't mean the item isn't dropped.
955 * If strong-count > 0, weak-count is also > 0.
956 * ItemHolder cannot be dropped if item.prev.advance_count > 0
957 * Linked list defined by prev-pointers is the nominal order of the chain
958 * ItemHolder can only be dropped when holding a lock on item,
959 and a lock on item 'item.next', after having set 'item.prev.next' to 'item.next' (or further to the right).
960 See do_advance_impl for details regarding 'advance_count'.
961 * While dropping, the janitor process must be run. If a concurrent janitor task is detected,
962 it must be marked as 'disturbed', and the disturbed task must re-run from the beginning
963 after completing.
964 * The weak-count has encoded in it 2 flags: HAVE_WEAK_NEXT and HAVE_WEAK_PREV. These decide
965 if the item has an item to the right (next) and/or to the left (prev). The HAVE_WEAK_NEXT
966 means there is, or soon will be, a neighbor to the right. HAVE_WEAK_PREV there isn't,
967 or soon won't be, a neighbor to the left. When using compare_exchange to set the weak_count
968 to 0, the code will atomically ensure no prev/next exist and that count is 0, in one operation.
969
970*/
971
972/// This structure represents the heap allocation in which ArcShift payloads reside.
973/// Align 8 is needed, since we store flags in the lower 2 bits of the ItemHolder-pointers
974/// In practice, the alignment of ItemHolder is 8 anyway, but we specify it here for clarity.
975#[repr(align(8))]
976#[repr(C)] // Just to get the 'magic' first and last in memory. Shouldn't hurt.
977struct ItemHolder<T: ?Sized, M: IMetadata> {
978 the_meta: M,
979 #[cfg(feature = "validate")]
980 magic1: std::sync::atomic::AtomicUsize,
981 /// Can be incremented to keep the 'next' value alive. If 'next' is set to a new value, and
982 /// 'advance_count' is read as 0 after, then this node is definitely not going to advance to
983 /// some node *before* 'next'.
984 advance_count: atomic::AtomicUsize,
985
986 /// Pointer to the prev value, or null.
987 prev: atomic::AtomicPtr<ItemHolderDummy<T>>,
988 /// Strong count. When 0, the item is dropped.
989 /// Count must never be incremented from 0. This means increment must be done with
990 /// compare-exchange, not fetch_add.
991 strong_count: atomic::AtomicUsize,
992 /// Weak count. When reaches 0, the 'janitor' task may deallocate the node, iff
993 /// there are no previous nodes with 'advance_count' > 0.
994 /// The next node holds a weak ref to the prev. But prev nodes don't
995 /// hold weak counts on next.
996 /// MSB of 'weak_count' is a 'nonext' flag, that tells that this item is the last in the
997 /// chain. It is set to 1 *before* 'next' is assigned, so this is 0 when it is known
998 /// that this won't be the last ever.
999 weak_count: atomic::AtomicUsize,
1000 #[cfg(feature = "validate")]
1001 magic2: std::sync::atomic::AtomicUsize,
1002
1003 /// Pointer to the next value or null. Possibly decorated (i.e, least significant bit set)
1004 /// The decoration determines:
1005 /// * If janitor process is currently active for this node and those left of it
1006 /// * If payload has been deallocated
1007 /// * The current lock-holder (janitor) has been disturbed (i.e, some other thread couldn't
1008 /// do its job because it encountered the lock, and set the disturbed flag).
1009 next: atomic::AtomicPtr<ItemHolderDummy<T>>,
1010 pub(crate) payload: UnsafeCell<ManuallyDrop<T>>,
1011}
1012
1013// T is SizedMetadata or UnsizedMetadata
1014const fn get_holder_base_layout<T: Sized>() -> Layout {
1015 const {
1016 if size_of::<atomic::AtomicUsize>() % (size_of::<usize>()) != 0 {
1017 panic!("platform had unsupported size of atomic usize")
1018 }
1019 if size_of::<atomic::AtomicPtr<ItemHolderDummy<T>>>() % (size_of::<usize>()) != 0 {
1020 panic!("platform had unsupported size of atomic pointers")
1021 }
1022 if align_of::<atomic::AtomicUsize>() != size_of::<usize>() {
1023 panic!("platform had unsupported size of atomic usize")
1024 }
1025 if align_of::<atomic::AtomicPtr<ItemHolderDummy<T>>>() != size_of::<usize>() {
1026 panic!("platform had unsupported size of atomic pointers")
1027 }
1028 }
1029
1030 let base_size = 3 * size_of::<atomic::AtomicUsize>()
1031 + 2 * size_of::<atomic::AtomicPtr<ItemHolderDummy<T>>>();
1032
1033 #[cfg(not(feature = "validate"))]
1034 const EXTRA_WORDS: usize = 0;
1035 #[cfg(feature = "validate")]
1036 const EXTRA_WORDS: usize = 2;
1037
1038 // SAFETY:
1039 // repr(C) ignores zero sized types, and
1040 // since all items have the same size and padding,
1041 // the resulting size is just the number of elements
1042 // times the size of each element.
1043 unsafe {
1044 Layout::from_size_align_unchecked(
1045 EXTRA_WORDS * size_of::<usize>() + base_size,
1046 align_of::<usize>(),
1047 )
1048 }
1049}
1050
1051impl<T: ?Sized, M: IMetadata> PartialEq for ItemHolder<T, M> {
1052 #[allow(clippy::ptr_eq)] //We actually want to use addr_eq, but it's not available on older rust versions. Let's just use this for now.
1053 fn eq(&self, other: &ItemHolder<T, M>) -> bool {
1054 self as *const _ as *const u8 == other as *const _ as *const u8
1055 }
1056}
1057
1058impl<T: ?Sized, M: IMetadata> ItemHolder<T, M> {
1059 fn set_next(&self, undecorated_new_next: *const ItemHolderDummy<T>) {
1060 #[cfg(feature = "validate")]
1061 assert_eq!(
1062 get_decoration(undecorated_new_next),
1063 ItemStateEnum::UndisturbedUndecorated
1064 );
1065
1066 // Synchronizes with:
1067 // * 'has_payload' - irrelevant, has payload only cares about 'Dropped'-flag, not changed here
1068 // * 'lock_node_for_gc' - irrelevant, we already own the lock, and we're not changing the lock-bit
1069 // * 'do_upgrade_weak'
1070 //
1071 // Lock-free because we only loop if 'next' changes, which means
1072 // some other node has made progress.
1073 let mut prior_next = self.next.load(atomic::Ordering::Relaxed);
1074 loop {
1075 let new_next = decorate(undecorated_new_next, get_decoration(prior_next));
1076 match self.next.compare_exchange(
1077 prior_next,
1078 new_next as *mut _,
1079 atomic::Ordering::SeqCst, //atomic set_next
1080 atomic::Ordering::SeqCst,
1081 ) {
1082 Ok(_) => {
1083 debug_println!(
1084 "set {:x?}.next = {:x?}",
1085 to_dummy(self as *const _),
1086 new_next
1087 );
1088 return;
1089 }
1090 Err(err) => {
1091 prior_next = err;
1092 }
1093 }
1094 }
1095 }
1096
1097 /// Includes self
1098 #[allow(unused)]
1099 #[cfg(test)]
1100 #[cfg(feature = "std")]
1101 fn debug_all_to_left<'a>(&self) -> std::vec::Vec<&'a ItemHolder<T, M>> {
1102 let mut ret = alloc::vec::Vec::new();
1103 let mut item_ptr = self as *const ItemHolder<T, M>;
1104 // Lock-free because this loop is run in debug harness, and doesn't do any synchronization
1105 loop {
1106 // SAFETY:
1107 // item_ptr has just been created from the reference 'self', or was a non-null
1108 // 'prev' pointer of some valid reference, recursively. This method is only called
1109 // from the debug_validate machinery, that is 'unsafe' and requires the user to
1110 // make sure no concurrent access is occurring. At rest, 'prev' pointers are always
1111 // valid.
1112 ret.push(unsafe { &*item_ptr });
1113 // SAFETY:
1114 // See above. item_ptr is a valid pointer.
1115 let prev = from_dummy::<T, M>(unsafe { (*item_ptr).prev.load(Ordering::Relaxed) });
1116 if prev.is_null() {
1117 break;
1118 }
1119 item_ptr = prev;
1120 }
1121
1122 ret
1123 }
1124
1125 #[inline(always)]
1126 unsafe fn payload<'a>(&self) -> &'a T {
1127 // SAFETY:
1128 // Lifetime extension of shared references like this is permissible, as long as the object,
1129 // remains alive. Because of our refcounting, the object is kept alive as long as
1130 // the accessor (ArcShift-type) that calls this remains alive.
1131 unsafe { transmute::<&T, &'a T>(&*(self.payload.get())) }
1132 }
1133
1134 // SAFETY:
1135 // Node must be uniquely owned, and payload must never be accessed again
1136 #[cfg(not(any(feature = "std", feature = "nostd_unchecked_panics")))]
1137 unsafe fn take_boxed_payload(&self) -> Box<T> {
1138 let payload = &self.payload;
1139 debug_println!("boxing payload");
1140
1141 // SAFETY: item_ptr is now uniquely owned. We can drop it.
1142 let payload_val: &mut T = unsafe { &mut **(&mut *payload.get()) };
1143 let size = size_of_val(payload_val);
1144 let align = align_of_val(payload_val);
1145 let layout = Layout::from_size_align(size, align).unwrap();
1146 let thin_dest_ptr = if size == 0 {
1147 1 as *mut u8
1148 } else {
1149 alloc::alloc::alloc(layout)
1150 };
1151
1152 let fat_dest_ptr = if is_sized::<T>() {
1153 core::ptr::copy_nonoverlapping(payload_val as *mut _ as *mut u8, thin_dest_ptr, size);
1154 core::mem::transmute_copy(&thin_dest_ptr)
1155 } else {
1156 core::ptr::copy_nonoverlapping(payload_val as *mut _ as *mut u8, thin_dest_ptr, size);
1157 let meta = UnsizedMetadata::new(payload_val);
1158 ptr_from_raw_parts_mut(thin_dest_ptr, meta)
1159 };
1160
1161 Box::from_raw(fat_dest_ptr)
1162 }
1163
1164 /// Returns true if the node could be atomically locked
1165 fn lock_node_for_gc(&self) -> bool {
1166 // Lock free, see comment for each 'continue' statement, which are the only statements
1167 // that lead to looping.
1168 loop {
1169 // This can't be 'Relaxed', because if we read it as already locked and disturbed,
1170 // we must make sure that this is still the case at this point in the total order.
1171 // When dropping, after exiting here, we won't clean up the entire chain. Our current
1172 // thread is not going to stop this from occurring (we've advanced to the rightmost), but
1173 // if it doesn't do it itself, we must make sure some other thread does it.
1174 //
1175 // If we read a stale value here, it may have been written by some other node _before_
1176 // we advanced, in which case that node may have already retried the GC, and failed
1177 // (since we hadn't advanced). This would then lead to a memory leak.
1178 let cur_next = self.next.load(atomic::Ordering::SeqCst);
1179
1180 let decoration = get_decoration(cur_next);
1181 debug_println!(
1182 "loaded {:x?}.next, got {:?} (decoration: {:?})",
1183 self as *const ItemHolder<T, M>,
1184 cur_next,
1185 decoration
1186 );
1187 if decoration.is_unlocked() {
1188 let decorated = decorate(undecorate(cur_next), decoration.with_gc());
1189 let success = self
1190 .next
1191 .compare_exchange(
1192 cur_next,
1193 decorated as *mut _,
1194 Ordering::SeqCst, //atomic gc lock
1195 atomic::Ordering::SeqCst,
1196 )
1197 .is_ok();
1198 debug_println!(
1199 "Locking node {:x?}, result: {} (prior decoration: {:?}, new {:?})",
1200 self as *const ItemHolder<T, M>,
1201 success,
1202 decoration,
1203 get_decoration(decorated)
1204 );
1205 if !success {
1206 // Lock free, because we only end up here if some other thread did one of:
1207 // 1) added a new node, which is considered progress
1208 // 2) has locked the node, which is not progress, but will only lead to us
1209 // setting the 'disturbed' flag, which is only one step, and thus lock free.
1210 continue;
1211 }
1212 return true;
1213 } else {
1214 debug_println!(
1215 "Locking node {:x?} failed, already decorated: {:?}",
1216 self as *const ItemHolder<T, M>,
1217 decoration
1218 );
1219 if !decoration.is_disturbed() {
1220 let decorated = decorate(undecorate(cur_next), decoration.with_disturbed());
1221 match self.next.compare_exchange(
1222 cur_next,
1223 decorated as *mut _,
1224 Ordering::SeqCst, //atomic gc disturb
1225 atomic::Ordering::SeqCst,
1226 ) {
1227 Ok(_) => {
1228 debug_println!("Locking node {:x?} failed, set disturbed flag, new value: {:x?} (={:?})", self as * const ItemHolder<T, M>, decorated, get_decoration(decorated));
1229 }
1230 Err(prior) => {
1231 if !get_decoration(prior).is_disturbed() {
1232 // Lock free, because we can only end up here if some other thread did either:
1233 // 1) Add a new node, which is considered progress
1234 // 2) Removed the lock, which will lead to the other node advancing
1235 // to new things (which is progress), unless it was disturbed.
1236 // However, if it was disturbed, it wasn't due to us, since we
1237 // failed to set the disturbed flag.
1238 continue;
1239 }
1240 }
1241 }
1242 }
1243
1244 // Already decorated
1245 return false;
1246 }
1247 }
1248 }
1249
1250 /// Return true if node was disturbed
1251 #[must_use] //Not safe to leave a re-run request un-handled
1252 fn unlock_node(&self) -> bool {
1253 debug_println!("Unlocking node {:x?}", self as *const ItemHolder<T, M>);
1254 // Lock free, because we only loop if some other node modified 'next',
1255 // The only possible modifications are:
1256 // 1) Adding a new node, which is considered progress.
1257 // 2) Setting the disturbed flag, which means the other node has offloaded
1258 // work on us, and is now proceeding to is next task, which means there is
1259 // system progress, so algorithm remains lock free.
1260 loop {
1261 let cur_next = self.next.load(atomic::Ordering::Relaxed);
1262 let decoration = get_decoration(cur_next);
1263 let undecorated = decorate(undecorate(cur_next), decoration.without_gc_and_disturbed());
1264
1265 #[cfg(feature = "validate")]
1266 assert!(
1267 decoration.is_locked(),
1268 "node {:x?} was not actually locked, decoration: {:?}",
1269 self as *const _,
1270 decoration
1271 );
1272
1273 debug_println!("Unlocked dec {:x?}: {:x?}", self as *const Self, decoration);
1274 if self
1275 .next
1276 .compare_exchange(
1277 cur_next,
1278 undecorated as *mut _,
1279 Ordering::SeqCst, //atomic gc unlock
1280 Ordering::Relaxed,
1281 )
1282 .is_ok()
1283 {
1284 return decoration.is_disturbed();
1285 }
1286 }
1287 }
1288}
1289
1290#[cfg(all(feature = "std", any(loom, feature = "shuttle"), feature = "validate"))]
1291static MAGIC: std::sync::atomic::AtomicU16 = std::sync::atomic::AtomicU16::new(0);
1292
1293impl<T: ?Sized, M: IMetadata> ItemHolder<T, M> {
1294 #[cfg_attr(test, mutants::skip)]
1295 #[cfg(feature = "validate")]
1296 fn verify(ptr2: *const ItemHolder<T, M>) {
1297 {
1298 let ptr = ptr2 as *const ItemHolder<(), M>;
1299
1300 // SAFETY:
1301 // This function is never called with a null-ptr. Also, this is
1302 // just used for testing.
1303 let atomic_magic1 = unsafe { &*std::ptr::addr_of!((*ptr).magic1) };
1304 // SAFETY:
1305 // This function is never called with a null-ptr. Also, this is
1306 // just used for testing.
1307 let atomic_magic2 = unsafe { &*std::ptr::addr_of!((*ptr).magic2) };
1308
1309 let magic1 = atomic_magic1.load(Ordering::Relaxed);
1310 let magic2 = atomic_magic2.load(Ordering::Relaxed);
1311 if magic1 >> 16 != 0xbeef {
1312 debug_println!(
1313 "Internal error - bad magic1 in {:?}: {} ({:x})",
1314 ptr,
1315 magic1,
1316 magic1
1317 );
1318 debug_println!("Backtrace: {}", std::backtrace::Backtrace::capture());
1319 std::process::abort();
1320 }
1321 if magic2 >> 16 != 0x1234 {
1322 debug_println!(
1323 "Internal error - bad magic2 in {:?}: {} ({:x})",
1324 ptr,
1325 magic2,
1326 magic2
1327 );
1328 debug_println!("Backtrace: {}", std::backtrace::Backtrace::capture());
1329 std::process::abort();
1330 }
1331 #[cfg(not(any(loom, feature = "shuttle")))]
1332 {
1333 let m1 = magic1 & 0xffff;
1334 let m2 = magic2 & 0xffff;
1335 if m1 != 0x8111 || m2 != 0x8111 {
1336 debug_println!("Internal error - bad magic in {:?} {:x} {:x}", ptr, m1, m2);
1337 }
1338 }
1339
1340 #[cfg(any(loom, feature = "shuttle"))]
1341 {
1342 let diff = (magic1 & 0xffff) as isize - (magic2 & 0xffff) as isize;
1343 if diff != 0 {
1344 debug_println!(
1345 "Internal error - bad magics in {:?}: {} ({:x}) and {} ({:x})",
1346 ptr,
1347 magic1,
1348 magic1,
1349 magic2,
1350 magic2
1351 );
1352 debug_println!("Backtrace: {}", std::backtrace::Backtrace::capture());
1353 panic!();
1354 }
1355 let magic = MAGIC.fetch_add(1, Ordering::Relaxed);
1356 let magic = magic as isize as usize;
1357 atomic_magic1.fetch_and(0xffff_0000, Ordering::Relaxed);
1358 atomic_magic2.fetch_and(0xffff_0000, Ordering::Relaxed);
1359 atomic_magic1.fetch_or(magic, Ordering::Relaxed);
1360 atomic_magic2.fetch_or(magic, Ordering::Relaxed);
1361 }
1362 }
1363 }
1364}
1365
1366/// Check the magic values of the supplied pointer, validating it in a best-effort fashion
1367#[inline]
1368#[allow(unused)]
1369#[cfg_attr(test, mutants::skip)]
1370fn verify_item_impl<T: ?Sized, M: IMetadata>(_ptr: *const ItemHolderDummy<T>) {
1371 #[cfg(feature = "validate")]
1372 {
1373 assert_is_undecorated::<T, M>(_ptr);
1374
1375 let ptr = _ptr;
1376 let x = undecorate(ptr);
1377 if x != ptr {
1378 panic!("Internal error in ArcShift: Pointer given to verify was decorated, it shouldn't have been! {:?}", ptr);
1379 }
1380 if x.is_null() {
1381 return;
1382 }
1383 ItemHolder::<T, M>::verify(from_dummy::<T, M>(x))
1384 }
1385}
1386/// Check the magic values of the supplied pointer, validating it in a best-effort fashion
1387#[inline]
1388#[cfg_attr(test, mutants::skip)]
1389fn verify_item<T: ?Sized>(_ptr: *const ItemHolderDummy<T>) {
1390 #[cfg(feature = "validate")]
1391 {
1392 if is_sized::<T>() {
1393 verify_item_impl::<T, SizedMetadata>(_ptr)
1394 } else {
1395 verify_item_impl::<T, UnsizedMetadata<T>>(_ptr)
1396 }
1397 }
1398}
1399
1400const ITEM_STATE_LOCKED_FLAG: u8 = 1;
1401const ITEM_STATE_DROPPED_FLAG: u8 = 2;
1402const ITEM_STATE_DISTURBED_FLAG: u8 = 4;
1403
1404#[repr(u8)]
1405#[derive(Clone, Copy, Debug, PartialEq, Eq)]
1406#[allow(unused)]
1407enum ItemStateEnum {
1408 /// Pointer is not decorated
1409 UndisturbedUndecorated = 0,
1410 UndisturbedGcIsActive = 1,
1411 UndisturbedPayloadDropped = 2,
1412 UndisturbedPayloadDroppedAndGcActive = 3,
1413 DisturbedUndecorated = 4,
1414 DisturbedGcIsActive = 5,
1415 DisturbedPayloadDropped = 6,
1416 DisturbedPayloadDroppedAndGcActive = 7,
1417}
1418
1419impl ItemStateEnum {
1420 #[allow(unused)]
1421 fn is_locked(self) -> bool {
1422 (self as u8 & ITEM_STATE_LOCKED_FLAG) != 0
1423 }
1424 fn is_disturbed(self) -> bool {
1425 (self as u8 & ITEM_STATE_DISTURBED_FLAG) != 0
1426 }
1427 fn is_unlocked(self) -> bool {
1428 (self as u8 & ITEM_STATE_LOCKED_FLAG) == 0
1429 }
1430 fn dropped(self) -> ItemStateEnum {
1431 // SAFETY:
1432 // All 3-bit values are legal ItemStateEnum values, so oring on 4 is ok
1433 unsafe { core::mem::transmute::<u8, ItemStateEnum>(self as u8 | ITEM_STATE_DROPPED_FLAG) }
1434 }
1435 fn with_gc(self) -> ItemStateEnum {
1436 // SAFETY:
1437 // All 3-bit values are legal ItemStateEnum values, so oring on 1 is ok
1438 unsafe { core::mem::transmute::<u8, ItemStateEnum>(self as u8 | ITEM_STATE_LOCKED_FLAG) }
1439 }
1440 fn with_disturbed(self) -> ItemStateEnum {
1441 // SAFETY:
1442 // All 3-bit values are legal ItemStateEnum values, so oring on 1 is ok
1443 unsafe { core::mem::transmute::<u8, ItemStateEnum>(self as u8 | ITEM_STATE_DISTURBED_FLAG) }
1444 }
1445 fn without_gc_and_disturbed(self) -> ItemStateEnum {
1446 // SAFETY:
1447 // All 3-bit values are legal ItemStateEnum values, so anding out 1 is ok
1448 unsafe {
1449 core::mem::transmute::<u8, ItemStateEnum>(
1450 self as u8 & (!ITEM_STATE_LOCKED_FLAG) & (!ITEM_STATE_DISTURBED_FLAG),
1451 )
1452 }
1453 }
1454 fn is_dropped(self) -> bool {
1455 (self as u8 & ITEM_STATE_DROPPED_FLAG) != 0
1456 }
1457}
1458
1459fn decorate<T: ?Sized>(
1460 ptr: *const ItemHolderDummy<T>,
1461 e: ItemStateEnum,
1462) -> *const ItemHolderDummy<T> {
1463 let curdecoration = (ptr as usize) & 7;
1464 ((ptr as *const u8).wrapping_offset((e as isize) - (curdecoration as isize)))
1465 as *const ItemHolderDummy<T>
1466}
1467
1468/// Get the state encoded in the decoration, if any.
1469/// Returns None if the pointer is undecorated null.
1470/// The pointer must be valid.
1471fn get_decoration<T: ?Sized>(ptr: *const ItemHolderDummy<T>) -> ItemStateEnum {
1472 if ptr.is_null() {
1473 return ItemStateEnum::UndisturbedUndecorated;
1474 }
1475 let raw = ((ptr as usize) & 7) as u8;
1476 // SAFETY:
1477 // All values `0..=7` are valid ItemStateEnum.
1478 // And the bitmask produces a value `0..=7`
1479 unsafe { core::mem::transmute::<u8, ItemStateEnum>(raw) }
1480}
1481
1482/// Panic if the pointer is decorated
1483#[allow(unused)]
1484#[cfg_attr(test, mutants::skip)]
1485#[inline]
1486fn assert_is_undecorated<T: ?Sized, M: IMetadata>(_ptr: *const ItemHolderDummy<T>) {
1487 #[cfg(feature = "validate")]
1488 {
1489 let raw = _ptr as usize & 7;
1490 if raw != 0 {
1491 panic!("Internal error in ArcShift - unexpected decorated pointer");
1492 }
1493 }
1494}
1495
1496/// Return an undecorated version of the given pointer.
1497/// Supplying an already undecorated pointer is not an error, and returns
1498/// the value unmodified.
1499#[inline(always)]
1500fn undecorate<T: ?Sized>(cand: *const ItemHolderDummy<T>) -> *const ItemHolderDummy<T> {
1501 let raw = cand as usize & 7;
1502 (cand as *const u8).wrapping_offset(-(raw as isize)) as *const ItemHolderDummy<T>
1503}
1504
1505impl<T: ?Sized> Clone for ArcShift<T> {
1506 fn clone(&self) -> Self {
1507 let t = with_holder!(self.item, T, item, {
1508 let mut drop_jobs = DropHandler::default();
1509 let result = do_clone_strong(item, &mut drop_jobs);
1510 drop_jobs.resume_any_panics();
1511 result
1512 });
1513 ArcShift {
1514 // SAFETY:
1515 // The pointer returned by 'do_clone_strong' is always valid and non-null.
1516 item: unsafe { NonNull::new_unchecked(t as *mut _) },
1517 pd: PhantomData,
1518 }
1519 }
1520}
1521impl<T: ?Sized> Clone for ArcShiftWeak<T> {
1522 fn clone(&self) -> Self {
1523 let t = with_holder!(self.item, T, item, do_clone_weak(item));
1524 ArcShiftWeak {
1525 // SAFETY:
1526 // The pointer returned by 'do_clone_weak' is always valid and non-null.
1527 item: unsafe { NonNull::new_unchecked(t as *mut _) },
1528 }
1529 }
1530}
1531
1532fn do_clone_strong<T: ?Sized, M: IMetadata>(
1533 item_ptr: *const ItemHolder<T, M>,
1534 drop_job_queue: &mut impl IDropHandler<T, M>,
1535) -> *const ItemHolderDummy<T> {
1536 // SAFETY:
1537 // do_clone_strong must be called with a valid item_ptr
1538 let item = unsafe { &*item_ptr };
1539 debug_println!(
1540 "Strong clone, about to access strong count of {:x?}",
1541 item_ptr
1542 );
1543 let strong_count = item.strong_count.fetch_add(1, Ordering::Relaxed);
1544
1545 if strong_count > MAX_REF_COUNT {
1546 item.strong_count.fetch_sub(1, Ordering::Relaxed);
1547 panic!("strong ref count max limit exceeded")
1548 }
1549
1550 debug_println!(
1551 "strong-clone, new strong count: {:x?} is {}",
1552 item_ptr,
1553 strong_count + 1
1554 );
1555 #[cfg(feature = "validate")]
1556 assert!(strong_count > 0);
1557
1558 do_advance_strong::<T, M>(to_dummy::<T, M>(item_ptr), drop_job_queue)
1559}
1560
1561fn do_clone_weak<T: ?Sized, M: IMetadata>(
1562 item_ptr: *const ItemHolder<T, M>,
1563) -> *const ItemHolderDummy<T> {
1564 // SAFETY:
1565 // do_clone_weak must be called with a valid item_ptr
1566 let item = unsafe { &*item_ptr };
1567 let weak_count = item.weak_count.fetch_add(1, Ordering::Relaxed);
1568 if get_weak_count(weak_count) > MAX_REF_COUNT {
1569 item.weak_count.fetch_sub(1, Ordering::Relaxed);
1570 panic!("weak ref count max limit exceeded")
1571 }
1572 debug_println!(
1573 "==> weak-clone, fetch_add, new weak count: {:x?} is {}",
1574 item_ptr,
1575 format_weak(weak_count + 1)
1576 );
1577 #[cfg(feature = "validate")]
1578 assert!(weak_count > 0);
1579 to_dummy(item_ptr)
1580}
1581
1582fn do_upgrade_weak<T: ?Sized, M: IMetadata>(
1583 item_ptr: *const ItemHolder<T, M>,
1584 jobq: &mut DropHandler<T>,
1585) -> Option<*const ItemHolderDummy<T>> {
1586 debug_println!("executing do_upgrade_weak");
1587 // SAFETY:
1588 // do_upgrade_weak must be called with a valid item_ptr
1589 let start_item = unsafe { &*(item_ptr) };
1590 {
1591 // This is needed, since ostensibly, this method works on a weak clone of 'item_ptr'.
1592 // It is this weak clone that is converted to a strong item.
1593 // A Relaxed add is sufficient here, since we don't need to synchronize with anything.
1594 // We already have a weak link. The extra weak link added here only becomes relevant
1595 // the next time we _decrease_ weak_count, and since that will be an op on the same
1596 // atomic, it will definitely see the value written here, which is all we require.
1597 let weak_count = start_item.weak_count.fetch_add(1, Ordering::Relaxed);
1598 if get_weak_count(weak_count) > MAX_REF_COUNT {
1599 start_item.weak_count.fetch_sub(1, Ordering::Relaxed);
1600 panic!("weak ref count max limit exceeded")
1601 }
1602 debug_println!(
1603 "==> do_upgrade_weak {:x?} incr weak count to {}",
1604 item_ptr,
1605 format_weak(weak_count + 1)
1606 );
1607 }
1608 let mut item_ptr = to_dummy(item_ptr);
1609 // Lock free: See comment on each 'continue'
1610 loop {
1611 item_ptr = do_advance_weak::<T, M>(item_ptr);
1612 // We still have one weak count on 'item_ptr' at this point. do_advance_weak
1613 // decrements the weak on the original value, and gives you an advanced value with
1614 // one count on.
1615
1616 // SAFETY:
1617 // do_advance_weak always returns a valid non-null pointer
1618 let item = unsafe { &*from_dummy::<T, M>(item_ptr) };
1619 let prior_strong_count = item.strong_count.load(Ordering::SeqCst); //atomic upgrade read strong
1620 let item_next = item.next.load(Ordering::SeqCst); //atomic upgrade read next
1621
1622 if !undecorate(item_next).is_null() {
1623 // Race, there's a new node we need to advance to.
1624 continue;
1625 }
1626
1627 if !get_decoration(item_next).is_dropped() {
1628 if prior_strong_count == 0 {
1629 let _prior_weak_count = item.weak_count.fetch_add(1, Ordering::SeqCst); //atomic upgrade inc weak
1630 debug_println!(
1631 "==> pre-upgrade strong=0 {:x?}, increase weak to {}",
1632 item_ptr,
1633 get_weak_count(_prior_weak_count) + 1
1634 );
1635 }
1636
1637 if item
1638 .strong_count
1639 .compare_exchange(
1640 prior_strong_count,
1641 prior_strong_count + 1,
1642 Ordering::SeqCst, //atomic upgrade inc strong
1643 Ordering::Relaxed,
1644 )
1645 .is_ok()
1646 {
1647 // SAFETY:
1648 // item_ptr has been advanced to, and we thus hold a weak ref.
1649 // it is a valid pointer.
1650 let item = unsafe { &*from_dummy::<T, M>(item_ptr) };
1651 let item_next = item.next.load(Ordering::SeqCst); //atomic upgrade check next drop
1652 if !undecorate(item_next).is_null() {
1653 let new_prior_strong = item.strong_count.fetch_sub(1, Ordering::SeqCst); //atomic upgrade fail dropped
1654 if new_prior_strong == 1 {
1655 let _prior_weak_count = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic upgrade fail dec last strong
1656 #[cfg(feature = "validate")]
1657 assert!(get_weak_count(_prior_weak_count) > 1);
1658 debug_println!(
1659 "==> {:x?} reduced weak to {}",
1660 item_ptr,
1661 get_weak_count(_prior_weak_count - 1)
1662 );
1663 }
1664 // Lock free. We only get here if some other node has added a new node,
1665 // which is considered progress.
1666 continue;
1667 }
1668
1669 let _reduce_weak = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic upgrade sub weak
1670 debug_println!(
1671 "==> upgrade success, new strong_count: {}, new weak: {}",
1672 prior_strong_count + 1,
1673 format_weak(_reduce_weak - 1)
1674 );
1675 #[cfg(feature = "validate")]
1676 assert!(
1677 get_weak_count(_reduce_weak) > 1,
1678 "assertion failed: in do_upgrade_weak(1), reduced weak {item_ptr:x?} to 0"
1679 );
1680
1681 return Some(item_ptr);
1682 } else {
1683 if prior_strong_count == 0 {
1684 let _prior_weak_count = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic upgrade race dec weak
1685 debug_println!(
1686 "==> pre-upgrade strong=0 race2 {:x?}, decrease weak to {}",
1687 item_ptr,
1688 get_weak_count(_prior_weak_count) - 1
1689 );
1690 #[cfg(feature = "validate")]
1691 assert!(get_weak_count(_prior_weak_count) > 1);
1692 }
1693 debug_println!("Race on strong_count _increase_ - loop.");
1694 // Lcok free. We only get here if some other node has succeeded in increasing or
1695 // decreasing strong count, which only happens if there is progress, unless
1696 // the node undid a strong_count modification because of a detected race.
1697 // The possible strong_count undo operations are:
1698 // 1) If ArcShift::clone exceeds the maximum strong count limit. This is an
1699 // error case, but we'll also consider it progress.
1700 // 2) If ArcShift::upgrade detects a race on 'next' (new node added), which
1701 // is considered system-wide progress.
1702 // 3) In ArcShift::update, if it races with adding a new node, which is
1703 // considered progress.
1704 continue; //Race on strong count, try again
1705 }
1706 } else {
1707 do_drop_weak::<T, M>(item_ptr, jobq);
1708 return None;
1709 }
1710 }
1711}
1712
1713/// Returns an up-to-date pointer.
1714/// The closure is called with references to the old and new items, and allows
1715/// the caller to, for instance, decrease/inrcease strong counts. If no advance could be done,
1716/// this is always because we're already at the most up-to-date value. The closure is not called
1717/// in this case.
1718/// If the update closure fails, this can only be because there's a new 'next', and the
1719/// current item has been dropped.
1720///
1721/// This function requires the caller to have some sort of reference to 'item_ptr', so that
1722/// it can't go away during execution. This method does not touch that reference count.
1723///
1724/// The 'update' closure is called with a weak-ref taken on 'b', and 'a' in its original state.
1725/// It must either keep the weak ref, or possibly convert it to a strong ref.
1726/// It must also release whatever refcount it had on the original 'item_ptr'.
1727///
1728/// Guaranteed to return a valid, non-null pointer
1729fn do_advance_impl<T: ?Sized, M: IMetadata>(
1730 mut item_ptr: *const ItemHolderDummy<T>,
1731 mut update: impl FnMut(*const ItemHolder<T, M>, *const ItemHolder<T, M>) -> bool,
1732) -> *const ItemHolderDummy<T> {
1733 let start_ptr = item_ptr;
1734 verify_item(item_ptr);
1735
1736 debug_println!("advancing from {:x?}", item_ptr);
1737 // Lock free, because we traverse the chain and eventually reach the end,
1738 // except that we may fail 'update', in which case we loop.
1739 // However, update fails only if there has been system-wide progress (see comment in
1740 // do_advance_strong).
1741 loop {
1742 debug_println!("In advance-loop, item_ptr: {:x?}", item_ptr);
1743 // SAFETY:
1744 // item_ptr is a pointer we have advanced to. Forward advance always visits only
1745 // valid pointers. This is a core algorithm of ArcShift. By the careful atomic operations
1746 // below, and by knowing how the janitor-task works, we make sure that either 'advance_count'
1747 // or 'weak_count' keep all visited references alive, such that they cannot be garbage
1748 // collected.
1749 let item: &ItemHolder<T, M> = unsafe { &*from_dummy(item_ptr as *mut _) };
1750
1751 let next_ptr = undecorate(item.next.load(Ordering::SeqCst)); //atomic advance load next
1752 debug_println!("advancing from {:x?}, next_ptr = {:x?}", item_ptr, next_ptr);
1753 if next_ptr.is_null() {
1754 #[allow(clippy::collapsible_if)]
1755 if item_ptr != start_ptr {
1756 if !update(from_dummy(start_ptr), from_dummy(item_ptr)) {
1757 debug_println!("upgrade failed, probably no payload");
1758 continue;
1759 }
1760 }
1761 return item_ptr;
1762 }
1763
1764 let _advanced = item.advance_count.fetch_add(1, Ordering::SeqCst); //atomic advance inc advance_count
1765 debug_println!(
1766 "advance: Increasing {:x?}.advance_count to {}",
1767 item_ptr,
1768 _advanced + 1
1769 );
1770
1771 // This fence shouldn't be necessary, but without it loom fails, because
1772 // it doesn't actually model 'SeqCst' correctly. In principle, we could
1773 // have this fence only when running under loom, but then we'd not be testing
1774 // exactly the same thing as we run in prod, which seems unfortunate.
1775 //
1776 // Specifically, without this fence, loom might reorder this load before
1777 // the 'fetch_add' above, opening up the possibility that 'item.next' may be
1778 // dropped before we can do 'advance_count', and we'd still get that 'next'
1779 // in the load below.
1780 atomic::fence(Ordering::SeqCst);
1781
1782 // We must reload next, because it's only the next that is _now_ set that is actually
1783 // protected by 'advance_count' above!
1784 let next_ptr = undecorate(item.next.load(Ordering::SeqCst)); //atomic advance reload next
1785
1786 #[cfg(feature = "validate")]
1787 assert_ne!(next_ptr, item_ptr);
1788 // SAFETY:
1789 // next_ptr is guarded by our advance_count ref that we grabbed above.
1790 let next: &ItemHolder<T, M> = unsafe { &*from_dummy(next_ptr) };
1791 debug_println!("advance: Increasing next(={:x?}).weak_count", next_ptr);
1792 let _res = next.weak_count.fetch_add(1, Ordering::SeqCst); //atomic
1793 debug_println!(
1794 "==> do_advance_impl: increment weak of {:?}, was {}, now {}",
1795 next_ptr,
1796 format_weak(_res),
1797 format_weak(_res + 1),
1798 );
1799
1800 let _advanced = item.advance_count.fetch_sub(1, Ordering::SeqCst); //atomic advance dec advance_count
1801 debug_println!(
1802 "advance: Decreasing {:x?}.advance_count to {}",
1803 item_ptr,
1804 _advanced - 1
1805 );
1806
1807 if item_ptr != start_ptr {
1808 debug_println!("do_advance_impl: decrease weak from {:x?}", item_ptr);
1809 let _res = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic advance dec weak_count
1810 debug_println!(
1811 "==> do_advance_impl: decrease weak from {:x?} - decreased to {}",
1812 item_ptr,
1813 format_weak(_res - 1)
1814 );
1815
1816 // This should never reach 0 here. We _know_ that 'item' has a 'next'. Thus, this 'next'
1817 // must hold a reference count on this.
1818 #[cfg(feature = "validate")]
1819 assert!(
1820 get_weak_count(_res) > 1,
1821 "non rightmost node {:x?} can't be left with 0 weak count, but weak became {}",
1822 item_ptr,
1823 get_weak_count(_res) - 1
1824 );
1825 }
1826 item_ptr = next_ptr;
1827 }
1828}
1829
1830// Guaranteed to return a valid, non-null pointer
1831fn do_advance_weak<T: ?Sized, M: IMetadata>(
1832 item_ptr: *const ItemHolderDummy<T>,
1833) -> *const ItemHolderDummy<T> {
1834 do_advance_impl(
1835 item_ptr,
1836 |a: *const ItemHolder<T, M>, _b: *const ItemHolder<T, M>| {
1837 // SAFETY:
1838 // a is a valid pointer. do_advance_impl only supplies usable pointers to the callback.
1839 let _a_weak = unsafe { (*a).weak_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance weak dec weak_count a
1840
1841 // We have a weak ref count on 'b' given to use by `do_advance_impl`,
1842 // which we're fine with this and don't need to adjust b.weak_count
1843
1844 debug_println!(
1845 "==> weak advance {:x?}, decremented weak counts to a:{:x?}={}",
1846 a,
1847 a,
1848 format_weak(_a_weak.wrapping_sub(1)),
1849 );
1850 #[cfg(feature = "validate")]
1851 assert!(get_weak_count(_a_weak) > 1);
1852 true
1853 },
1854 )
1855}
1856
1857// Always returns valid, non-null, pointers
1858#[inline(never)]
1859fn do_advance_strong<T: ?Sized, M: IMetadata>(
1860 item_ptr: *const ItemHolderDummy<T>,
1861 drop_job_queue: &mut impl IDropHandler<T, M>,
1862) -> *const ItemHolderDummy<T> {
1863 do_advance_impl::<_, M>(item_ptr, move |a, b| {
1864 // SAFETY:
1865 // b is a valid pointer. do_advance_impl supplies the closure with only valid pointers.
1866 let mut b_strong = unsafe { (*b).strong_count.load(Ordering::SeqCst) }; //atomic advance strong load b strong_count
1867
1868 // Lock free, since we only loop when compare_exchange fails on 'strong_count', something
1869 // which only occurs when 'strong_count' changes, which only occurs when there is
1870 // system wide progress.
1871 //
1872 // Note, do_advance_strong_impl will loop if this returns false. See each
1873 // such return for an argument why this only happens if there's been system side
1874 // progress.
1875 loop {
1876 debug_println!(
1877 "do_advance_strong {:x?} -> {:x?} (b-count = {})",
1878 a,
1879 b,
1880 b_strong
1881 );
1882
1883 if b_strong == 0 {
1884 // a strong ref must _always_ imply a weak count, and that weak count must
1885 // always be decrementable by whoever happens to decrement the count to 0.
1886 // That might not be us (because we might race with another invocation of this code),
1887 // that might also increment the count.
1888 // SAFETY:
1889 // b is a valid pointer (see above)
1890 let _prior_weak = unsafe { (*b).weak_count.fetch_add(1, Ordering::SeqCst) }; //atomic advance strong inc b weak_count
1891 debug_println!(
1892 "==> Prior to strong_count increment {:x?} from 0, added weak count. Weak now: {}",
1893 b,
1894 format_weak(_prior_weak + 1)
1895 );
1896 }
1897
1898 // SAFETY:
1899 // b is a valid pointer. See above.
1900 match unsafe {
1901 (*b).strong_count.compare_exchange(
1902 b_strong,
1903 b_strong + 1,
1904 Ordering::SeqCst, //atomic advance icn b strong_count
1905 atomic::Ordering::SeqCst,
1906 )
1907 } {
1908 Ok(_) => {
1909 debug_println!(
1910 "b-strong increased {:x?}: {} -> {}",
1911 b,
1912 b_strong,
1913 b_strong + 1
1914 );
1915 // SAFETY:
1916 // b is a valid pointer. See above.
1917 let next_ptr = unsafe { (*b).next.load(Ordering::SeqCst) }; //atomic advance reload next
1918 if !undecorate(next_ptr).is_null() {
1919 // Race - even though _did_ have payload prior to us grabbing the strong count, it now might not.
1920 // This can only happen if there's some other node to the right that _does_ have a payload.
1921
1922 // SAFETY:
1923 // b is a valid pointer. See above.
1924 let prior_strong =
1925 unsafe { (*b).strong_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance fail dec b strong_count
1926 debug_println!("do_advance_strong:rare race, new _next_ appeared when increasing strong count: {:?} (decr strong back to {}) <|||||||||||||||||||||||||||||||||||||||||||||||||||||||>", b, prior_strong -1);
1927 #[cfg(feature = "validate")]
1928 assert!(
1929 prior_strong > 0,
1930 "strong count {b:x?} must be >0, but was 0"
1931 );
1932
1933 // If b_strong was not 0, it had a weak count. Our increment of it above would have made it so that
1934 // no other thread could observe it going to 0. After our decrement below, it might go to zero.
1935 // If it goes to zero in our decrement, and b_strong was not 0, it falls on us to free the weak_count.
1936 // Note, in the span from the increment to this decrement, it could be argued that the implicitly owned weak count
1937 // is "wrong". However, this is not observable, since the weak count is definitely not 0 (this closure does own a
1938 // weak ref).
1939 if prior_strong == 1 {
1940 // SAFETY:
1941 // b is a valid pointer. See above.
1942 let _b_weak_count =
1943 unsafe { (*b).weak_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance fail dec weak_count
1944 debug_println!("==> do_advance_strong:rare2 {:x?} reduced strong count back to 0, reduced weak to: {}", b, get_weak_count(_b_weak_count-1));
1945 #[cfg(feature = "validate")]
1946 assert!(get_weak_count(_b_weak_count) > 1);
1947 }
1948 debug_println!("race, other thread added new node");
1949 // Lock free, we can only get here if another thread has added a new node.
1950 return false;
1951 }
1952 #[cfg(feature = "validate")]
1953 assert!(!get_decoration(next_ptr).is_dropped());
1954
1955 // SAFETY:
1956 // b is a valid pointer. See above.
1957 let _b_weak_count = unsafe { (*b).weak_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance dec weak_count
1958 debug_println!(
1959 "==> strong advance {:x?}, reducing free weak b-count to {:?}",
1960 b,
1961 format_weak(_b_weak_count - 1)
1962 );
1963 #[cfg(feature = "validate")]
1964 assert!(get_weak_count(_b_weak_count) > 1);
1965
1966 // SAFETY:
1967 // a is a valid pointer. See above.
1968 let a_strong = unsafe { (*a).strong_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance dec a strong_count
1969 #[cfg(feature = "validate")]
1970 assert_ne!(a_strong, 0);
1971 debug_println!(
1972 "a-strong {:x?} decreased {} -> {}",
1973 a,
1974 a_strong,
1975 a_strong - 1,
1976 );
1977 if a_strong == 1 {
1978 do_drop_payload_if_possible(a, false, drop_job_queue);
1979 // Remove the implicit weak granted by the strong
1980 // SAFETY:
1981 // a is a valid pointer. See above.
1982 let _a_weak = unsafe { (*a).weak_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance drop payload a
1983 debug_println!("==> do_advance_strong:maybe dropping payload of a {:x?} (because strong-count is now 0). Adjusted weak to {}", a, format_weak(_a_weak.wrapping_sub(1)));
1984 #[cfg(feature = "validate")]
1985 assert!(get_weak_count(_a_weak) > 0);
1986 }
1987 return true;
1988 }
1989 Err(err_value) => {
1990 if b_strong == 0 {
1991 // SAFETY:
1992 // b is a valid pointer. See above.
1993 let _prior_weak = unsafe { (*b).weak_count.fetch_sub(1, Ordering::SeqCst) }; //atomic advance race
1994 debug_println!("==> do_advance_strong: race on strong_count for {:x?} (restore-decr weak to {})", b, get_weak_count(_prior_weak - 1));
1995 #[cfg(feature = "validate")]
1996 assert!(get_weak_count(_prior_weak) > 1);
1997 }
1998 b_strong = err_value;
1999 }
2000 }
2001 }
2002 })
2003}
2004
2005fn raw_deallocate_node<T: ?Sized, M: IMetadata>(
2006 item_ptr: *const ItemHolder<T, M>,
2007 jobq: &mut impl IDropHandler<T, M>,
2008) {
2009 verify_item(to_dummy(item_ptr));
2010 raw_do_unconditional_drop_payload_if_not_dropped(item_ptr as *mut ItemHolder<T, M>, jobq);
2011
2012 // SAFETY:
2013 // item_ptr is going to be deallocated, and the caller must make sure
2014 // that it has exclusive access.
2015 #[cfg(feature = "validate")]
2016 assert_eq!(
2017 // SAFETY:
2018 // item_ptr is guaranteed valid by caller
2019 unsafe { (*item_ptr).strong_count.load(Ordering::SeqCst) },
2020 0,
2021 "{item_ptr:x?} strong count must be 0 when deallocating"
2022 );
2023
2024 #[cfg(feature = "validate")]
2025 {
2026 // SAFETY:
2027 // item_ptr is guaranteed valid by caller
2028 let item_ref = unsafe { &*(item_ptr as *mut ItemHolder<T, M>) };
2029 item_ref.magic1.store(0xdeaddead, Ordering::Relaxed);
2030 item_ref.magic2.store(0xdeaddea2, Ordering::Relaxed);
2031 }
2032
2033 do_dealloc(item_ptr);
2034}
2035
2036#[derive(PartialEq)]
2037enum NodeStrongStatus {
2038 // We saw nodes with strong refs, that weren't the rightmost node
2039 StrongRefsFound,
2040 // We visited all nodes all the way to the leftmost one, and there were no strong refs
2041 // to the left of the rightmost one.
2042 NoStrongRefsExist,
2043 // We had to stop our leftward traverse, because nodes were locked, so we couldn't
2044 // say for sure if strong refs existed. This means that some other node is running an
2045 // old gc, and upon completion of this gc it will realize it has to advance, so this
2046 // other node will have a chance to detect that no strong refs exist.
2047 Indeterminate,
2048}
2049
2050fn do_janitor_task<T: ?Sized, M: IMetadata>(
2051 start_ptr: *const ItemHolder<T, M>,
2052 jobq: &mut impl IDropHandler<T, M>,
2053) -> (bool /*need re-run*/, NodeStrongStatus) {
2054 verify_item(to_dummy(start_ptr));
2055
2056 let mut have_seen_left_strong_refs = false;
2057 fn get_strong_status(strong: bool, leftmost: bool) -> NodeStrongStatus {
2058 if strong {
2059 NodeStrongStatus::StrongRefsFound
2060 } else if leftmost {
2061 NodeStrongStatus::NoStrongRefsExist
2062 } else {
2063 NodeStrongStatus::Indeterminate
2064 }
2065 }
2066
2067 debug_println!("Janitor task for {:x?}", start_ptr);
2068 // SAFETY:
2069 // start_ptr must be a valid pointer. This falls on the caller toensure.
2070 let start = unsafe { &*start_ptr };
2071
2072 let start_ptr = to_dummy(start_ptr);
2073
2074 if !start.lock_node_for_gc() {
2075 debug_println!("Janitor task for {:x?} - gc already active!", start_ptr);
2076 return (false, get_strong_status(have_seen_left_strong_refs, false));
2077 }
2078
2079 debug_println!("Loading prev of {:x?}", start_ptr);
2080 let mut cur_ptr: *const _ = start.prev.load(Ordering::SeqCst); //atomic janitor load prev
2081 debug_println!("Loaded prev of {:x?}, got: {:x?}", start_ptr, cur_ptr);
2082 let rightmost_candidate_ptr = cur_ptr;
2083 if cur_ptr.is_null() {
2084 debug_println!("Janitor task for {:x?} - no earlier nodes exist", start_ptr);
2085 // There are no earlier nodes to clean up
2086 let need_rerun = start.unlock_node();
2087 return (
2088 need_rerun,
2089 get_strong_status(have_seen_left_strong_refs, true),
2090 );
2091 }
2092
2093 let mut rightmost_deletable = null();
2094
2095 // It may seems that we iterate through the chain in many different ways
2096 // in this method, and that it should be possible to abstract this.
2097 // However, it turns out to be slightly tricky, since the iteration is always
2098 // done in a very careful way to maintain correctness, and exchanging the order
2099 // of two innocent looking statements can cause it to fail.
2100
2101 {
2102 let mut any_failed_locks = false;
2103 let cur_lock_start = cur_ptr;
2104 let mut cur_ptr = cur_ptr;
2105 debug_println!("Lock sweep start at {:x?}", cur_lock_start);
2106 let mut failed_lock = null();
2107 loop {
2108 // SAFETY: cur_ptr is a valid pointer It starts out equal to
2109 // 'start.prev', which is always valid or null, and we check it for null.
2110 // We then fetch 'cur.prev' pointers, which is valid, since we only do so if
2111 // we managed to take a lock. No other thread may free prev if it is locked.
2112 let cur: &ItemHolder<T, M> = unsafe { &*from_dummy(cur_ptr) };
2113
2114 if !cur.lock_node_for_gc() {
2115 failed_lock = cur_ptr;
2116 any_failed_locks = true;
2117 debug_println!(
2118 "Janitor task for {:x?} - found node already being gced: {:x?}",
2119 start_ptr,
2120 cur_ptr
2121 );
2122 break;
2123 }
2124
2125 let prev_ptr: *const _ = cur.prev.load(Ordering::SeqCst); //atomic janitor load prev 2
2126 debug_println!("Janitor loop considering {:x?} -> {:x?}", cur_ptr, prev_ptr);
2127 if prev_ptr.is_null() {
2128 debug_println!(
2129 "Janitor task for {:x?} - found leftmost node: {:x?}",
2130 start_ptr,
2131 cur_ptr
2132 );
2133 break;
2134 }
2135 cur_ptr = prev_ptr;
2136 }
2137 if any_failed_locks {
2138 let mut cur_ptr = cur_lock_start;
2139 debug_println!("Cleanup sweep start at {:x?}", cur_lock_start);
2140 let mut anyrerun = false;
2141 loop {
2142 debug_println!("Cleanup sweep at {:x?}", cur_ptr);
2143 if cur_ptr.is_null() {
2144 anyrerun = true;
2145 break;
2146 }
2147 if cur_ptr == failed_lock {
2148 debug_println!("Cleanup sweep is at end {:x?}", cur_ptr);
2149 break;
2150 }
2151 // SAFETY:
2152 // cur_ptr is a valid pointer. It's just the left traverse, across
2153 // a chain of locked nodes.
2154 let cur: &ItemHolder<T, M> = unsafe { &*from_dummy(cur_ptr) };
2155
2156 let prev_ptr: *const _ = cur.prev.load(Ordering::SeqCst); //atomic janitor cleanup load prev
2157 anyrerun |= cur.unlock_node();
2158 debug_println!("Cleanup sweep going to {:x?}", prev_ptr);
2159 cur_ptr = prev_ptr;
2160 }
2161 anyrerun |= start.unlock_node();
2162 debug_println!("Janitor failed to grab locks. Rerun: {:?}", anyrerun);
2163 return (
2164 anyrerun,
2165 get_strong_status(have_seen_left_strong_refs, false),
2166 );
2167 }
2168 }
2169
2170 // Lock free, since this just iterates through the chain of nodes,
2171 // which has a bounded length.
2172 loop {
2173 debug_println!("Accessing {:x?} in first janitor loop", cur_ptr);
2174 // SAFETY:
2175 // cur_ptr remains a valid pointer. Our lock on the nodes, as we traverse them backward,
2176 // ensures that no other janitor task can free them.
2177 let cur: &ItemHolder<T, M> = unsafe { &*from_dummy(cur_ptr) };
2178 debug_println!("Setting next of {:x?} to {:x?}", cur_ptr, start_ptr,);
2179 #[cfg(feature = "validate")]
2180 assert_ne!(cur_ptr, start_ptr);
2181
2182 if cur.strong_count.load(Ordering::SeqCst) > 0 {
2183 //atomic janitor check strong_count
2184 have_seen_left_strong_refs = true;
2185 }
2186
2187 let prev_ptr: *const _ = cur.prev.load(Ordering::SeqCst); //atomic janitor load prev 4
2188 debug_println!("Janitor loop considering {:x?} -> {:x?}", cur_ptr, prev_ptr);
2189 if prev_ptr.is_null() {
2190 debug_println!(
2191 "Janitor task for {:x?} - found leftmost node: {:x?}",
2192 start_ptr,
2193 cur_ptr
2194 );
2195 break;
2196 }
2197
2198 // SAFETY:
2199 // We guarded 'cur_ptr', and it is thus alive. It holds a weak-ref on its 'prev', which
2200 // means that ref must now be keeping 'prev' alive. This means 'prev' is known to be a
2201 // valid pointer. We never decorate prev-pointers, so no undecorate is needed.
2202 let prev: &ItemHolder<T, M> = unsafe { &*from_dummy(prev_ptr) };
2203 #[cfg(feature = "validate")]
2204 assert_ne!(prev_ptr, start_ptr);
2205
2206 prev.set_next(start_ptr);
2207
2208 // We need a fence here, because otherwise this load could be reordered before the
2209 // `set_next`call above, while running under loom. This could lead to a situation
2210 // where a thread reads the previous next-pointer, and we still don't see the new
2211 // 'advance_count' here. With the fence, we know that either:
2212 // 1) The other thread has seen our new 'next' value supplied above
2213 // or
2214 // 2) We see their incremented 'advance_count'.
2215 // Note, we may thus block the deletion of next unnecessarily, if the other thread
2216 // read our new 'next' value just after we wrote it, and incremented 'advance_count'
2217 // here before we loaded it. This race condition is benign however, the other thread
2218 // will also try janitoring, and will delete the node if necessary.
2219 atomic::fence(Ordering::SeqCst);
2220
2221 let adv_count = prev.advance_count.load(Ordering::SeqCst); //atomic janitor check advance_count
2222 debug_println!("advance_count of {:x?} is {}", prev_ptr, adv_count);
2223 if adv_count > 0 {
2224 debug_println!(
2225 "Janitor task for {:x?} - node {:x?} has advance in progress",
2226 start_ptr,
2227 prev_ptr
2228 );
2229 // All to the right of cur_ptr could be targets of the advance.
2230 // Because of races, we don't know that their target is actually 'start_ptr'
2231 // (though it could be).
2232 rightmost_deletable = cur_ptr;
2233 }
2234 cur_ptr = prev_ptr;
2235 }
2236
2237 fn delete_nodes_that_can_be_deleted<T: ?Sized, M: IMetadata>(
2238 mut item_ptr: *const ItemHolderDummy<T>,
2239 jobq: &mut impl IDropHandler<T, M>,
2240 ) -> Option<*const ItemHolderDummy<T>> {
2241 let mut deleted_count = 0;
2242
2243 // Lock free, since this loop at most iterates once per node in the chain.
2244 loop {
2245 if item_ptr.is_null() {
2246 debug_println!(
2247 "Find non-deleted {:x?}, count = {}, item_ptr = {:x?}",
2248 item_ptr,
2249 deleted_count,
2250 item_ptr,
2251 );
2252 return (deleted_count > 0).then_some(item_ptr);
2253 }
2254 // SAFETY:
2255 // caller must ensure item_ptr is valid, or null. And it is not null, we checked above.
2256 let item: &ItemHolder<T, M> = unsafe { &*from_dummy(item_ptr as *mut _) };
2257 // Item is *known* to have a 'next'!=null here, so
2258 // we know weak_count == 1 implies it is only referenced by its 'next'
2259 let item_weak_count = item.weak_count.load(Ordering::SeqCst); //atomic janitor load weak_count
2260 if get_weak_count(item_weak_count) > 1 {
2261 debug_println!(
2262 "==> Find non-deleted {:x?}, count = {}, found node with weak count > 1 ({})",
2263 item_ptr,
2264 deleted_count,
2265 format_weak(item_weak_count)
2266 );
2267 return (deleted_count > 0).then_some(item_ptr);
2268 }
2269 debug_println!(
2270 "==> Deallocating node in janitor task: {:x?}, dec: ?, weak count: {}",
2271 item_ptr,
2272 // SAFETY:
2273 // item_ptr is valid
2274 format_weak(item_weak_count),
2275 );
2276
2277 #[cfg(feature = "validate")]
2278 {
2279 let prior_item_weak_count = item.weak_count.load(Ordering::SeqCst);
2280 #[cfg(feature = "validate")]
2281 assert_eq!(
2282 get_weak_count(prior_item_weak_count),
2283 1,
2284 "{item_ptr:x?} weak_count should still be 1, and we decrement it to 0"
2285 );
2286 }
2287
2288 let prev_item_ptr = item.prev.load(Ordering::SeqCst); //atomic janitor deletion load prev
2289 raw_deallocate_node(from_dummy::<T, M>(item_ptr as *mut _), jobq);
2290 deleted_count += 1;
2291 item_ptr = prev_item_ptr;
2292 }
2293 }
2294
2295 let mut cur_ptr = rightmost_candidate_ptr;
2296 let mut right_ptr = start_ptr;
2297 let mut must_see_before_deletes_can_be_made = rightmost_deletable;
2298 debug_println!("Starting final janitor loop");
2299 let mut need_rerun = false;
2300 // We must retain the locks one-past the node with the lock (lock works leftward)
2301 let mut unlock_carry: Option<*const ItemHolder<T, M>> = None;
2302 let mut do_unlock = |node: Option<*const ItemHolder<T, M>>| {
2303 if let Some(unlock) = unlock_carry.take() {
2304 // SAFETY:
2305 // Caller ensures pointer is valid.
2306 need_rerun |= unsafe { (*unlock).unlock_node() };
2307 }
2308 unlock_carry = node;
2309 };
2310
2311 #[cfg(feature = "validate")]
2312 assert!(!cur_ptr.is_null());
2313
2314 // Iterate through chain, deleting nodes when possible
2315 // Lock free, since this loop just iterates through each node in the chain.
2316 while !cur_ptr.is_null() {
2317 let new_predecessor = must_see_before_deletes_can_be_made
2318 .is_null()
2319 .then(|| delete_nodes_that_can_be_deleted::<T, M>(cur_ptr, jobq))
2320 .flatten();
2321
2322 if cur_ptr == must_see_before_deletes_can_be_made {
2323 debug_println!("Found must_see node: {:x?}", cur_ptr);
2324 // cur_ptr can't be deleted itself, but nodes further to the left can
2325 must_see_before_deletes_can_be_made = null_mut();
2326 }
2327
2328 if let Some(new_predecessor_ptr) = new_predecessor {
2329 // SAFETY:
2330 // right_ptr is always the start pointer, or the previous pointer we visited.
2331 // in any case, it is non-null and valid.
2332 let right: &ItemHolder<T, M> = unsafe { &*from_dummy(right_ptr) };
2333 debug_println!(
2334 "After janitor delete, setting {:x?}.prev = {:x?}",
2335 right_ptr,
2336 new_predecessor_ptr
2337 );
2338 if new_predecessor_ptr.is_null() {
2339 right
2340 .weak_count
2341 .fetch_and(!WEAK_HAVE_PREV, Ordering::SeqCst); //atomic janitor clear prev
2342 }
2343 right
2344 .prev
2345 .store(new_predecessor_ptr as *mut _, Ordering::SeqCst); //atomic janitor unlink
2346
2347 if new_predecessor_ptr.is_null() {
2348 debug_println!("new_predecessor is null");
2349 break;
2350 }
2351 debug_println!("found new_predecessor: {:x?}", new_predecessor_ptr);
2352 right_ptr = new_predecessor_ptr;
2353 // SAFETY:
2354 // We only visit nodes we have managed to lock. new_predecessor is such a node
2355 // and it is not being deleted. It is safe to access.
2356 let new_predecessor = unsafe { &*from_dummy::<T, M>(new_predecessor_ptr) };
2357 cur_ptr = new_predecessor.prev.load(Ordering::SeqCst); //atomic janitor load prev 6
2358 #[cfg(feature = "validate")]
2359 assert_ne!(
2360 new_predecessor_ptr as *const _,
2361 null(),
2362 "assert failed, {new_predecessor_ptr:?} was endptr"
2363 );
2364 do_unlock(Some(new_predecessor));
2365 debug_println!("New candidate advanced to {:x?}", cur_ptr);
2366 } else {
2367 debug_println!("Advancing to left, no node to delete found {:x?}", cur_ptr);
2368 // SAFETY:
2369 // All the pointers we traverse (leftward) through the chain are valid,
2370 // since we lock them succesively as we traverse.
2371 let cur: &ItemHolder<T, M> = unsafe { &*from_dummy(cur_ptr) };
2372 right_ptr = cur_ptr;
2373 cur_ptr = cur.prev.load(Ordering::SeqCst); //atomic janitor load prev 7
2374 debug_println!("Advancing to left, advanced to {:x?}", cur_ptr);
2375 do_unlock(Some(cur));
2376 }
2377 }
2378 do_unlock(None);
2379 debug_println!("Unlock start-node {:x?}", start_ptr);
2380 need_rerun |= start.unlock_node();
2381 debug_println!("start-node unlocked: {:x?}", start_ptr);
2382 (
2383 need_rerun,
2384 get_strong_status(have_seen_left_strong_refs, true),
2385 )
2386}
2387
2388/// drop_payload should be set to drop the payload of 'original_item_ptr' iff it is
2389/// either not rightmost, or all refs to all nodes are just weak refs (i.e, no strong refs exist)
2390fn do_drop_weak<T: ?Sized, M: IMetadata>(
2391 original_item_ptr: *const ItemHolderDummy<T>,
2392 drop_job_queue: &mut impl IDropHandler<T, M>,
2393) {
2394 debug_println!("drop weak {:x?}", original_item_ptr);
2395 let mut item_ptr = original_item_ptr;
2396 // Lock free, since we only loop in these cases:
2397 // 1) If the janitor task was disturbed. If it was, some other thread has offloaded
2398 // work on us, which means the other thread could carry on and do other stuff, which
2399 // means there is system wide progress.
2400 // 2) If a new 'next' pointer has been set, i.e, a new node has been added. This is
2401 // considered progress.
2402 // 3) Some other node changed the weak_count. In that case, there are two cases:
2403 // 3a) That other node was running 'do_drop_weak'. After changing the weak_count
2404 // there are no more loop conditions, so it has definitely made progress.
2405 // 3b) That node was running some other work. It managed to change the weak count,
2406 // and is not running 'do_drop_weak', so the system is making progress as long as
2407 // that other thread is making progress, which it must be (see all other comments),
2408 // and it cannot have been obstructed by this thread.
2409 loop {
2410 debug_println!("drop weak loop {:?}", item_ptr);
2411 item_ptr = do_advance_weak::<T, M>(item_ptr);
2412 let (need_rerun, strong_refs) =
2413 do_janitor_task(from_dummy::<T, M>(item_ptr), drop_job_queue);
2414 if need_rerun {
2415 debug_println!("Janitor {:?} was disturbed. Need rerun", item_ptr);
2416 continue;
2417 }
2418
2419 // When we get here, we know we are advanced to the most recent node
2420
2421 debug_println!("Accessing {:?}", item_ptr);
2422 // SAFETY:
2423 // item_ptr is a pointer that was returned by 'do_advance_weak'. The janitor task
2424 // never deallocates the pointer given to it (only prev pointers of that pointer).
2425 // do_advance_weak always returns valid non-null pointers.
2426 let item: &ItemHolder<T, M> = unsafe { &*from_dummy(item_ptr) };
2427 let prior_weak = item.weak_count.load(Ordering::SeqCst); //atomic drop_weak load weak_count
2428
2429 #[cfg(feature = "validate")]
2430 assert!(get_weak_count(prior_weak) > 0);
2431
2432 let next_ptr = item.next.load(Ordering::SeqCst); //atomic drop_weak load next
2433 let have_next = !undecorate(next_ptr).is_null();
2434 if have_next {
2435 // drop_weak raced with 'add'.
2436 debug_println!(
2437 "add race {:x?}, prior_weak: {}",
2438 item_ptr,
2439 format_weak(prior_weak)
2440 );
2441 continue;
2442 }
2443 // Note, 'next' may be set here, immediately after us loading 'next' above.
2444 // This is benign. In this case, 'prior_weak' will also have get_weak_next(prior_weak)==true
2445 // and get_weak_count(prior_weak)>1. We will not drop the entire chain, but whoever
2446 // just set 'next' definitely will.
2447
2448 #[cfg(feature = "debug")]
2449 if get_weak_next(prior_weak) {
2450 debug_println!("raced in drop weak - {:x?} was previously advanced to rightmost, but now it has a 'next' again (next is ?)", item_ptr);
2451 }
2452
2453 debug_println!("No add race, prior_weak: {}", format_weak(prior_weak));
2454
2455 // We now have enough information to know if we can drop payload, if desired
2456 {
2457 // SAFETY:
2458 // item_ptr was returned by do_advance_weak, and is thus valid.
2459 let o_item = unsafe { &*from_dummy::<T, M>(item_ptr) };
2460 let o_strong = o_item.strong_count.load(Ordering::SeqCst); //atomic drop_weak check strong_count
2461 if o_strong == 0 {
2462 let can_drop_now_despite_next_check = if strong_refs
2463 == NodeStrongStatus::NoStrongRefsExist
2464 {
2465 debug_println!("final drop analysis {:x?}: Can drop this payload, because no strong refs exists anywhere", item_ptr);
2466 true
2467 } else {
2468 debug_println!("final drop analysis {:x?}: no exemption condition found, can't drop this payload", item_ptr);
2469 false
2470 };
2471 if can_drop_now_despite_next_check {
2472 do_drop_payload_if_possible(
2473 o_item,
2474 can_drop_now_despite_next_check,
2475 drop_job_queue,
2476 );
2477 }
2478 }
2479 }
2480
2481 match item.weak_count.compare_exchange(
2482 prior_weak,
2483 prior_weak - 1,
2484 Ordering::SeqCst, //atomic drop_weak dec weak
2485 Ordering::SeqCst,
2486 ) {
2487 Ok(_) => {
2488 debug_println!(
2489 "==> drop weak {:x?}, did reduce weak to {}",
2490 item_ptr,
2491 format_weak(prior_weak - 1)
2492 );
2493 }
2494 Err(_err_weak) => {
2495 debug_println!(
2496 "drop weak {:x?}, raced on count decrement (was {}, expected {})",
2497 item_ptr,
2498 format_weak(_err_weak),
2499 format_weak(prior_weak)
2500 );
2501
2502 // Note:
2503 // This case occurs if two threads race to drop a weak ref.
2504 // Crucially, it is also in play to distinguish between these two scenarios:
2505 // 1: Two nodes have weak refs. One of them (A) is dropped, the other (B) remains.
2506 // 2: Two nodes have weak refs. One of them (A) updates the value, and is then
2507 // dropped. The other (B) remains.
2508 //
2509 // In both cases, when B is dropped, it will see a total weak count of 2.
2510 // However, in case 2, B must re-run 'advance', and do another janitor cycle.
2511 // This *has* to be checked during the weak_count compare_exchange, since doing
2512 // it at any point prior to that opens up the risk of a race where node A above
2513 // manages to update and drop before the compare-exchange.
2514
2515 continue;
2516 }
2517 }
2518
2519 debug_println!(
2520 "Prior weak of {:x?} is {}",
2521 item_ptr,
2522 format_weak(prior_weak)
2523 );
2524 if get_weak_count(prior_weak) == 1 {
2525 #[cfg(feature = "validate")]
2526 if get_weak_next(prior_weak) {
2527 std::eprintln!("This case should not be possible, since it implies the 'next' object didn't increase the refcount");
2528 std::process::abort();
2529 }
2530 // We know there's no next here either, since we checked 'get_weak_next' before the cmpxch, so we _KNOW_ we're the only user now.
2531 if !get_weak_prev(prior_weak) {
2532 debug_println!("drop weak {:x?}, prior count = 1, raw deallocate", item_ptr);
2533 drop_job_queue.report_sole_user();
2534 raw_deallocate_node(from_dummy::<T, M>(item_ptr), drop_job_queue);
2535 } else {
2536 debug_println!(
2537 "drop weak {:x?}, couldn't drop node, because there are nodes to the left",
2538 item_ptr
2539 );
2540 // These nodes will presumably realize they're holding leftwise refs, and before dropping
2541 // they must advance. And thus the whole chain _will_ be collected.
2542 }
2543 }
2544 return;
2545 }
2546}
2547
2548fn do_update<T: ?Sized, M: IMetadata>(
2549 initial_item_ptr: *const ItemHolder<T, M>,
2550 mut val_dummy_factory: impl FnMut(&ItemHolder<T, M>) -> Option<*const ItemHolderDummy<T>>,
2551 drop_job_queue: &mut impl IDropHandler<T, M>,
2552) -> *const ItemHolderDummy<T> {
2553 let mut item_ptr = to_dummy::<T, M>(initial_item_ptr);
2554 verify_item(item_ptr);
2555
2556 // Lock free. This loop only loops if another thread has changed 'next', which
2557 // means there is system wide progress.
2558 loop {
2559 item_ptr = do_advance_strong::<T, M>(item_ptr, drop_job_queue);
2560
2561 // SAFETY:
2562 // item_ptr was returned by do_advance_strong, and is thus valid.
2563 let item = unsafe { &*from_dummy::<T, M>(item_ptr) };
2564 debug_println!(
2565 "Updating {:x?} , loading next of {:x?}",
2566 initial_item_ptr,
2567 item_ptr
2568 );
2569
2570 let cur_next = item.next.load(Ordering::SeqCst); //atomic update check_next
2571 if !undecorate(cur_next).is_null() {
2572 continue;
2573 }
2574 let Some(val_dummy) = val_dummy_factory(item) else {
2575 return item_ptr;
2576 };
2577 let new_next = decorate(val_dummy, get_decoration(cur_next));
2578
2579 debug_println!("Updating {:x?} to {:x?}", item_ptr, val_dummy);
2580 let val = from_dummy::<T, M>(val_dummy) as *mut ItemHolder<T, M>;
2581 // SAFETY:
2582 // val is the new value supplied by the caller. It is always valid, and it is not
2583 // yet linked into the chain, so no other node could free it.
2584 unsafe { (*val).prev = atomic::AtomicPtr::new(item_ptr as *mut _) };
2585
2586 // We must increment weak_count here, because this is the signal that
2587 // causes the drop_weak impl to realize that it is racing with an 'update',
2588 // (in contrast to perhaps racing with another drop, in which case it might
2589 // win the race and might be required to deallocate, to avoid memory leaks).
2590 let _weak_was = item.weak_count.fetch_add(1, Ordering::SeqCst);
2591
2592 #[cfg(feature = "validate")]
2593 assert!(_weak_was > 0);
2594 debug_println!(
2595 "do_update: increment weak of {:?} (weak now: {})",
2596 item_ptr,
2597 format_weak(_weak_was + 1)
2598 );
2599
2600 debug_println!("bef item.next.compare_exchange");
2601 match item.next.compare_exchange(
2602 cur_next,
2603 new_next as *mut _,
2604 Ordering::SeqCst, //atomic update exchange next
2605 Ordering::SeqCst,
2606 ) {
2607 Ok(_) => {
2608 item.weak_count.fetch_or(WEAK_HAVE_NEXT, Ordering::SeqCst); //atomic update mark have next
2609 debug_println!("aft1 item.next.compare_exchange");
2610 }
2611 Err(_) => {
2612 debug_println!("aft2 item.next.compare_exchange");
2613 debug_println!("race, update of {:x?} to {:x?} failed", item_ptr, new_next);
2614 let _res = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic update fail dec weak_count
2615 debug_println!(
2616 "==> race, decreasing {:x?} weak to {}",
2617 item_ptr,
2618 format_weak(_res - 1)
2619 );
2620 #[cfg(feature = "validate")]
2621 assert!(get_weak_count(_res) > 1);
2622
2623 continue;
2624 }
2625 }
2626
2627 let strong_count = item.strong_count.fetch_sub(1, Ordering::SeqCst); //atomic update dec strong_count
2628 debug_println!(
2629 "do_update: strong count {:x?} is now decremented to {}",
2630 item_ptr,
2631 strong_count - 1,
2632 );
2633 if strong_count == 1 {
2634 // It's safe to drop payload here, we've just now added a new item
2635 // that has its payload un-dropped, so there exists something to advance to.
2636 do_drop_payload_if_possible(from_dummy::<T, M>(item_ptr), false, drop_job_queue);
2637 let _weak_count = item.weak_count.fetch_sub(1, Ordering::SeqCst); //atomic update dec weak_count
2638 debug_println!(
2639 "==> do_update: decrement weak of {:?}, new weak: {}",
2640 item_ptr,
2641 format_weak(_weak_count.saturating_sub(1))
2642 );
2643 #[cfg(feature = "validate")]
2644 assert!(get_weak_count(_weak_count) > 1);
2645 }
2646
2647 item_ptr = val_dummy;
2648
2649 //If the strong_count of the previous went to 0, do a janitor-cycle
2650 if strong_count == 1 {
2651 // Lock free
2652 // Only loops if another thread has set the 'disturbed' flag, meaning it has offloaded
2653 // work on us, and it has made progress, which means there is system wide progress.
2654 loop {
2655 let (need_rerun, _strong_refs) =
2656 do_janitor_task(from_dummy::<T, M>(item_ptr), drop_job_queue);
2657 if need_rerun {
2658 item_ptr = do_advance_strong::<T, M>(item_ptr, drop_job_queue);
2659 debug_println!("Janitor {:?} was disturbed. Need rerun", item_ptr);
2660 continue;
2661 }
2662 break;
2663 }
2664 }
2665 return item_ptr;
2666 }
2667}
2668
2669// Drops payload, unless either:
2670// * We're the rightmost node
2671// * The payload is already dropped
2672// the item_ptr must be valid.
2673fn do_drop_payload_if_possible<T: ?Sized, M: IMetadata>(
2674 item_ptr: *const ItemHolder<T, M>,
2675 override_rightmost_check: bool,
2676 drop_queue: &mut impl IDropHandler<T, M>,
2677) {
2678 // SAFETY:
2679 // caller must supply valid pointer.
2680 let item = unsafe { &*(item_ptr) };
2681 // Lock free. This only loops if another thread has changed next, which implies
2682 // progress.
2683 loop {
2684 let next_ptr = item.next.load(Ordering::Relaxed);
2685 let decoration = get_decoration(next_ptr);
2686 if decoration.is_dropped() || (undecorate(next_ptr).is_null() && !override_rightmost_check)
2687 {
2688 debug_println!(
2689 "Not dropping {:x?} payload, because it's rightmost",
2690 item_ptr
2691 );
2692 return;
2693 }
2694 debug_println!(
2695 "Drop if possible, {:x?}, cur dec: {:?}, dropped dec {:?}",
2696 item_ptr,
2697 decoration,
2698 decoration.dropped()
2699 );
2700 match item.next.compare_exchange(
2701 next_ptr,
2702 decorate(undecorate(next_ptr), decoration.dropped()) as *mut _,
2703 Ordering::SeqCst, //atomic mark dropped
2704 Ordering::SeqCst,
2705 ) {
2706 Ok(_) => {
2707 break;
2708 }
2709 Err(err_ptr) => {
2710 if !get_decoration(err_ptr).is_dropped() {
2711 debug_println!("Raced, new attempt to drop {:x?} payload", item_ptr);
2712 continue;
2713 }
2714 debug_println!("Raced, {:x?} now already dropped", item_ptr);
2715 return;
2716 }
2717 }
2718 }
2719 debug_println!("Dropping payload {:x?} (dec: ?)", item_ptr,);
2720 // SAFETY: item_ptr must be valid, guaranteed by caller.
2721 drop_queue.do_drop(item_ptr as *mut _);
2722}
2723
2724// Caller must guarantee poiner is uniquely owned
2725fn raw_do_unconditional_drop_payload_if_not_dropped<T: ?Sized, M: IMetadata>(
2726 item_ptr: *mut ItemHolder<T, M>,
2727 drop_job_queue: &mut impl IDropHandler<T, M>,
2728) {
2729 debug_println!("Unconditional drop of payload {:x?}", item_ptr);
2730 // SAFETY: Caller must guarantee pointer is uniquely owned
2731 let item = unsafe { &*(item_ptr) };
2732 let next_ptr = item.next.load(Ordering::SeqCst); //atomic drop check dropped
2733 let decoration = get_decoration(next_ptr);
2734 if !decoration.is_dropped() {
2735 debug_println!(
2736 "Actual drop of payload {:x?}, it wasn't already dropped",
2737 item_ptr
2738 );
2739 drop_job_queue.do_drop(item_ptr);
2740 }
2741}
2742fn do_dealloc<T: ?Sized, M: IMetadata>(item_ptr: *const ItemHolder<T, M>) {
2743 // SAFETY:
2744 // item_ptr is still a valid, exclusively owned pointer
2745 let layout = get_holder_layout(&unsafe { &*item_ptr }.payload);
2746 // SAFETY:
2747 // fullptr is a valid uniquely owned pointer that we must deallocate
2748 debug_println!("Calling dealloc {:x?}", item_ptr);
2749
2750 #[cfg(feature = "validate")]
2751 {
2752 // SAFETY:
2753 // item_ptr is valid
2754 let item_ref = unsafe { &*(item_ptr as *mut ItemHolder<T, M>) };
2755 item_ref.magic1.store(0xdeaddead, Ordering::Relaxed);
2756 item_ref.magic2.store(0xdeaddea2, Ordering::Relaxed);
2757 }
2758
2759 // SAFETY:
2760 // item_ptr is still a valid, exclusively owned pointer
2761 unsafe { alloc::alloc::dealloc(item_ptr as *mut _, layout) }
2762}
2763
2764fn do_drop_strong<T: ?Sized, M: IMetadata>(
2765 full_item_ptr: *const ItemHolder<T, M>,
2766 drop_job_queue: &mut impl IDropHandler<T, M>,
2767) {
2768 debug_println!("drop strong of {:x?}", full_item_ptr,);
2769 let mut item_ptr = to_dummy(full_item_ptr);
2770
2771 item_ptr = do_advance_strong::<T, M>(item_ptr, drop_job_queue);
2772 // SAFETY: do_advance_strong always returns a valid pointer
2773 let item: &ItemHolder<T, M> = unsafe { &*from_dummy(item_ptr) };
2774
2775 let prior_strong = item.strong_count.fetch_sub(1, Ordering::SeqCst); //atomic drop dec strong_count
2776 debug_println!(
2777 "drop strong of {:x?}, prev count: {}, now {}",
2778 item_ptr,
2779 prior_strong,
2780 prior_strong - 1,
2781 );
2782 if prior_strong == 1 {
2783 // This is the only case where we actually might need to drop the rightmost node's payload
2784 // (if all nodes have only weak references)
2785 do_drop_weak::<T, M>(item_ptr, drop_job_queue);
2786 }
2787}
2788
2789impl<T: ?Sized> Drop for ArcShift<T> {
2790 fn drop(&mut self) {
2791 verify_item(self.item.as_ptr());
2792 debug_println!("executing ArcShift::drop({:x?})", self.item.as_ptr());
2793 with_holder!(self.item, T, item, {
2794 let mut jobq = DropHandler::default();
2795 do_drop_strong(item, &mut jobq);
2796 jobq.resume_any_panics();
2797 })
2798 }
2799}
2800impl<T: ?Sized> Drop for ArcShiftWeak<T> {
2801 fn drop(&mut self) {
2802 verify_item(self.item.as_ptr());
2803
2804 fn drop_weak_helper<T: ?Sized, M: IMetadata>(item: *const ItemHolder<T, M>) {
2805 let mut jobq = DropHandler::default();
2806 do_drop_weak::<T, M>(to_dummy(item), &mut jobq);
2807 jobq.resume_any_panics();
2808 }
2809
2810 with_holder!(self.item, T, item, drop_weak_helper(item))
2811 }
2812}
2813
2814/// This is a marker for methods that have been removed in the
2815/// most recent version of ArcShift.
2816///
2817/// Any methods that take this type as parameters, is completely
2818/// impossible to call. This is by design. The methods have been
2819/// kept, to hopefully make migration easier.
2820pub struct NoLongerAvailableMarker {
2821 _priv: (),
2822}
2823
2824impl<T: ?Sized> ArcShiftWeak<T> {
2825 /// Upgrade this ArcShiftWeak instance to a full ArcShift.
2826 /// This is required to be able to access any stored value.
2827 /// If the ArcShift instance has no value (because the last ArcShift instance
2828 /// had been deallocated), this method returns None.
2829 pub fn upgrade(&self) -> Option<ArcShift<T>> {
2830 let t = with_holder!(self.item, T, item, {
2831 let mut drop_handler = DropHandler::default();
2832 let temp = do_upgrade_weak(item, &mut drop_handler);
2833 drop_handler.resume_any_panics();
2834 temp
2835 });
2836 Some(ArcShift {
2837 // SAFETY:
2838 // do_upgrade_weak returns a valid upgraded pointer
2839 item: unsafe { NonNull::new_unchecked(t? as *mut _) },
2840 pd: PhantomData,
2841 })
2842 }
2843}
2844impl<T> ArcShift<T> {
2845 /// Crate a new ArcShift instance with the given value.
2846 pub fn new(val: T) -> ArcShift<T> {
2847 let holder = make_sized_holder(val, null_mut());
2848 ArcShift {
2849 // SAFETY:
2850 // The newly created holder-pointer is valid and non-null
2851 item: unsafe { NonNull::new_unchecked(to_dummy(holder) as *mut _) },
2852 pd: PhantomData,
2853 }
2854 }
2855
2856 /// Drops this ArcShift instance. If this was the last instance of the entire chain,
2857 /// the payload value is returned.
2858 pub fn try_into_inner(self) -> Option<T> {
2859 let mut drop_handler =
2860 deferred_panics_helper::stealing_drop::StealingDropHandler::default();
2861 verify_item(self.item.as_ptr());
2862 debug_println!("executing ArcShift::into_inner({:x?})", self.item.as_ptr());
2863 do_drop_strong(
2864 from_dummy::<T, SizedMetadata>(self.item.as_ptr()),
2865 &mut drop_handler,
2866 );
2867 core::mem::forget(self);
2868 drop_handler.take_stolen()
2869 }
2870
2871 /// Update the value of this ArcShift instance (and all derived from it) to the given value.
2872 ///
2873 /// Note, other ArcShift instances will not see the new value until they call the
2874 /// [`ArcShift::get`] or [`ArcShift::reload`] methods.
2875 pub fn update(&mut self, val: T) {
2876 let holder = make_sized_holder(val, self.item.as_ptr() as *const ItemHolderDummy<T>);
2877
2878 with_holder!(self.item, T, item, {
2879 let mut jobq = DropHandler::default();
2880 // SAFETY:
2881 // * do_update returns non-null values.
2882 // * 'holder' is in fact a thin pointer, just what we need for T, since T is Sized.
2883 let new_item = unsafe {
2884 NonNull::new_unchecked(
2885 do_update(item, |_| Some(holder as *const _), &mut jobq) as *mut _
2886 )
2887 };
2888 jobq.resume_any_panics();
2889 self.item = new_item;
2890 });
2891 }
2892 /// Atomically update the value.
2893 ///
2894 /// The supplied closure 'update' is called with the previous value as an argument.
2895 /// It then constructs a new, updated value. This value replaces the previous value
2896 /// of the ArcShift instance. Note, if other threads are updating the arcshift chain
2897 /// concurrently, it may take multiple retries for the update to succeed. The closure
2898 /// will be called once for every attempt, until an attempt is successful. This means that
2899 /// the final updated value will have always been calculated from the previous set value.
2900 pub fn rcu(&mut self, mut update: impl FnMut(&T) -> T) -> &T {
2901 self.rcu_maybe(|x| Some(update(x)));
2902 (*self).shared_non_reloading_get()
2903 }
2904
2905 /// Atomically update the value.
2906 ///
2907 /// The supplied closure 'update' is called with the previous value as an argument.
2908 /// It then constructs a new, updated value. This value replaces the previous value
2909 /// of the ArcShift instance. Note, if other threads are updating the arcshift chain
2910 /// concurrently, it may take multiple retries for the update to succeed. The closure
2911 /// will be called once for every attempt, until an attempt is successful. This means that
2912 /// the final updated value will have always been calculated from the previous set value.
2913 ///
2914 /// If the closure returns None, the update is cancelled, and this method returns false.
2915 pub fn rcu_maybe(&mut self, mut update: impl FnMut(&T) -> Option<T>) -> bool {
2916 let mut holder: Option<*const ItemHolderDummy<T>> = None;
2917
2918 let mut cancelled = false;
2919
2920 let mut jobq = DropHandler::default();
2921 // SAFETY:
2922 // do_update returns a valid non-null pointer
2923 let new_item_ptr = do_update(
2924 from_dummy::<T, SizedMetadata>(self.item.as_ptr()),
2925 |prev: &ItemHolder<T, SizedMetadata>| -> Option<*const ItemHolderDummy<T>> {
2926 // SAFETY:
2927 // we hold a strong ref on prev, so it cannot be dropped. Shared
2928 // access to 'payload' is thus sound.
2929 let prev_payload: &ManuallyDrop<T> = unsafe { &*prev.payload.get() };
2930 let prev_payload: &T = prev_payload;
2931 let new_candidate: Option<T> = update(prev_payload);
2932 let Some(new_candidate) = new_candidate else {
2933 // Ok, the closure decided to NOT do an update after all
2934 // We must now free any previous allocated candidate (both payload and holder).
2935 if let Some(holder) = holder {
2936 let mut jobq = DropHandler::default();
2937 let full_ptr = from_dummy::<T, SizedMetadata>(holder);
2938 raw_do_unconditional_drop_payload_if_not_dropped(
2939 full_ptr as *mut ItemHolder<T, SizedMetadata>,
2940 &mut jobq,
2941 );
2942 do_dealloc(full_ptr);
2943 jobq.resume_any_panics();
2944 }
2945 cancelled = true;
2946 return None;
2947 };
2948 if let Some(holder) = holder {
2949 // SAFETY:
2950 // If we get called when we already have a holder, then this is a retry
2951 // and that holder was never visible to any other thread. We still have
2952 // unique access to it.
2953 let holder_mut: &mut ItemHolder<T, SizedMetadata> =
2954 unsafe { &mut *(from_dummy::<T, SizedMetadata>(holder) as *mut _) };
2955 // SAFETY:
2956 // See above
2957 unsafe {
2958 ManuallyDrop::drop(&mut *holder_mut.payload.get());
2959 }
2960 holder_mut.payload = UnsafeCell::new(ManuallyDrop::new(new_candidate));
2961 Some(holder)
2962 } else {
2963 let new_holder = to_dummy(make_sized_holder(
2964 new_candidate,
2965 self.item.as_ptr() as *const ItemHolderDummy<T>,
2966 ));
2967 holder = Some(new_holder);
2968 Some(new_holder)
2969 }
2970 },
2971 &mut jobq,
2972 );
2973 // SAFETY:
2974 // new_item_ptr returned by do_update is never null
2975 let new_unique_ptr = unsafe { NonNull::new_unchecked(new_item_ptr as *mut _) };
2976 jobq.resume_any_panics();
2977 self.item = new_unique_ptr;
2978 !cancelled
2979 }
2980}
2981
2982#[doc(hidden)]
2983#[deprecated = "ArcShiftLight has been removed. Please consider using ArcShiftWeak. It is similar, though not a direct replacement."]
2984pub struct ArcShiftLight {}
2985
2986/// Return value of [`ArcShift::shared_get`]
2987pub enum SharedGetGuard<'a, T: ?Sized> {
2988 /// The pointer already referenced the most recent value
2989 Raw(&'a T),
2990 /// We had to advance, and to do this we unfortunately had to clone
2991 LightCloned {
2992 #[doc(hidden)]
2993 next: *const ItemHolderDummy<T>,
2994 },
2995 /// Most expensive case, only used in some rare race-cases
2996 FullClone {
2997 #[doc(hidden)]
2998 cloned: ArcShift<T>,
2999 },
3000}
3001
3002impl<T: ?Sized> Drop for SharedGetGuard<'_, T> {
3003 #[inline(always)]
3004 fn drop(&mut self) {
3005 match self {
3006 SharedGetGuard::Raw(_) => {}
3007 SharedGetGuard::FullClone { .. } => {}
3008 SharedGetGuard::LightCloned { next } => {
3009 if is_sized::<T>() {
3010 let item_ptr = from_dummy::<_, SizedMetadata>(*next);
3011 // SAFETY:
3012 // The 'next' pointer of a SharedGetGuard::Cloned is always valid
3013 let item = unsafe { &*item_ptr };
3014 let mut dropq = DropHandler::default();
3015 do_drop_strong(item, &mut dropq);
3016 dropq.resume_any_panics();
3017 } else {
3018 let item_ptr = from_dummy::<_, UnsizedMetadata<T>>(*next);
3019 // SAFETY:
3020 // The 'next' pointer of a SharedGetGuard::Cloned is always valid
3021 let item = unsafe { &*item_ptr };
3022 let mut dropq = DropHandler::default();
3023 do_drop_strong(item, &mut dropq);
3024 dropq.resume_any_panics();
3025 }
3026 }
3027 }
3028 }
3029}
3030
3031impl<T: ?Sized> core::ops::Deref for SharedGetGuard<'_, T> {
3032 type Target = T;
3033
3034 #[inline(always)]
3035 fn deref(&self) -> &Self::Target {
3036 match self {
3037 SharedGetGuard::Raw(r) => r,
3038 SharedGetGuard::LightCloned { next } => {
3039 if is_sized::<T>() {
3040 // SAFETY:
3041 // The 'next' pointer of a SharedGetGuard::Cloned is always valid
3042 unsafe { (*from_dummy::<_, SizedMetadata>(*next)).payload() }
3043 } else {
3044 // SAFETY:
3045 // The 'next' pointer of a SharedGetGuard::Cloned is always valid
3046 unsafe { (*from_dummy::<_, UnsizedMetadata<T>>(*next)).payload() }
3047 }
3048 }
3049 SharedGetGuard::FullClone { cloned } => cloned.shared_non_reloading_get(),
3050 }
3051 }
3052}
3053
3054fn slow_shared_get<T: ?Sized, M: IMetadata>(
3055 item: &ItemHolder<T, M>,
3056) -> Option<SharedGetGuard<'_, T>> {
3057 debug_println!("slow_shared_get: {:?} (advancing count)", item as *const _);
3058 item.advance_count.fetch_add(1, Ordering::SeqCst);
3059
3060 #[cfg(loom)]
3061 atomic::fence(Ordering::SeqCst);
3062
3063 debug_println!("advanced count");
3064 let next = item.next.load(Ordering::SeqCst);
3065 assert!(!undecorate(next).is_null());
3066 debug_println!("slow_shared_get: {:?}, next = {:?}", item as *const _, next);
3067
3068 let next_val = from_dummy::<_, SizedMetadata>(undecorate(next));
3069 // SAFETY:
3070 // The 'item.next' pointer is not null as per method precondition, and since we have
3071 // incremented advance_count, 'item.next' is a valid pointer.
3072 let sc = unsafe { (*next_val).strong_count.load(Ordering::Relaxed) };
3073 if sc == 0 {
3074 debug_println!("slow_shared sc == 0");
3075 item.advance_count.fetch_sub(1, Ordering::SeqCst);
3076 return None;
3077 }
3078 debug_println!("slow_shared sc #1 for {:?}", next_val);
3079 // SAFETY:
3080 // The 'item.next' pointer is not null as per method precondition, and since we have
3081 // incremented advance_count, 'item.next' is a valid pointer.
3082 let exc = unsafe {
3083 (*next_val)
3084 .strong_count
3085 .compare_exchange(sc, sc + 1, Ordering::SeqCst, Ordering::SeqCst)
3086 };
3087 debug_println!("slow_shared sc #1.5");
3088 if exc.is_err() {
3089 debug_println!("slow_shared sc race");
3090 debug_println!("slow_shared_get: {:?}, sc == 0", item as *const _);
3091 item.advance_count.fetch_sub(1, Ordering::SeqCst);
3092 return None;
3093 }
3094 debug_println!("slow_shared sc #2");
3095
3096 // SAFETY:
3097 // The 'item.next' pointer is not null as per method precondition, and since we have
3098 // incremented advance_count, 'item.next' is a valid pointer.
3099 let next_next = unsafe { (*next_val).next.load(Ordering::SeqCst) };
3100 if !undecorate(next_next).is_null() {
3101 debug_println!("slow_shared_get: {:?}, was dropped", item as *const _);
3102 let mut dropq = DropHandler::default();
3103 do_drop_strong(next_val, &mut dropq);
3104 item.advance_count.fetch_sub(1, Ordering::SeqCst);
3105 dropq.resume_any_panics();
3106 return None;
3107 }
3108 debug_println!("slow_shared sc #3");
3109 item.advance_count.fetch_sub(1, Ordering::SeqCst);
3110
3111 debug_println!(
3112 "slow_shared_get: {:?}, advanced to {:?}",
3113 item as *const _,
3114 next
3115 );
3116
3117 Some(SharedGetGuard::LightCloned {
3118 next: undecorate(next),
3119 })
3120}
3121
3122impl<T: ?Sized> ArcShift<T> {
3123 #[doc(hidden)]
3124 #[cfg_attr(test, mutants::skip)]
3125 #[deprecated = "rcu_project was completely removed in ArcShift 0.2. Please use method 'rcu' instead, and just manually access the desired field."]
3126 pub fn rcu_project(&mut self, _marker: NoLongerAvailableMarker) {
3127 unreachable!("method cannot be called")
3128 }
3129 #[doc(hidden)]
3130 #[cfg_attr(test, mutants::skip)]
3131 #[deprecated = "rcu_maybe2 was completely removed in ArcShift 0.2. Please use 'rcu_maybe' instead. It has the same features."]
3132 pub fn rcu_maybe2(&mut self, _marker: NoLongerAvailableMarker) {
3133 unreachable!("method cannot be called")
3134 }
3135
3136 #[doc(hidden)]
3137 #[cfg_attr(test, mutants::skip)]
3138 #[deprecated = "update_shared_box was completely removed in ArcShift 0.2. Shared references can no longer be updated. Please file an issue if this is a blocker!"]
3139 pub fn update_shared_box(&mut self, _marker: NoLongerAvailableMarker) {
3140 unreachable!("method cannot be called")
3141 }
3142 #[doc(hidden)]
3143 #[cfg_attr(test, mutants::skip)]
3144 #[deprecated = "update_shared was completely removed in ArcShift 0.2. Shared references can no longer be updated. Please file an issue if this is a blocker!"]
3145 pub fn update_shared(&mut self, _marker: NoLongerAvailableMarker) {
3146 unreachable!("method cannot be called")
3147 }
3148 #[doc(hidden)]
3149 #[cfg_attr(test, mutants::skip)]
3150 #[deprecated = "make_light was completely removed in ArcShift 0.2. Please use ArcShift::downgrade(&item) instead."]
3151 pub fn make_light(&mut self, _marker: NoLongerAvailableMarker) {
3152 unreachable!("method cannot be called")
3153 }
3154
3155 /// Basically the same as doing [`ArcShift::new`], but avoids copying the contents of 'input'
3156 /// to the stack, even as a temporary variable. This can be useful, if the type is too large
3157 /// to fit on the stack.
3158 pub fn from_box(input: Box<T>) -> ArcShift<T> {
3159 let holder = make_sized_or_unsized_holder_from_box(input, null_mut());
3160 ArcShift {
3161 // SAFETY:
3162 // from_box_impl never creates a null-pointer from a Box.
3163 item: unsafe { NonNull::new_unchecked(holder as *mut _) },
3164 pd: PhantomData,
3165 }
3166 }
3167
3168 /// Returns a mutable reference to the inner value.
3169 ///
3170 /// This method returns None if there are any other ArcShift or ArcShiftWeak instances
3171 /// pointing to the same object chain.
3172 pub fn try_get_mut(&mut self) -> Option<&mut T> {
3173 self.reload();
3174 with_holder!(self.item, T, item_ptr, {
3175 // SAFETY:
3176 // self.item is always a valid pointer for shared access
3177 let item = unsafe { &*item_ptr };
3178 let weak_count = item.weak_count.load(Ordering::SeqCst); //atomic try_get_mut check weak_count
3179 if get_weak_count(weak_count) == 1
3180 && !get_weak_prev(weak_count)
3181 // There can't be a 'next', since if so we wouldn't have a weak count of 1
3182 && item.strong_count.load(Ordering::SeqCst) == 1
3183 //atomic try_get_mut check strong count
3184 {
3185 // Yes, we need to check this again, see comment further down.
3186 let weak_count = item.weak_count.load(Ordering::SeqCst); //atomic try_get_mut double check weak_count
3187 if get_weak_count(weak_count) == 1 {
3188 // SAFETY:
3189 // The above constraints actually guarantee there can be no concurrent access.
3190 //
3191 // If we can prove that no such thread existed at time of 'weak_count.load' above,
3192 // no such thread can later appear, since we hold '&mut self' to the only
3193 // ArcShift referencing the chain. I.e, if we can prove this, we're done.
3194 // No other weak-ref can exist, since we've checked that weak count is 1.
3195 //
3196 // Since the weak ref flags say there's no next and no prev, the only possible other
3197 // ref that could exist at 'weak_count.load'-time would be a strong ref.
3198 //
3199 // We therefore check the strong count. If it is 1, there's only three possible cases:
3200 // 1: We truly are the only remaining ref
3201 // 2: There was one, and it has since gone away
3202 // 3: There was one, and it has updated the chain and added a new value.
3203 // 4: There was one, and it has downgraded to a weak ref.
3204 //
3205 // Cases 1 and 2 are benign. We guard against case 3 by loading 'weak_count'
3206 // again. If it still shows there's no 'next', then case 3 is off the table.
3207 // If the count is also still 1, option 4 is off the table. Meaning, we truly
3208 // are alone.
3209 let temp: &mut ManuallyDrop<T> = unsafe { &mut *item.payload.get() };
3210 Some(&mut **temp)
3211 } else {
3212 None
3213 }
3214 } else {
3215 None
3216 }
3217 })
3218 }
3219
3220 /// See [`ArcShift::update`] .
3221 ///
3222 /// This method allows converting from [`Box<T>`]. This can be useful since it means it's
3223 /// possible to use unsized types, such as [`str`] or [`[u8]`].
3224 pub fn update_box(&mut self, new_payload: Box<T>) {
3225 let holder = make_sized_or_unsized_holder_from_box(new_payload, self.item.as_ptr());
3226
3227 with_holder!(self.item, T, item, {
3228 let mut jobq = DropHandler::default();
3229 // SAFETY:
3230 // do_update returns a valid non-null pointer
3231 let new_item = unsafe {
3232 NonNull::new_unchecked(do_update(item, |_| Some(holder), &mut jobq) as *mut _)
3233 };
3234 jobq.resume_any_panics();
3235 self.item = new_item;
3236 });
3237 }
3238
3239 /// Reload this ArcShift instance, and return the latest value.
3240 #[inline]
3241 pub fn get(&mut self) -> &T {
3242 if is_sized::<T>() {
3243 let ptr = from_dummy::<T, SizedMetadata>(self.item.as_ptr());
3244 // SAFETY:
3245 // self.item is always a valid pointer
3246 let item = unsafe { &*ptr };
3247 let next = item.next.load(Ordering::Relaxed);
3248 if !undecorate(next).is_null() {
3249 return Self::advance_strong_helper2::<SizedMetadata>(&mut self.item, true);
3250 }
3251 // SAFETY:
3252 // self.item is always a valid pointer
3253 unsafe { &*(item.payload.get() as *const T) }
3254 } else {
3255 let ptr = from_dummy::<T, UnsizedMetadata<T>>(self.item.as_ptr());
3256 // SAFETY:
3257 // self.item is always a valid pointer
3258 let item = unsafe { &*ptr };
3259 let next = item.next.load(Ordering::Relaxed);
3260 if !undecorate(next).is_null() {
3261 return Self::advance_strong_helper2::<UnsizedMetadata<T>>(&mut self.item, true);
3262 }
3263 // SAFETY:
3264 // self.item is always a valid pointer
3265 unsafe { item.payload() }
3266 }
3267 }
3268
3269 /// Get the current value of this ArcShift instance.
3270 ///
3271 /// WARNING!
3272 /// Note, this method is fast if the ArcShift is already up to date. If a more recent
3273 /// value exists, this more recent value will be returned, but at a severe
3274 /// performance penalty (i.e, 'shared_get' on stale values is significantly slower than
3275 /// RwLock). This method is still lock-free. The performance penalty will be even more
3276 /// severe if there is also contention.
3277 ///
3278 /// This method has the advantage that it doesn't require `&mut self` access,
3279 /// but is otherwise inferior to [`ArcShift::get`].
3280 #[inline(always)]
3281 pub fn shared_get(&self) -> SharedGetGuard<'_, T> {
3282 if is_sized::<T>() {
3283 let ptr = from_dummy::<T, SizedMetadata>(self.item.as_ptr());
3284 // SAFETY:
3285 // self.item is always a valid pointer
3286 let item = unsafe { &*ptr };
3287 let next = item.next.load(Ordering::Relaxed);
3288 if undecorate(next).is_null() {
3289 // SAFETY:
3290 // self.item is always a valid pointer
3291 return SharedGetGuard::Raw(unsafe { item.payload() });
3292 }
3293
3294 slow_shared_get(item).unwrap_or_else(|| SharedGetGuard::FullClone {
3295 cloned: self.clone(),
3296 })
3297 } else {
3298 let ptr = from_dummy::<T, UnsizedMetadata<T>>(self.item.as_ptr());
3299 // SAFETY:
3300 // self.item is always a valid pointer
3301 let item = unsafe { &*ptr };
3302 let next = item.next.load(Ordering::Relaxed);
3303 if undecorate(next).is_null() {
3304 // SAFETY:
3305 // self.item is always a valid pointer
3306 return SharedGetGuard::Raw(unsafe { item.payload() });
3307 }
3308
3309 slow_shared_get(item).unwrap_or_else(|| SharedGetGuard::FullClone {
3310 cloned: self.clone(),
3311 })
3312 }
3313 }
3314
3315 /// Get the current value of this ArcShift instance.
3316 ///
3317 /// WARNING! This does not reload the pointer. Use [`ArcShift::reload()`] to reload
3318 /// the value, or [`ArcShift::get()`] to always get the newest value.
3319 ///
3320 /// This method has the advantage that it doesn't require `&mut self` access, and is very
3321 /// fast.
3322 #[inline(always)]
3323 pub fn shared_non_reloading_get(&self) -> &T {
3324 with_holder!(self.item, T, item, {
3325 // SAFETY:
3326 // ArcShift has a strong ref, so payload must not have been dropped.
3327 unsafe { (*item).payload() }
3328 })
3329 }
3330
3331 /// Create an [`ArcShiftWeak<T>`] instance.
3332 ///
3333 /// Weak pointers do not prevent the inner from being dropped.
3334 ///
3335 /// The returned pointer can later be upgraded back to a regular ArcShift instance, if
3336 /// there is at least one remaining strong reference (i.e, [`ArcShift<T>`] instance).
3337 #[must_use = "this returns a new `ArcShiftWeak` pointer, \
3338 without modifying the original `ArcShift`"]
3339 pub fn downgrade(this: &ArcShift<T>) -> ArcShiftWeak<T> {
3340 let t = with_holder!(this.item, T, item, do_clone_weak(item));
3341 ArcShiftWeak {
3342 // SAFETY:
3343 // do_clone_weak returns a valid, non-null pointer
3344 item: unsafe { NonNull::new_unchecked(t as *mut _) },
3345 }
3346 }
3347
3348 /// Note, if there is at least one strong count, these collectively hold a weak count.
3349 #[allow(unused)]
3350 pub(crate) fn weak_count(&self) -> usize {
3351 with_holder!(self.item, T, item, {
3352 // SAFETY:
3353 // self.item is a valid pointer
3354 unsafe { get_weak_count((*item).weak_count.load(Ordering::SeqCst)) }
3355 })
3356 }
3357 #[allow(unused)]
3358 pub(crate) fn strong_count(&self) -> usize {
3359 with_holder!(self.item, T, item, {
3360 // SAFETY:
3361 // self.item is a valid pointer
3362 unsafe { (*item).strong_count.load(Ordering::SeqCst) }
3363 })
3364 }
3365
3366 fn advance_strong_helper2<M: IMetadata>(
3367 old_item_ptr: &mut NonNull<ItemHolderDummy<T>>,
3368 gc: bool,
3369 ) -> &T {
3370 let mut jobq = DropHandler::default();
3371 let mut item_ptr = old_item_ptr.as_ptr() as *const _;
3372 // Lock free. Only loops if disturb-flag was set, meaning other thread
3373 // has offloaded work on us, and it has made progress.
3374 loop {
3375 item_ptr = do_advance_strong::<T, M>(item_ptr, &mut jobq);
3376
3377 if gc {
3378 let (need_rerun, _strong_refs) =
3379 do_janitor_task(from_dummy::<T, M>(item_ptr), &mut jobq);
3380 if need_rerun {
3381 debug_println!("Janitor {:?} was disturbed. Need rerun", item_ptr);
3382 continue;
3383 }
3384 }
3385 jobq.resume_any_panics();
3386 // SAFETY:
3387 // pointer returned by do_advance_strong is always a valid pointer
3388 *old_item_ptr = unsafe { NonNull::new_unchecked(item_ptr as *mut _) };
3389 // SAFETY:
3390 // pointer returned by do_advance_strong is always a valid pointer
3391 let item = unsafe { &*from_dummy::<T, M>(item_ptr) };
3392 // SAFETY:
3393 // pointer returned by do_advance_strong is always a valid pointer
3394 return unsafe { item.payload() };
3395 }
3396 }
3397
3398 /// Reload this instance, making it point to the most recent value.
3399 #[inline(always)]
3400 pub fn reload(&mut self) {
3401 if is_sized::<T>() {
3402 let ptr = from_dummy::<T, SizedMetadata>(self.item.as_ptr());
3403 // SAFETY:
3404 // self.item is always a valid pointer
3405 let item = unsafe { &*ptr };
3406 let next = item.next.load(Ordering::Relaxed);
3407 if next.is_null() {
3408 return;
3409 }
3410 Self::advance_strong_helper2::<SizedMetadata>(&mut self.item, true);
3411 } else {
3412 let ptr = from_dummy::<T, UnsizedMetadata<T>>(self.item.as_ptr());
3413 // SAFETY:
3414 // self.item is always a valid pointer
3415 let item = unsafe { &*ptr };
3416 let next = item.next.load(Ordering::Relaxed);
3417 if next.is_null() {
3418 return;
3419 }
3420 Self::advance_strong_helper2::<UnsizedMetadata<T>>(&mut self.item, true);
3421 }
3422 }
3423
3424 /// Check all links and refcounts.
3425 ///
3426 /// # SAFETY
3427 /// This method requires that no other threads access the chain while it is running.
3428 /// It is up to the caller to actually ensure this.
3429 #[allow(unused)]
3430 #[cfg(test)]
3431 #[cfg(feature = "std")]
3432 pub(crate) unsafe fn debug_validate(
3433 strong_handles: &[&Self],
3434 weak_handles: &[&ArcShiftWeak<T>],
3435 ) {
3436 let first = if !strong_handles.is_empty() {
3437 &strong_handles[0].item
3438 } else {
3439 &weak_handles[0].item
3440 };
3441 with_holder!(first, T, item, {
3442 Self::debug_validate_impl(strong_handles, weak_handles, item);
3443 });
3444 }
3445 #[allow(unused)]
3446 #[cfg(test)]
3447 #[cfg(feature = "std")]
3448 fn debug_validate_impl<M: IMetadata>(
3449 strong_handles: &[&ArcShift<T>],
3450 weak_handles: &[&ArcShiftWeak<T>],
3451 prototype: *const ItemHolder<T, M>,
3452 ) {
3453 let mut last = prototype;
3454
3455 debug_println!("Start traverse right at {:?}", last);
3456 // Lock free - this is a debug method not used in production.
3457 loop {
3458 debug_println!("loop traverse right at {:?}", last);
3459 // SAFETY:
3460 // Caller guarantees that initial 'last' prototype is a valid pointer.
3461 // Other pointers are found by walking right from that.
3462 // Caller guarantees there are no concurrent updates.
3463 let next = unsafe { undecorate((*last).next.load(Ordering::SeqCst)) };
3464 debug_println!("Traversed to {:?}", next);
3465 if next.is_null() {
3466 break;
3467 }
3468 last = from_dummy::<T, M>(next);
3469 }
3470
3471 let mut true_weak_refs =
3472 std::collections::HashMap::<*const ItemHolderDummy<T>, usize>::new();
3473 let mut true_strong_refs =
3474 std::collections::HashMap::<*const ItemHolderDummy<T>, usize>::new();
3475 // SAFETY:
3476 // the loop above has yielded a valid last ptr
3477 let all_nodes = unsafe { (*last).debug_all_to_left() };
3478
3479 let strong_refs_exist = all_nodes
3480 .iter()
3481 .any(|x| x.strong_count.load(Ordering::SeqCst) > 0);
3482
3483 if strong_refs_exist {
3484 debug_println!("Reading {:?}.next", to_dummy(last));
3485 // SAFETY:
3486 // the loop above has yielded a valid last ptr
3487 let last_next = unsafe { (*last).next.load(Ordering::SeqCst) };
3488 debug_println!("Reading {:?}.next gave {:?}", to_dummy(last), last_next);
3489 assert!(!get_decoration(last_next).is_dropped(), "the rightmost node must never be dropped (at least at rest, but regardless of refcount)");
3490 }
3491
3492 for _node in all_nodes.iter() {
3493 debug_println!("Node: {:?}", *_node as *const ItemHolder<T, _>);
3494 }
3495 for node in all_nodes.iter().copied() {
3496 let next = undecorate(node.next.load(Ordering::SeqCst));
3497 if !next.is_null() {
3498 // SAFETY:
3499 // next pointer is valid
3500 assert!(all_nodes.contains(unsafe { &&*from_dummy::<T, M>(next) }));
3501 }
3502 let prev: *const _ = node.prev.load(Ordering::SeqCst);
3503 if !prev.is_null() {
3504 *true_weak_refs.entry(prev).or_default() += 1;
3505 // SAFETY:
3506 // prev pointer is valid
3507 assert!(all_nodes.contains(unsafe { &&*from_dummy::<T, M>(prev) }));
3508 }
3509 let advance_count = node.advance_count.load(Ordering::SeqCst);
3510 assert_eq!(advance_count, 0, "advance_count must always be 0 at rest");
3511 }
3512
3513 for handle in weak_handles.iter() {
3514 // SAFETY:
3515 // All ArcShiftWeak instances always have valid, non-null pointers
3516 assert!(all_nodes.contains(unsafe { &&*from_dummy::<T, M>(handle.item.as_ptr()) }));
3517 let true_weak_count = true_weak_refs.entry(handle.item.as_ptr()).or_default();
3518 *true_weak_count += 1;
3519 }
3520 for handle in strong_handles.iter() {
3521 // SAFETY:
3522 // All ArcShift instances always have valid, non-null pointers
3523 assert!(all_nodes.contains(unsafe { &&*from_dummy::<T, M>(handle.item.as_ptr()) }));
3524 // SAFETY:
3525 // All ArcShift instances always have valid, non-null pointers
3526 let handle_next: *const _ = unsafe {
3527 (*from_dummy::<T, M>(handle.item.as_ptr()))
3528 .next
3529 .load(Ordering::SeqCst)
3530 };
3531 assert!(
3532 !get_decoration(handle_next).is_dropped(),
3533 "nodes referenced by strong handles must not be dropped, but {:?} was",
3534 handle.item.as_ptr()
3535 );
3536
3537 let true_strong_count = true_strong_refs.entry(handle.item.as_ptr()).or_default();
3538 if *true_strong_count == 0 {
3539 *true_weak_refs.entry(handle.item.as_ptr()).or_default() += 1;
3540 }
3541 *true_strong_count += 1;
3542 }
3543 for node in all_nodes.iter().copied() {
3544 let node_dummy = to_dummy(node);
3545 let strong_count = node.strong_count.load(Ordering::SeqCst);
3546 let weak_count = get_weak_count(node.weak_count.load(Ordering::SeqCst));
3547 let expected_strong_count = true_strong_refs.get(&node_dummy).unwrap_or(&0);
3548 let expected_weak_count = true_weak_refs.get(&node_dummy).unwrap_or(&0);
3549 assert_eq!(
3550 strong_count, *expected_strong_count,
3551 "strong count of {:x?} should be {}",
3552 node as *const _, *expected_strong_count
3553 );
3554 assert_eq!(
3555 weak_count, *expected_weak_count,
3556 "weak count of {:x?} should be {}",
3557 node as *const _, *expected_weak_count
3558 );
3559
3560 let next = node.next.load(Ordering::SeqCst);
3561 if strong_count > 0 {
3562 assert!(
3563 !get_decoration(next).is_dropped(),
3564 "as strong count is >0, node {:x?} should not be dropped",
3565 node as *const _
3566 );
3567 }
3568 }
3569 }
3570}
3571
3572#[cfg(test)]
3573#[cfg(not(any(loom, feature = "shuttle")))]
3574pub(crate) mod no_std_tests;
3575
3576// Module for tests
3577#[cfg(all(test, feature = "std"))]
3578pub(crate) mod tests;