pub unsafe auto trait Sync { }
Types for which it is safe to share references between threads.
This trait is automatically implemented when the compiler determines it’s appropriate.
The precise definition is: a type T is Sync if and only if &T is
Send. In other words, if there is no possibility of
undefined behavior (including data races) when passing
&T references between threads.
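For example, the following sketch (using std::thread::scope) shares a reference to an i32 between two threads; it compiles precisely because i32 is Sync, which makes &i32 Send:

    use std::thread;

    fn main() {
        let value = 42_i32;
        thread::scope(|s| {
            // Each closure captures `&value`; this is allowed only because i32: Sync.
            s.spawn(|| println!("thread A sees {value}"));
            s.spawn(|| println!("thread B sees {value}"));
        });
    }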
As one would expect, primitive types like u8 and f64
are all Sync, and so are simple aggregate types containing them,
like tuples, structs and enums. More examples of basic Sync
types include “immutable” types like &T, and those with simple
inherited mutability, such as Box<T>, Vec<T> and
most other collection types. (Generic parameters need to be Sync
for their container to be Sync.)
A somewhat surprising consequence of the definition is that &mut T
is Sync (if T is Sync) even though it seems like that might
provide unsynchronized mutation. The trick is that a mutable
reference behind a shared reference (that is, & &mut T)
becomes read-only, as if it were a & &T. Hence there is no risk
of a data race.
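A small sketch of this: a &mut Vec<i32> can itself be shared by reference with another thread, which can then only read through it:

    use std::thread;

    fn main() {
        let mut value = vec![1, 2, 3];
        let exclusive: &mut Vec<i32> = &mut value;
        // `& &mut Vec<i32>` is Send because `&mut Vec<i32>` is Sync (Vec<i32> is Sync).
        let shared: &&mut Vec<i32> = &exclusive;
        thread::scope(|s| {
            s.spawn(|| {
                // Only reads are possible through the doubly-indirected reference.
                println!("len = {}", shared.len());
            });
        });
    }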
A shorter overview of how Sync and Send relate to referencing:
&T is Send if and only if T is Sync
&mut T is Send if and only if T is Send
&T and &mut T are Sync if and only if T is Sync
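These rules can be spot-checked at compile time. In the sketch below, assert_send and assert_sync are ad-hoc helper functions (not items from std) whose trait bounds encode the rules above:

    fn assert_send<T: Send>() {}
    fn assert_sync<T: Sync>() {}

    fn main() {
        // &T is Send because String is Sync.
        assert_send::<&String>();
        // &mut T is Send because String is Send.
        assert_send::<&mut String>();
        // &T and &mut T are Sync because String is Sync.
        assert_sync::<&String>();
        assert_sync::<&mut String>();
    }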
Types that are not Sync are those that have “interior
mutability” in a non-thread-safe form, such as Cell
and RefCell. These types allow for mutation of
their contents even through an immutable, shared reference. For
example the set method on Cell<T> takes &self, so it requires
only a shared reference &Cell<T>. The method performs no
synchronization, thus Cell cannot be Sync.
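A brief illustration of the Cell case (single-threaded, since sharing the Cell across threads would be rejected by the compiler):

    use std::cell::Cell;

    // Cell::set mutates through a shared reference with no synchronization,
    // which is exactly why Cell<T> is not Sync.
    fn bump(counter: &Cell<u32>) {
        counter.set(counter.get() + 1);
    }

    fn main() {
        let counter = Cell::new(0);
        bump(&counter);
        bump(&counter);
        println!("count = {}", counter.get());
        // Handing `&counter` to another thread, e.g. via std::thread::scope,
        // would fail to compile because Cell<u32> is !Sync.
    }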
Another example of a non-Sync type is the reference-counting
pointer Rc. Given any reference &Rc<T>, you can clone
a new Rc<T>, modifying the reference counts in a non-atomic way.
For cases when one does need thread-safe interior mutability,
Rust provides atomic data types, as well as explicit locking via
sync::Mutex and sync::RwLock. These types
ensure that any mutation cannot cause data races, hence the types
are Sync. Likewise, sync::Arc provides a thread-safe
analogue of Rc.
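For instance, the sketch below shares an AtomicUsize and a Mutex<Vec<_>> between scoped threads by reference; with non-scoped threads one would wrap them in Arc instead:

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Mutex;
    use std::thread;

    fn main() {
        let hits = AtomicUsize::new(0);
        let log = Mutex::new(Vec::new());

        thread::scope(|s| {
            for id in 0..4 {
                let (hits, log) = (&hits, &log);
                s.spawn(move || {
                    hits.fetch_add(1, Ordering::Relaxed); // synchronized, lock-free
                    log.lock().unwrap().push(id);         // synchronized via the lock
                });
            }
        });

        println!("hits = {}", hits.load(Ordering::Relaxed));
        println!("log  = {:?}", log.into_inner().unwrap());
    }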
Any types with interior mutability must also use the
cell::UnsafeCell wrapper around the value(s) which
can be mutated through a shared reference. Failing to do this is
undefined behavior. For example, transmuting
from &T to &mut T is invalid.
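As a hypothetical sketch of what such a manual Sync implementation can look like (a toy spin lock, not a production-quality one): the protected value sits in an UnsafeCell, and the unsafe impl Sync is the programmer's claim that the atomic flag serializes all access.

    use std::cell::UnsafeCell;
    use std::sync::atomic::{AtomicBool, Ordering};

    pub struct SpinLock<T> {
        locked: AtomicBool,
        value: UnsafeCell<T>,
    }

    // SAFETY: access to `value` is serialized by `locked`, so handing out
    // `&mut T` to one thread at a time is sound as long as T: Send.
    unsafe impl<T: Send> Sync for SpinLock<T> {}

    impl<T> SpinLock<T> {
        pub fn new(value: T) -> Self {
            Self { locked: AtomicBool::new(false), value: UnsafeCell::new(value) }
        }

        pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
            // Spin until we flip the flag from false to true.
            while self.locked.swap(true, Ordering::Acquire) {
                std::hint::spin_loop();
            }
            // SAFETY: we hold the lock, so no other thread can observe `value`.
            let result = f(unsafe { &mut *self.value.get() });
            self.locked.store(false, Ordering::Release);
            result
        }
    }

    fn main() {
        let lock = SpinLock::new(0_u32);
        std::thread::scope(|s| {
            for _ in 0..4 {
                s.spawn(|| lock.with(|n| *n += 1));
            }
        });
        println!("{}", lock.with(|n| *n));
    }

Without the UnsafeCell, mutating the value through &self would be undefined behavior no matter how carefully the accesses were synchronized.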
See the Nomicon for more details about Sync.
Implementors
impl !Sync for Arguments<'_>
impl !Sync for LocalWaker
impl !Sync for Args
impl !Sync for ArgsOs
impl Sync for alloc::string::Drain<'_>
impl Sync for TypeId
impl Sync for Bytes<'_>
impl Sync for Location<'_>
impl Sync for AtomicBool
Available on target_has_atomic_load_store = "8" only.
impl Sync for AtomicI8
impl Sync for AtomicI16
impl Sync for AtomicI32
impl Sync for AtomicI64
impl Sync for AtomicIsize
impl Sync for AtomicU8
impl Sync for AtomicU16
impl Sync for AtomicU32
impl Sync for AtomicU64
impl Sync for AtomicUsize
impl Sync for Waker
impl Sync for ConstGeneric
impl Sync for ROnce
impl Sync for SyncUnsend
impl Sync for NulStr<'_>
impl Sync for abi_stable::std_types::string::iters::IntoIter
impl Sync for UTypeId
impl Sync for MonoTypeLayout
impl Sync for TypeLayout
impl Sync for MonoTLEnum
impl Sync for TLDiscriminants
impl Sync for CompTLFields
impl Sync for TLFunctions
impl Sync for BoxBytes
impl Sync for Select<'_>
impl Sync for Collector
impl Sync for Unparker
impl Sync for Scope<'_>
impl Sync for FT_MemoryRec
impl Sync for libloading::os::unix::Library
impl Sync for libloading::safe::Library
impl Sync for GuardNoSend
impl<'a> Sync for IoSlice<'a>
impl<'a> Sync for IoSliceMut<'a>
impl<'a> Sync for RStr<'a>
impl<'a> Sync for abi_stable::std_types::string::iters::Drain<'a>
impl<'a, 'b, K, Q, V, S, A> Sync for OccupiedEntryRef<'a, 'b, K, Q, V, S, A>
impl<'a, 'i, K, S, M> Sync for dashmap::iter_set::Iter<'i, K, S, M>
impl<'a, 'i, K, V, S, M> Sync for dashmap::iter::Iter<'i, K, V, S, M>
impl<'a, 'i, K, V, S, M> Sync for dashmap::iter::IterMut<'i, K, V, S, M>
impl<'a, A, D> Sync for AxisChunksIter<'a, A, D>
impl<'a, A, D> Sync for AxisChunksIterMut<'a, A, D>
impl<'a, A, D> Sync for AxisIter<'a, A, D>
impl<'a, A, D> Sync for AxisIterMut<'a, A, D>
impl<'a, A, D> Sync for ExactChunks<'a, A, D>
impl<'a, A, D> Sync for ExactChunksIter<'a, A, D>
impl<'a, A, D> Sync for ExactChunksIterMut<'a, A, D>
impl<'a, A, D> Sync for ExactChunksMut<'a, A, D>
impl<'a, A, D> Sync for IndexedIter<'a, A, D>
impl<'a, A, D> Sync for IndexedIterMut<'a, A, D>
impl<'a, A, D> Sync for rssn::prelude::ndarray::iter::Iter<'a, A, D>
impl<'a, A, D> Sync for rssn::prelude::ndarray::iter::IterMut<'a, A, D>
impl<'a, A, D> Sync for LanesIter<'a, A, D>
impl<'a, A, D> Sync for LanesIterMut<'a, A, D>
impl<'a, A, D> Sync for Windows<'a, A, D>
impl<'a, K, V> Sync for dashmap::mapref::entry::OccupiedEntry<'a, K, V>
impl<'a, K, V> Sync for VacantEntry<'a, K, V>
impl<'a, K, V> Sync for RefMulti<'a, K, V>
impl<'a, K, V> Sync for RefMutMulti<'a, K, V>
impl<'a, K, V> Sync for Ref<'a, K, V>
impl<'a, K, V> Sync for RefMut<'a, K, V>
impl<'a, P> Sync for PrefixRef<P> where
P: 'a,
&'a WithMetadata_<P, P>: Sync,
impl<'a, R, G, T> Sync for MappedReentrantMutexGuard<'a, R, G, T>
impl<'a, R, G, T> Sync for ReentrantMutexGuard<'a, R, G, T>
impl<'a, R, T> Sync for lock_api::mutex::MappedMutexGuard<'a, R, T>
impl<'a, R, T> Sync for lock_api::mutex::MutexGuard<'a, R, T>
impl<'a, R, T> Sync for lock_api::rwlock::MappedRwLockReadGuard<'a, R, T>
impl<'a, R, T> Sync for lock_api::rwlock::MappedRwLockWriteGuard<'a, R, T>
impl<'a, R, T> Sync for RwLockUpgradableReadGuard<'a, R, T>
impl<'a, T> Sync for MovePtr<'a, T> where
T: Sync,
impl<'a, T> Sync for RMut<'a, T>
impl<'a, T> Sync for RRef<'a, T>
impl<'a, T> Sync for StaticRef<T>
impl<'a, T> Sync for RSliceMut<'a, T>
impl<'a, T> Sync for RSlice<'a, T>
impl<'a, T> Sync for OnceRef<'a, T> where
T: Sync,
impl<'a, T> Sync for smallvec::Drain<'a, T>
impl<'a, T, R, C, RStride, CStride> Sync for rssn::prelude::nalgebra::ViewStorage<'a, T, R, C, RStride, CStride>
impl<'a, T, R, C, RStride, CStride> Sync for rssn::prelude::nalgebra::ViewStorageMut<'a, T, R, C, RStride, CStride>
impl<'a, T, R, C, RStride, CStride> Sync for nalgebra::base::matrix_view::ViewStorage<'a, T, R, C, RStride, CStride>
impl<'a, T, R, C, RStride, CStride> Sync for nalgebra::base::matrix_view::ViewStorageMut<'a, T, R, C, RStride, CStride>
impl<'borr, P, I, EV> Sync for DynTrait<'borr, P, I, EV>
impl<'lib, T> Sync for libloading::safe::Symbol<'lib, T> where
T: Sync,
impl<'lt, P, I, V> Sync for RObject<'lt, P, I, V>
impl<'lt, _ErasedPtr> Sync for StablePlugin_TO<'lt, _ErasedPtr> where
_ErasedPtr: __GetPointerKind,
impl<A> Sync for OwnedRepr<A> where
A: Sync,
impl<Dyn> Sync for DynMetadata<Dyn> where
Dyn: ?Sized,
impl<K, S> Sync for dashmap::iter_set::OwningIter<K, S>
impl<K, V, S> Sync for RHashMap<K, V, S>
impl<K, V, S> Sync for dashmap::iter::OwningIter<K, V, S>
impl<K, V, S, A> Sync for hashbrown::map::OccupiedEntry<'_, K, V, S, A>
impl<K, V, S, A> Sync for RawOccupiedEntryMut<'_, K, V, S, A>
impl<R, G> Sync for RawReentrantMutex<R, G>
impl<R, G, T> Sync for ReentrantMutex<R, G, T>
impl<R, T> Sync for lock_api::mutex::Mutex<R, T>
impl<R, T> Sync for lock_api::rwlock::RwLock<R, T>
impl<R, T> Sync for lock_api::rwlock::RwLockReadGuard<'_, R, T>
impl<R, T> Sync for lock_api::rwlock::RwLockWriteGuard<'_, R, T>
impl<S, D> Sync for ArrayBase<S, D>
ArrayBase is Sync when the storage type is.
impl<T> !Sync for *const T where
T: ?Sized,
impl<T> !Sync for *mut T where
T: ?Sized,
impl<T> !Sync for core::cell::once::OnceCell<T>
impl<T> !Sync for core::cell::Cell<T> where
T: ?Sized,
impl<T> !Sync for RefCell<T> where
T: ?Sized,
impl<T> !Sync for UnsafeCell<T> where
T: ?Sized,
impl<T> !Sync for NonNull<T> where
T: ?Sized,
NonNull pointers are not Sync because the data they reference may be aliased.
impl<T> !Sync for std::sync::mpsc::Receiver<T>
impl<T> Sync for ThinBox<T>
ThinBox<T> is Sync if T is Sync because the data is owned.
impl<T> Sync for alloc::collections::linked_list::Iter<'_, T> where
T: Sync,
impl<T> Sync for alloc::collections::linked_list::IterMut<'_, T> where
T: Sync,
impl<T> Sync for SyncUnsafeCell<T>
impl<T> Sync for NonZero<T> where
T: ZeroablePrimitive + Sync,
impl<T> Sync for UnsafePinned<T>
impl<T> Sync for ChunksExactMut<'_, T> where
T: Sync,
impl<T> Sync for ChunksMut<'_, T> where
T: Sync,
impl<T> Sync for core::slice::iter::Iter<'_, T> where
T: Sync,
impl<T> Sync for core::slice::iter::IterMut<'_, T> where
T: Sync,
impl<T> Sync for RChunksExactMut<'_, T> where
T: Sync,
impl<T> Sync for RChunksMut<'_, T> where
T: Sync,
impl<T> Sync for AtomicPtr<T>
Available on target_has_atomic_load_store = "ptr" only.
impl<T> Sync for Exclusive<T> where
T: ?Sized,
impl<T> Sync for std::sync::mpmc::Receiver<T> where
T: Send,
impl<T> Sync for std::sync::mpmc::Sender<T> where
T: Send,
impl<T> Sync for std::sync::mpsc::Sender<T> where
T: Send,
impl<T> Sync for std::sync::nonpoison::mutex::MappedMutexGuard<'_, T>
impl<T> Sync for std::sync::nonpoison::mutex::Mutex<T>
T must be Send for Mutex to be Sync.
This ensures that the protected data can be accessed safely from multiple threads
without causing data races or other unsafe behavior.
Mutex<T> provides mutable access to T to one thread at a time. However, it’s essential
for T to be Send because it’s not safe for non-Send structures to be accessed in
this manner. For instance, consider Rc, a non-atomic reference counted smart pointer,
which is not Send. With Rc, we can have multiple copies pointing to the same heap
allocation with a non-atomic reference count. If we were to use Mutex<Rc<_>>, it would
only protect one instance of Rc from shared access, leaving other copies vulnerable
to potential data races.
Also note that it is not necessary for T to be Sync as &T is only made available
to one thread at a time if T is not Sync.
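As an illustrative sketch of that last point: Cell<u32> is Send but not Sync, yet Mutex<Cell<u32>> is Sync, because the lock only ever exposes the Cell to one thread at a time.

    use std::cell::Cell;
    use std::sync::Mutex;
    use std::thread;

    fn main() {
        // Mutex<Cell<u32>> is Sync because Cell<u32> is Send (it need not be Sync).
        let cell_behind_lock = Mutex::new(Cell::new(0_u32));
        thread::scope(|s| {
            for _ in 0..4 {
                s.spawn(|| {
                    let guard = cell_behind_lock.lock().unwrap();
                    guard.set(guard.get() + 1);
                });
            }
        });
        println!("{}", cell_behind_lock.lock().unwrap().get());
    }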
impl<T> Sync for std::sync::nonpoison::mutex::MutexGuard<'_, T>
T must be Sync for a MutexGuard<T> to be Sync
because it is possible to get a &T from &MutexGuard (via Deref).
impl<T> Sync for std::sync::nonpoison::rwlock::MappedRwLockReadGuard<'_, T>
impl<T> Sync for std::sync::nonpoison::rwlock::MappedRwLockWriteGuard<'_, T>
impl<T> Sync for std::sync::nonpoison::rwlock::RwLock<T>
impl<T> Sync for std::sync::nonpoison::rwlock::RwLockReadGuard<'_, T>
impl<T> Sync for std::sync::nonpoison::rwlock::RwLockWriteGuard<'_, T>
impl<T> Sync for OnceLock<T>
impl<T> Sync for std::sync::poison::mutex::MappedMutexGuard<'_, T>
impl<T> Sync for std::sync::poison::mutex::Mutex<T>
T must be Send for Mutex to be Sync.
This ensures that the protected data can be accessed safely from multiple threads
without causing data races or other unsafe behavior.
Mutex<T> provides mutable access to T to one thread at a time. However, it’s essential
for T to be Send because it’s not safe for non-Send structures to be accessed in
this manner. For instance, consider Rc, a non-atomic reference counted smart pointer,
which is not Send. With Rc, we can have multiple copies pointing to the same heap
allocation with a non-atomic reference count. If we were to use Mutex<Rc<_>>, it would
only protect one instance of Rc from shared access, leaving other copies vulnerable
to potential data races.
Also note that it is not necessary for T to be Sync as &T is only made available
to one thread at a time if T is not Sync.
impl<T> Sync for std::sync::poison::mutex::MutexGuard<'_, T>
T must be Sync for a MutexGuard<T> to be Sync
because it is possible to get a &T from &MutexGuard (via Deref).