Struct heron::rapier_plugin::rapier2d::crossbeam::epoch::Shared
A pointer to an object protected by the epoch GC.
The pointer is valid for use only during the lifetime 'g.
The pointer must be properly aligned. Since it is aligned, a tag can be stored into the unused least significant bits of the address.
Implementations
impl<'g, T> Shared<'g, T>
pub fn as_raw(&self) -> *const T
Converts the pointer to a raw pointer (without the tag).
Examples
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;
let o = Owned::new(1234);
let raw = &*o as *const _;
let a = Atomic::from(o);
let guard = &epoch::pin();
let p = a.load(SeqCst, guard);
assert_eq!(p.as_raw(), raw);
pub fn null() -> Shared<'g, T>
Returns a new null pointer.
Examples
use crossbeam_epoch::Shared;
let p = Shared::<i32>::null();
assert!(p.is_null());
pub fn is_null(&self) -> bool
Returns true if the pointer is null.
Examples
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::null();
let guard = &epoch::pin();
assert!(a.load(SeqCst, guard).is_null());
a.store(Owned::new(1234), SeqCst);
assert!(!a.load(SeqCst, guard).is_null());
pub unsafe fn deref(&self) -> &'g T
Dereferences the pointer.
Returns a reference to the pointee that is valid during the lifetime 'g.
Safety
Dereferencing a pointer is unsafe because it could be pointing to invalid memory.
Another concern is the possibility of data races due to lack of proper synchronization. For example, consider the following scenario:
- A thread creates a new object:
a.store(Owned::new(10), Relaxed)
- Another thread reads it:
*a.load(Relaxed, guard).as_ref().unwrap()
The problem is that relaxed orderings don’t synchronize initialization of the object with the read from the second thread. This is a data race. A possible solution would be to use Release and Acquire orderings.
Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(1234);
let guard = &epoch::pin();
let p = a.load(SeqCst, guard);
unsafe {
assert_eq!(p.deref(), &1234);
}
pub unsafe fn deref_mut(&mut self) -> &'g mut T
Dereferences the pointer.
Returns a mutable reference to the pointee that is valid during the lifetime 'g.
Safety
- There is no guarantee that there are no more threads attempting to read/write from/to the actual object at the same time. The user must know that there are no concurrent accesses towards the object itself.
- Other than the above, all safety concerns of deref() apply here.
Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(vec![1, 2, 3, 4]);
let guard = &epoch::pin();
let mut p = a.load(SeqCst, guard);
unsafe {
assert!(!p.is_null());
let b = p.deref_mut();
assert_eq!(b, &vec![1, 2, 3, 4]);
b.push(5);
assert_eq!(b, &vec![1, 2, 3, 4, 5]);
}
let p = a.load(SeqCst, guard);
unsafe {
assert_eq!(p.deref(), &vec![1, 2, 3, 4, 5]);
}
pub unsafe fn as_ref(&self) -> Option<&'g T>
Converts the pointer to a reference.
Returns None if the pointer is null, or else a reference to the object wrapped in Some.
Safety
Dereferencing a pointer is unsafe because it could be pointing to invalid memory.
Another concern is the possibility of data races due to lack of proper synchronization. For example, consider the following scenario:
- A thread creates a new object:
a.store(Owned::new(10), Relaxed)
- Another thread reads it:
*a.load(Relaxed, guard).as_ref().unwrap()
The problem is that relaxed orderings don’t synchronize initialization of the object with the read from the second thread. This is a data race. A possible solution would be to use Release and Acquire orderings.
Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(1234);
let guard = &epoch::pin();
let p = a.load(SeqCst, guard);
unsafe {
assert_eq!(p.as_ref(), Some(&1234));
}
pub unsafe fn into_owned(self) -> Owned<T>
Takes ownership of the pointee.
Panics
Panics if this pointer is null, but only in debug mode.
Safety
This method may be called only if the pointer is valid and nobody else is holding a reference to the same object.
Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(1234);
unsafe {
let guard = &epoch::unprotected();
let p = a.load(SeqCst, guard);
drop(p.into_owned());
}
pub fn tag(&self) -> usize
Returns the tag stored within the pointer.
Examples
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::<u64>::from(Owned::new(0u64).with_tag(2));
let guard = &epoch::pin();
let p = a.load(SeqCst, guard);
assert_eq!(p.tag(), 2);
pub fn with_tag(&self, tag: usize) -> Shared<'g, T>
Returns the same pointer, but tagged with tag. tag is truncated to fit into the unused bits of the pointer to T.
Examples
use crossbeam_epoch::{self as epoch, Atomic};
use std::sync::atomic::Ordering::SeqCst;
let a = Atomic::new(0u64);
let guard = &epoch::pin();
let p1 = a.load(SeqCst, guard);
let p2 = p1.with_tag(2);
assert_eq!(p1.tag(), 0);
assert_eq!(p2.tag(), 2);
assert_eq!(p1.as_raw(), p2.as_raw());
Trait Implementations
pub fn partial_cmp(&self, other: &Shared<'g, T>) -> Option<Ordering>
This method returns an ordering between self and other values if one exists. Read more
This method tests less than (for self and other) and is used by the < operator. Read more
This method tests less than or equal to (for self and other) and is used by the <= operator. Read more
This method tests greater than (for self and other) and is used by the > operator. Read more
pub fn into_usize(self) -> usize
Returns the machine representation of the pointer.
pub unsafe fn from_usize(data: usize) -> Shared<'_, T>
Returns a new pointer pointing to the tagged pointer data. Read more
Auto Trait Implementations
impl<'g, T: ?Sized> RefUnwindSafe for Shared<'g, T> where
T: RefUnwindSafe,
impl<'g, T: ?Sized> UnwindSafe for Shared<'g, T> where
T: RefUnwindSafe,
Blanket Implementations
Mutably borrows from an owned value. Read more
impl<T> Downcast for T where
T: Any,
Convert Box<dyn Trait> (where Trait: Downcast) to Box<dyn Any>. Box<dyn Any> can then be further downcast into Box<ConcreteType> where ConcreteType implements Trait. Read more
pub fn into_any_rc(self: Rc<T>) -> Rc<dyn Any + 'static>
Convert Rc<Trait> (where Trait: Downcast) to Rc<Any>. Rc<Any> can then be further downcast into Rc<ConcreteType> where ConcreteType implements Trait. Read more
Convert &Trait (where Trait: Downcast) to &Any. This is needed since Rust cannot generate &Any’s vtable from &Trait’s. Read more
pub fn as_any_mut(&mut self) -> &mut (dyn Any + 'static)
Convert &mut Trait (where Trait: Downcast) to &mut Any. This is needed since Rust cannot generate &mut Any’s vtable from &mut Trait’s. Read more
impl<T> FromWorld for T where
T: Default,
pub fn from_world(_world: &mut World) -> T
Creates Self using data from the given World.
impl<T> Pointable for T
impl<SS, SP> SupersetOf<SS> for SP where
SS: SubsetOf<SP>,
The inverse inclusion map: attempts to construct self from the equivalent element of its superset. Read more
pub fn is_in_subset(&self) -> bool
Checks if self is actually part of its subset T (and can be converted to it).
pub fn to_subset_unchecked(&self) -> SS
Use with care! Same as self.to_subset but without any property checks. Always succeeds.
pub fn from_subset(element: &SS) -> SP
The inclusion map: converts self to the equivalent element of its superset.
pub fn vzip(self) -> V
Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more