Struct lock_api::MappedRwLockWriteGuard
#[must_use]
pub struct MappedRwLockWriteGuard<'a, R: RawRwLock + 'a, T: ?Sized + 'a> { /* fields omitted */ }
An RAII write lock guard returned by RwLockWriteGuard::map, which can point to a
subfield of the protected data.
The main difference between MappedRwLockWriteGuard and RwLockWriteGuard is that the
former doesn't support temporarily unlocking and re-locking, since that
could introduce soundness issues if the locked object is modified by another
thread.
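For illustration, here is a minimal sketch of how such a guard is obtained. It assumes the parking_lot crate, which builds its RwLock on lock_api and re-exports these guard types, and a hypothetical Point struct; it is not part of this API's documentation.

```rust
use parking_lot::{MappedRwLockWriteGuard, RwLock, RwLockWriteGuard};

// Hypothetical data type used only for this sketch.
struct Point { x: i32, y: i32 }

fn main() {
    let lock = RwLock::new(Point { x: 1, y: 2 });

    // Take the write lock, then narrow the guard to the `x` field only.
    let guard = lock.write();
    let mut x: MappedRwLockWriteGuard<i32> = RwLockWriteGuard::map(guard, |p| &mut p.x);
    *x += 10;

    // The RwLock itself is released when the mapped guard is dropped.
    drop(x);
    assert_eq!(lock.read().x, 11);
}
```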
Methods
impl<'a, R: RawRwLock + 'a, T: ?Sized + 'a> MappedRwLockWriteGuard<'a, R, T>

pub fn map<U: ?Sized, F>(orig: Self, f: F) -> MappedRwLockWriteGuard<'a, R, U>
where
    F: FnOnce(&mut T) -> &mut U,

Make a new MappedRwLockWriteGuard for a component of the locked data.
This operation cannot fail as the MappedRwLockWriteGuard passed
in already locked the data.
This is an associated function that needs to be
used as MappedRwLockWriteGuard::map(...). A method would interfere with methods of
the same name on the contents of the locked data.
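As an illustration, a sketch of chaining projections with the associated-function call style described above. It assumes parking_lot for a concrete lock and two hypothetical structs, Outer and Inner:

```rust
use parking_lot::{MappedRwLockWriteGuard, RwLock, RwLockWriteGuard};

// Hypothetical nested types used only for this sketch.
struct Inner { value: i32 }
struct Outer { inner: Inner }

fn main() {
    let lock = RwLock::new(Outer { inner: Inner { value: 1 } });

    // First projection: Outer -> Inner, via RwLockWriteGuard::map.
    let inner: MappedRwLockWriteGuard<Inner> =
        RwLockWriteGuard::map(lock.write(), |o| &mut o.inner);

    // Second projection, written as an associated function call so it does
    // not collide with any method named `map` on the guarded data.
    let mut value: MappedRwLockWriteGuard<i32> =
        MappedRwLockWriteGuard::map(inner, |i| &mut i.value);
    *value = 42;

    drop(value);
    assert_eq!(lock.read().inner.value, 42);
}
```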
impl<'a, R: RawRwLockDowngrade + 'a, T: ?Sized + 'a> MappedRwLockWriteGuard<'a, R, T>

pub fn downgrade(s: Self) -> MappedRwLockReadGuard<'a, R, T>

Atomically downgrades a write lock into a read lock without allowing any writers to take exclusive access of the lock in the meantime.
Note that if there are any writers currently waiting to take the lock then other readers may not be able to acquire the lock even if it was downgraded.
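A hedged sketch of downgrading, assuming a version of the crate where this method is available (as documented above) and parking_lot for a raw lock that implements RawRwLockDowngrade:

```rust
use parking_lot::{MappedRwLockReadGuard, MappedRwLockWriteGuard, RwLock, RwLockWriteGuard};

fn main() {
    let lock = RwLock::new(vec![1, 2, 3]);

    let mut first: MappedRwLockWriteGuard<i32> =
        RwLockWriteGuard::map(lock.write(), |v| &mut v[0]);
    *first = 10;

    // Atomically trade exclusive access for shared access; no writer can
    // acquire the lock in between the two states.
    let first: MappedRwLockReadGuard<i32> = MappedRwLockWriteGuard::downgrade(first);
    assert_eq!(*first, 10);
}
```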
impl<'a, R: RawRwLockFair + 'a, T: ?Sized + 'a> MappedRwLockWriteGuard<'a, R, T>

pub fn unlock_fair(s: Self)

Unlocks the RwLock using a fair unlock protocol.
By default, RwLock is unfair and allows the current thread to re-lock
the RwLock before another has the chance to acquire the lock, even if
that thread has been blocked on the RwLock for a long time. This is
the default because it allows much higher throughput as it avoids
forcing a context switch on every RwLock unlock. This can result in one
thread acquiring a RwLock many more times than other threads.
However in some cases it can be beneficial to ensure fairness by forcing
the lock to pass on to a waiting thread if there is one. This is done by
using this method instead of dropping the MappedRwLockWriteGuard normally.
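A sketch of fair unlocking, assuming parking_lot's RwLock (whose raw lock implements RawRwLockFair):

```rust
use parking_lot::{MappedRwLockWriteGuard, RwLock, RwLockWriteGuard};

fn main() {
    let lock = RwLock::new([0u8; 4]);

    let mut byte: MappedRwLockWriteGuard<u8> =
        RwLockWriteGuard::map(lock.write(), |buf| &mut buf[0]);
    *byte = 1;

    // Instead of dropping the guard (the default, unfair unlock), hand the
    // lock directly to a waiting thread, if there is one.
    MappedRwLockWriteGuard::unlock_fair(byte);
}
```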
Trait Implementations
impl<'a, R: RawRwLock + 'a, T: ?Sized + Sync + 'a> Sync for MappedRwLockWriteGuard<'a, R, T>

impl<'a, R: RawRwLock + 'a, T: ?Sized + 'a> Send for MappedRwLockWriteGuard<'a, R, T>
where
    R::GuardMarker: Send,

impl<'a, R: RawRwLock + 'a, T: ?Sized + 'a> Deref for MappedRwLockWriteGuard<'a, R, T>

type Target = T

The resulting type after dereferencing.

fn deref(&self) -> &T

Dereferences the value.

impl<'a, R: RawRwLock + 'a, T: ?Sized + 'a> DerefMut for MappedRwLockWriteGuard<'a, R, T>

impl<'a, R: RawRwLock + 'a, T: ?Sized + 'a> Drop for MappedRwLockWriteGuard<'a, R, T>