Struct crossbeam_utils::CachePadded

pub struct CachePadded<T> { /* fields omitted */ }

Pads and aligns a value to the length of a cache line.

In concurrent programming, sometimes it is desirable to make sure commonly accessed pieces of data are not placed into the same cache line. Updating an atomic value invalidates the whole cache line it belongs to, which makes the next access to the same cache line slower for other CPU cores. Use CachePadded to ensure updating one piece of data doesn't invalidate other cached data.

Size and alignment

Cache lines are assumed to be N bytes long, depending on the architecture:

  • On x86-64, N = 128.
  • On all others, N = 64.

Note that N is just a reasonable guess and is not guaranteed to match the actual cache line length of the machine the program is running on. On modern Intel architectures, the spatial prefetcher pulls pairs of 64-byte cache lines at a time, so we pessimistically assume that cache lines are 128 bytes long.

The size of CachePadded<T> is the smallest multiple of N bytes large enough to accommodate a value of type T.

The alignment of CachePadded<T> is the maximum of N bytes and the alignment of T.
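
These two properties can be checked with mem::size_of and mem::align_of. The sketch below avoids hard-coding N by reading it from the alignment of a padded u8, which per the rule above is exactly N:

use crossbeam_utils::CachePadded;
use std::mem::{align_of, size_of};

// The assumed cache-line length N for the current target.
let n = align_of::<CachePadded<u8>>();

// Size is the smallest multiple of N that fits the value:
// a single byte still occupies a whole padded slot.
assert_eq!(size_of::<CachePadded<u8>>(), n);
assert_eq!(size_of::<CachePadded<[u8; 100]>>() % n, 0);

// Alignment is the maximum of N and the alignment of T.
assert!(align_of::<CachePadded<u64>>() >= n);
assert!(align_of::<CachePadded<u64>>() >= align_of::<u64>());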

Examples

Alignment and padding:

use crossbeam_utils::CachePadded;

let array = [CachePadded::new(1i8), CachePadded::new(2i8)];
let addr1 = &*array[0] as *const i8 as usize;
let addr2 = &*array[1] as *const i8 as usize;

assert!(addr2 - addr1 >= 64);
assert_eq!(addr1 % 64, 0);
assert_eq!(addr2 % 64, 0);

When building a concurrent queue with a head and a tail index, it is wise to place them in different cache lines so that concurrent threads pushing and popping elements don't invalidate each other's cache lines:

use crossbeam_utils::CachePadded;
use std::sync::atomic::AtomicUsize;

struct Queue<T> {
    head: CachePadded<AtomicUsize>,
    tail: CachePadded<AtomicUsize>,
    buffer: *mut T,
}
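
A simplified sketch of how those padded indices might be exercised (the element buffer is omitted, and the producer/consumer split is purely illustrative): each thread repeatedly updates only its own index, so the two hot counters stay on separate cache lines:

use crossbeam_utils::CachePadded;
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

struct Indices {
    head: CachePadded<AtomicUsize>,
    tail: CachePadded<AtomicUsize>,
}

let indices = Arc::new(Indices {
    head: CachePadded::new(AtomicUsize::new(0)),
    tail: CachePadded::new(AtomicUsize::new(0)),
});

// The "producer" advances only the tail and the "consumer" advances
// only the head, so neither invalidates the other's cache line.
let producer = {
    let indices = Arc::clone(&indices);
    thread::spawn(move || {
        for _ in 0..1000 {
            indices.tail.fetch_add(1, Ordering::Relaxed);
        }
    })
};
let consumer = {
    let indices = Arc::clone(&indices);
    thread::spawn(move || {
        for _ in 0..1000 {
            indices.head.fetch_add(1, Ordering::Relaxed);
        }
    })
};

producer.join().unwrap();
consumer.join().unwrap();

assert_eq!(indices.head.load(Ordering::Relaxed), 1000);
assert_eq!(indices.tail.load(Ordering::Relaxed), 1000);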

Methods

impl<T> CachePadded<T>[src]

pub fn new(t: T) -> CachePadded<T>[src]

Pads and aligns a value to the length of a cache line.

Examples

use crossbeam_utils::CachePadded;

let padded_value = CachePadded::new(1);

pub fn into_inner(self) -> T[src]

Returns the inner value.

Examples

use crossbeam_utils::CachePadded;

let padded_value = CachePadded::new(7);
let value = padded_value.into_inner();
assert_eq!(value, 7);

Trait Implementations

impl<T: Sync> Sync for CachePadded<T>[src]

impl<T: Clone> Clone for CachePadded<T>[src]

fn clone_from(&mut self, source: &Self) 1.0.0 [src]

Performs copy-assignment from source. Read more

impl<T: Default> Default for CachePadded<T>[src]

impl<T> From<T> for CachePadded<T>[src]
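
Because of the From<T> implementation, a value can also be converted into its padded form with from or into; a minimal sketch:

use crossbeam_utils::CachePadded;

let a = CachePadded::from(7i32);
let b: CachePadded<i32> = 7i32.into();
assert_eq!(*a, *b);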

impl<T: PartialEq> PartialEq<CachePadded<T>> for CachePadded<T>[src]

impl<T: Copy> Copy for CachePadded<T>[src]

impl<T: Send> Send for CachePadded<T>[src]

impl<T: Eq> Eq for CachePadded<T>[src]

impl<T> Deref for CachePadded<T>[src]

type Target = T

The resulting type after dereferencing.

impl<T: Debug> Debug for CachePadded<T>[src]

impl<T> DerefMut for CachePadded<T>[src]
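
Through Deref and DerefMut, the inner value can be used in place without unwrapping it; a minimal sketch:

use crossbeam_utils::CachePadded;

let mut padded = CachePadded::new(vec![1, 2, 3]);

// Deref gives shared access to the inner value...
assert_eq!(padded.len(), 3);

// ...and DerefMut gives mutable access.
padded.push(4);
assert_eq!(*padded, vec![1, 2, 3, 4]);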

impl<T: Hash> Hash for CachePadded<T>[src]

fn hash_slice<H>(data: &[Self], state: &mut H) where
    H: Hasher
1.3.0 [src]

Feeds a slice of this type into the given Hasher. Read more

Auto Trait Implementations

impl<T> Unpin for CachePadded<T> where
    T: Unpin

impl<T> UnwindSafe for CachePadded<T> where
    T: UnwindSafe

impl<T> RefUnwindSafe for CachePadded<T> where
    T: RefUnwindSafe

Blanket Implementations

impl<T> From<T> for T[src]

impl<T> ToOwned for T where
    T: Clone [src]

type Owned = T

The resulting type after obtaining ownership.

impl<T, U> Into<U> for T where
    U: From<T>, [src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, [src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, [src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

impl<T> Borrow<T> for T where
    T: ?Sized [src]

impl<T> BorrowMut<T> for T where
    T: ?Sized [src]

impl<T> Any for T where
    T: 'static + ?Sized [src]