Struct atomic_float::AtomicF64

#[repr(transparent)]
pub struct AtomicF64(_);

A floating point type which can be safely shared between threads.

This type has the same in-memory representation as the underlying floating point type, f64.

See the module documentation for core::sync::atomic for information about the portability of various atomics (this one is mostly as portable as AtomicU64, with the additional caveat that the platform must support 64-bit floats).

Example

The intended use case is situations like this:

static DELTA_TIME: AtomicF64 = AtomicF64::new(1.0);

// In some main simulation loop:
DELTA_TIME.store(compute_delta_time(), Ordering::Release);

// elsewhere, perhaps on other threads:
let dt = DELTA_TIME.load(Ordering::Acquire);
// Use `dt` to compute simulation...

It is also intended for any other case where the overhead of locking would be too substantial.

Note that when used like this (with Acquire and Release orderings), on x86_64 this compiles to the same code as you would get from a C++ global (or a Rust static mut), while offering full synchronization.

(While caveats exist, the cases where you'd need the total order guaranteed by SeqCst for something like AtomicF64 seem very rare).
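To make this pattern concrete, here is a hedged, self-contained sketch of the loop above shared across threads; compute_delta_time is just a placeholder for whatever the real simulation measures:

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;

static DELTA_TIME: AtomicF64 = AtomicF64::new(1.0);

// Placeholder for however the simulation actually measures frame time.
fn compute_delta_time() -> f64 {
    1.0 / 60.0
}

// A worker thread reads whichever value was most recently published.
let reader = std::thread::spawn(|| {
    let dt = DELTA_TIME.load(Ordering::Acquire);
    assert!(dt > 0.0);
});

// The simulation loop publishes a new value; this Release store pairs with
// the Acquire load above.
DELTA_TIME.store(compute_delta_time(), Ordering::Release);
reader.join().unwrap();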

Implementation

Note: These details are not part of the stability guarantee of this crate, and are subject to change without a semver-breaking change.

Under the hood we use a transparent UnsafeCell<f64>, and cast the &UnsafeCell<f64> to an &AtomicU64 in order to perform atomic operations.

However, read-modify-write operations like fetch_add must be implemented as compare-exchange loops, and so are considerably slower than the equivalent integer atomics.
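A minimal sketch of this technique (a hypothetical SketchAtomicF64, not the crate's actual code, and assuming a platform where f64 and AtomicU64 have the same size and alignment):

use core::cell::UnsafeCell;
use core::sync::atomic::{AtomicU64, Ordering};

#[repr(transparent)]
struct SketchAtomicF64(UnsafeCell<f64>);

// Safety (sketch): every access to the cell goes through the AtomicU64 view below.
unsafe impl Sync for SketchAtomicF64 {}

impl SketchAtomicF64 {
    const fn new(v: f64) -> Self {
        Self(UnsafeCell::new(v))
    }

    fn as_bits(&self) -> &AtomicU64 {
        // Sound (under the stated assumptions) because Self is repr(transparent)
        // over UnsafeCell<f64>, which matches AtomicU64's size and alignment.
        unsafe { &*(self as *const Self as *const AtomicU64) }
    }

    fn load(&self, order: Ordering) -> f64 {
        f64::from_bits(self.as_bits().load(order))
    }

    // Read-modify-write operations become compare-exchange loops on the bits,
    // which is why they are slower than their integer counterparts.
    fn fetch_add(&self, val: f64, order: Ordering) -> f64 {
        let mut old = self.as_bits().load(Ordering::Relaxed);
        loop {
            let new = (f64::from_bits(old) + val).to_bits();
            match self
                .as_bits()
                .compare_exchange_weak(old, new, order, Ordering::Relaxed)
            {
                Ok(prev) => return f64::from_bits(prev),
                Err(current) => old = current,
            }
        }
    }
}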

Implementations

impl AtomicF64[src]

pub const fn new(float: f64) -> Self[src]

Initialize an AtomicF64 from an f64.

Example

Use as a variable

let x = AtomicF64::new(3.0f64);
assert_eq!(x.load(Relaxed), 3.0f64);

Use as a static:

static A_STATIC: AtomicF64 = AtomicF64::new(800.0);
assert_eq!(A_STATIC.load(Relaxed), 800.0);

pub fn get_mut(&mut self) -> &mut f64[src]

Returns a mutable reference to the underlying float.

This is safe because the mutable reference guarantees that no other threads are concurrently accessing the atomic data.

Example

let mut some_float = AtomicF64::new(1.0);
assert_eq!(*some_float.get_mut(), 1.0);
*some_float.get_mut() += 1.0;
assert_eq!(some_float.load(Ordering::SeqCst), 2.0);

pub fn into_inner(self) -> f64[src]

Consumes the atomic and returns the contained value.

This is safe because passing self by value guarantees that no other threads are concurrently accessing the atomic data.

Example

let v = AtomicF64::new(6.0);
assert_eq!(v.into_inner(), 6.0f64);

pub fn load(&self, ordering: Ordering) -> f64[src]

Loads a value from the atomic float.

load takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Acquire and Relaxed.

Panics

Panics if ordering is Release or AcqRel.

Example

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;
let v = AtomicF64::new(22.5);
assert_eq!(v.load(Ordering::SeqCst), 22.5);

pub fn store(&self, value: f64, ordering: Ordering)[src]

Store a value into the atomic float.

store takes an Ordering argument which describes the memory ordering of this operation. Possible values are SeqCst, Release and Relaxed.

Panics

Panics if ordering is Acquire or AcqRel.

Example

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;
let v = AtomicF64::new(22.5);
assert_eq!(v.load(Ordering::SeqCst), 22.5);
v.store(30.0, Ordering::SeqCst);
assert_eq!(v.load(Ordering::SeqCst), 30.0);

pub fn swap(&self, new_value: f64, ordering: Ordering) -> f64[src]

Stores a value into the atomic float, returning the previous value.

swap takes an Ordering argument which describes the memory ordering of this operation. All ordering modes are possible. Note that using Acquire makes the store part of this operation Relaxed, and using Release makes the load part Relaxed.

Example

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;
let v = AtomicF64::new(4.5);
assert_eq!(v.swap(100.0, Ordering::Relaxed), 4.5);
assert_eq!(v.load(Ordering::Relaxed), 100.0);

pub fn compare_and_swap(&self, current: f64, new: f64, order: Ordering) -> f64[src]

Stores a value into the atomic float if the current value is bitwise identical to the current argument.

The return value is always the previous value. If it is equal to current, then the value was updated.

compare_and_swap also takes an Ordering argument which describes the memory ordering of this operation. Notice that even when using AcqRel, the operation might fail and hence just perform an Acquire load, but not have Release semantics. Using Acquire makes the store part of this operation Relaxed if it happens, and using Release makes the load part Relaxed.

Caveats

As the current must be bitwise identical to the previous value, you should not get the current value using any sort of arithmetic (both because of rounding, and to avoid any situation where -0.0 and +0.0 would be compared). Additionally, on some platforms (WASM and ASM.js currently) LLVM will canonicalize NaNs during loads, which can cause unexpected behavior here — typically in the other direction (two values being unexpectedly equal).

In practice, typical patterns for CaS tend to avoid these issues, but you're encouraged to avoid relying on the behavior of CaS-family APIs in the face of rounding, signed zero, and NaNs.

Example

let v = AtomicF64::new(5.0);
assert_eq!(v.compare_and_swap(5.0, 10.0, Ordering::Relaxed), 5.0);
assert_eq!(v.load(Ordering::Relaxed), 10.0);

assert_eq!(v.compare_and_swap(6.0, 12.0, Ordering::Relaxed), 10.0);
assert_eq!(v.load(Ordering::Relaxed), 10.0);
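Because the comparison is bitwise, a stored -0.0 will not match a current argument of +0.0 even though the two compare equal numerically. A small sketch of that pitfall:

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;

let v = AtomicF64::new(-0.0);
// -0.0 == 0.0 numerically, but the bit patterns differ, so no store happens.
let prev = v.compare_and_swap(0.0, 1.0, Ordering::Relaxed);
assert_eq!(prev.to_bits(), (-0.0f64).to_bits());
assert_eq!(v.load(Ordering::Relaxed).to_bits(), (-0.0f64).to_bits());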

pub fn compare_exchange(
    &self,
    current: f64,
    new: f64,
    success: Ordering,
    failure: Ordering
) -> Result<f64, f64>
[src]

Stores a value into the atomic float if the current value is bitwise identical to the current argument.

The return value is a result indicating whether the new value was written and containing the previous value. On success this value is guaranteed to be bitwise identical to current.

compare_exchange takes two Ordering arguments to describe the memory ordering of this operation. The first describes the required ordering if the operation succeeds while the second describes the required ordering when the operation fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed and must be equivalent to or weaker than the success ordering.

Notes

Note that in many cases, while loops whose condition contains a compare_exchange operation are better written to use compare_exchange_weak in the condition instead (as on weakly ordered platforms like ARM, the compare_exchange operation itself can require a loop to perform).

Caveats

As the current parameter must be bitwise identical to the previous value, you should not get the current value using any sort of arithmetic (both because of rounding, and to avoid any situation where -0.0 and +0.0 would be compared). Additionally, on Wasm, in some cases NaN values have been known to cause problems for non-typical usage of this API. See AtomicF64::as_atomic_bits if performing the compare_exchange on the raw bits of this atomic float would solve an issue for you.

Example

let v = AtomicF64::new(5.0);
assert_eq!(v.compare_exchange(5.0, 10.0, Acquire, Relaxed), Ok(5.0));
assert_eq!(v.load(Relaxed), 10.0);

assert_eq!(v.compare_exchange(6.0, 12.0, SeqCst, Relaxed), Err(10.0));
assert_eq!(v.load(Relaxed), 10.0);

pub fn compare_exchange_weak(
    &self,
    current: f64,
    new: f64,
    success: Ordering,
    failure: Ordering
) -> Result<f64, f64>
[src]

Stores a value into the atomic float if the current value is bitwise identical to the current argument.

Unlike compare_exchange, this function is allowed to spuriously fail even when the comparison succeeds, which can result in more efficient code on some platforms. The return value is a result indicating whether the new value was written and containing the previous value.

compare_exchange_weak takes two Ordering arguments to describe the memory ordering of this operation. The first describes the required ordering if the operation succeeds while the second describes the required ordering when the operation fails. Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the successful load Relaxed. The failure ordering can only be SeqCst, Acquire or Relaxed and must be equivalent to or weaker than the success ordering.

Caveats

As the current parameter must be bitwise identical to the previous value, you should not get the current value using any sort of arithmetic (both because of rounding, and to avoid any situation where -0.0 and +0.0 would be compared). Additionally, on Wasm, in some cases NaN values have been known to cause problems for non-typical usage of this API. See AtomicF64::as_atomic_bits if performing the compare_exchange on the raw bits of this atomic float would solve an issue for you.

Example

Note that this sort of CaS loop should generally use fetch_update instead.

let v = AtomicF64::new(5.0);
let mut old = v.load(Relaxed);
loop {
    let new = old * 2.0;
    match v.compare_exchange_weak(old, new, SeqCst, Relaxed) {
        Ok(_) => break,
        Err(x) => old = x,
    }
}
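As noted above, the same update is usually clearer with fetch_update; a brief sketch of the equivalent:

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering::{Relaxed, SeqCst};

let v = AtomicF64::new(5.0);
// Doubles the stored value, retrying internally on contention.
let prev = v.fetch_update(SeqCst, Relaxed, |old| Some(old * 2.0)).unwrap();
assert_eq!(prev, 5.0);
assert_eq!(v.load(Relaxed), 10.0);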

pub fn fetch_update<F>(
    &self,
    set_order: Ordering,
    fetch_order: Ordering,
    update: F
) -> Result<f64, f64> where
    F: FnMut(f64) -> Option<f64>, 
[src]

Fetches the value, and applies a function to it that returns an optional new value. Returns a Result of Ok(previous_value) if the function returned Some(_), else Err(previous_value).

Note: This may call the function multiple times if the value has been changed from other threads in the meantime, as long as the function returns Some(_), but the function will have been applied only once to the stored value.

fetch_update takes two Ordering arguments to describe the memory ordering of this operation. The first describes the required ordering for when the operation finally succeeds while the second describes the required ordering for loads. These correspond to the success and failure orderings of compare_exchange respectively.

Using Acquire as success ordering makes the store part of this operation Relaxed, and using Release makes the final successful load Relaxed. The (failed) load ordering can only be SeqCst, Acquire or Relaxed and must be equivalent to or weaker than the success ordering.

Example

let x = AtomicF64::new(7.0);

assert_eq!(x.fetch_update(SeqCst, SeqCst, |_| None), Err(7.0));
assert_eq!(x.fetch_update(SeqCst, SeqCst, |x| Some(x + 1.0)), Ok(7.0));
assert_eq!(x.fetch_update(SeqCst, SeqCst, |x| Some(x + 1.0)), Ok(8.0));
assert_eq!(x.load(SeqCst), 9.0);

pub fn fetch_add(&self, val: f64, order: Ordering) -> f64[src]

Adds to the current value, returning the previous value.

Because this returns the previous value, you may want to call it like: atom.fetch_add(x, order) + x

Examples

let x = AtomicF64::new(7.0);

assert_eq!(x.fetch_add(2.0, Relaxed), 7.0);
assert_eq!(x.fetch_add(1.0, SeqCst), 9.0);
assert_eq!(x.fetch_add(-100.0, AcqRel), 10.0);

pub fn fetch_sub(&self, val: f64, order: Ordering) -> f64[src]

Subtracts from the current value, returning the previous value.

Because this returns the previous value, you may want to call it like: atom.fetch_sub(x, order) - x

Note: This operation uses fetch_update under the hood, and is likely to be slower than the equivalent operation for atomic integers.

Examples

let x = AtomicF64::new(7.0);
assert_eq!(x.fetch_sub(2.0, Relaxed), 7.0);
assert_eq!(x.fetch_sub(-1.0, SeqCst), 5.0);
assert_eq!(x.fetch_sub(0.5, AcqRel), 6.0);

pub fn fetch_abs(&self, order: Ordering) -> f64[src]

Computes the absolute value of the current value and stores it, returning the previous value.

Because this returns the previous value, you may want to call it like: atom.fetch_abs(order).abs()

Examples

let x = AtomicF64::new(-7.0);
assert_eq!(x.fetch_abs(Relaxed), -7.0);
assert_eq!(x.fetch_abs(SeqCst), 7.0);

pub fn fetch_neg(&self, order: Ordering) -> f64[src]

Negates the current value, returning the previous value.

As a result of returning the previous value, you may want to invoke it like: -atom.fetch_neg(Relaxed).

Examples

let x = AtomicF64::new(-7.0);
assert_eq!(x.fetch_neg(Relaxed), -7.0);
assert_eq!(x.fetch_neg(SeqCst), 7.0);
assert_eq!(x.fetch_neg(AcqRel), -7.0);

pub fn fetch_min(&self, value: f64, order: Ordering) -> f64[src]

Minimum with the current value.

Finds the minimum of the current value and the argument value, and sets the new value to the result.

Returns the previous value. Because of this, you may want to call it like: atom.fetch_min(x, order).min(x)

Note: This operation uses fetch_update under the hood, and is likely to be slower than the equivalent operation for atomic integers.

Examples


let foo = AtomicF64::new(23.0);
assert_eq!(foo.fetch_min(42.0, Ordering::Relaxed), 23.0);
assert_eq!(foo.load(Ordering::Relaxed), 23.0);
assert_eq!(foo.fetch_min(22.0, Ordering::Relaxed), 23.0);
assert_eq!(foo.load(Ordering::Relaxed), 22.0);
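As the note above says, fetch_min is built on fetch_update; a rough sketch of an equivalent formulation (not the crate's exact code):

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;

let foo = AtomicF64::new(23.0);
// Clamp the stored value down to 22.0, retrying on contention.
let prev = foo
    .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |cur| Some(cur.min(22.0)))
    .unwrap();
assert_eq!(prev, 23.0);
assert_eq!(foo.load(Ordering::Relaxed), 22.0);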

pub fn fetch_max(&self, value: f64, order: Ordering) -> f64[src]

Maximum with the current value.

Finds the maximum of the current value and the argument value, and sets the new value to the result.

Returns the previous value. Because of this, you may want to call it like: atom.fetch_max(x, order).max(x)

Note: This operation uses fetch_update under the hood, and is likely to be slower than the equivalent operation for atomic integers.

Examples


let foo = AtomicF64::new(23.0);
assert_eq!(foo.fetch_max(22.0, Ordering::Relaxed), 23.0);
assert_eq!(foo.load(Ordering::Relaxed), 23.0);
assert_eq!(foo.fetch_max(42.0, Ordering::Relaxed), 23.0);
assert_eq!(foo.load(Ordering::Relaxed), 42.0);

pub fn as_atomic_bits(&self) -> &AtomicU64[src]

Returns a reference to an atomic integer which can be used to access the atomic float's underlying bits in a thread-safe manner.

This is essentially a transmute::<&Self, &AtomicU64>(self), and is zero cost.

Motivation

This is exposed as an escape hatch because of the caveats around the AtomicF64 CaS-family APIs (compare_and_swap, compare_exchange, compare_exchange_weak, ...) and the notion of bitwise identicality which they require being somewhat problematic for NaNs, especially on targets like Wasm (see rust-lang/rust#73328).

Despite how bad this might sound, in practice we're fairly safe: LLVM almost never optimizes through atomic operations, this library is written to try to avoid potential issues from most naive usage, and I'm optimistic the situation will clean itself up in the short-to-medium-term future.

However, if you need peace of mind, or find yourself in a case where you suspect you're hitting this issue, you can access the underlying atomic value using this function.

Examples

let v = AtomicF64::new(22.5);
assert_eq!(v.as_atomic_bits().load(Ordering::Relaxed), 22.5f64.to_bits());
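For example, a compare-exchange expressed on the raw bits sidesteps float comparison semantics entirely (a hedged sketch):

use atomic_float::AtomicF64;
use std::sync::atomic::Ordering;

let v = AtomicF64::new(-0.0);
let bits = v.as_atomic_bits();
// -0.0 and +0.0 have distinct bit patterns, and NaN payloads can be matched
// exactly, so comparing on bits avoids the CaS caveats discussed earlier.
let old = (-0.0f64).to_bits();
let new = 1.5f64.to_bits();
assert_eq!(bits.compare_exchange(old, new, Ordering::AcqRel, Ordering::Acquire), Ok(old));
assert_eq!(v.load(Ordering::Relaxed), 1.5);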

Trait Implementations

impl Debug for AtomicF64[src]

Equivalent to <f64 as core::fmt::Debug>::fmt.

Example

let v = AtomicF64::new(40.0);
assert_eq!(format!("{:?}", v), format!("{:?}", 40.0));

impl Default for AtomicF64[src]

Return a zero-initialized atomic.

Example

let x = AtomicF64::default();
assert_eq!(x.load(Ordering::SeqCst), 0.0);

impl From<f64> for AtomicF64[src]

Equivalent to AtomicF64::new.

Example

let v = AtomicF64::from(10.0);
assert_eq!(v.load(Ordering::SeqCst), 10.0);

impl Send for AtomicF64[src]

impl Sync for AtomicF64[src]

Auto Trait Implementations

impl Unpin for AtomicF64

Blanket Implementations

impl<T> Any for T where
    T: 'static + ?Sized
[src]

impl<T> Borrow<T> for T where
    T: ?Sized
[src]

impl<T> BorrowMut<T> for T where
    T: ?Sized
[src]

impl<T> From<T> for T[src]

impl<T, U> Into<U> for T where
    U: From<T>, 
[src]

impl<T, U> TryFrom<U> for T where
    U: Into<T>, 
[src]

type Error = Infallible

The type returned in the event of a conversion error.

impl<T, U> TryInto<U> for T where
    U: TryFrom<T>, 
[src]

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.