Module async_std::sync

Synchronization primitives.

This module is an async version of std::sync.

The need for synchronization

async-std's sync primitives are scheduler-aware, making it possible to .await their operations, for example the locking of a Mutex.

Conceptually, a Rust program is a series of operations which will be executed on a computer. The timeline of events happening in the program is consistent with the order of the operations in the code.

Consider the following code, operating on some global static variables:

static mut A: u32 = 0;
static mut B: u32 = 0;
static mut C: u32 = 0;

fn main() {
    unsafe {
        A = 3;
        B = 4;
        A = A + B;
        C = B;
        println!("{} {} {}", A, B, C);
        C = A;
    }
}

It appears as if some variables stored in memory are changed, an addition is performed, the result is stored in A, and the variable C is modified twice.

When only a single thread is involved, the results are as expected: the line 7 4 4 gets printed.

As for what happens behind the scenes, when optimizations are enabled, the final generated machine code might look very different from the code:

  • The first store to C might be moved before the store to A or B, as if we had written C = 4; A = 3; B = 4.

  • Assignment of A + B to A might be removed, since the sum can be stored in a temporary location until it gets printed, with the global variable never getting updated.

  • The final result could be determined just by looking at the code at compile time, so constant folding might turn the whole block into a simple println!("7 4 4").

The compiler is allowed to perform any combination of these optimizations, as long as the final optimized code, when executed, produces the same results as the one without optimizations.

Due to the concurrency involved in modern computers, assumptions about the program's execution order are often wrong. Access to global variables can lead to nondeterministic results, even if compiler optimizations are disabled, and it is still possible to introduce synchronization bugs.

Note that thanks to Rust's safety guarantees, accessing global (static) variables requires unsafe code, assuming we don't use any of the synchronization primitives in this module.

Out-of-order execution

Instructions can execute in a different order from the one we define, due to various reasons:

  • The compiler reordering instructions: If the compiler can issue an instruction at an earlier point, it will try to do so. For example, it might hoist memory loads at the top of a code block, so that the CPU can start prefetching the values from memory.

    In single-threaded scenarios, this can cause issues when writing signal handlers or certain kinds of low-level code. Use compiler fences to prevent this reordering.

  • A single processor executing instructions out-of-order: Modern CPUs are capable of superscalar execution, i.e., multiple instructions might be executing at the same time, even though the machine code describes a sequential process.

    This kind of reordering is handled transparently by the CPU.

  • A multiprocessor system executing multiple hardware threads at the same time: In multi-threaded scenarios, you can use two kinds of primitives to deal with synchronization:

    • memory fences to ensure memory accesses are made visible to other CPUs in the right order.
    • atomic operations to ensure simultaneous access to the same memory location doesn't lead to undefined behavior.

Higher-level synchronization objects

Most of the low-level synchronization primitives are quite error-prone and inconvenient to use, which is why async-std also exposes some higher-level synchronization objects.

These abstractions can be built out of lower-level primitives. For efficiency, the sync objects in async-std are usually implemented with help from the scheduler, which is able to reschedule the tasks while they are blocked on acquiring a lock.

The following is an overview of the available synchronization objects:

  • Arc: Atomically Reference-Counted pointer, which can be used in multithreaded environments to prolong the lifetime of some data until all the threads have finished using it.

  • Barrier: Ensures multiple tasks will wait for each other to reach a point in the program, before continuing execution all together.

  • channel: Multi-producer, multi-consumer queues, used for message-based communication. Can provide a lightweight inter-task synchronization mechanism, at the cost of some extra memory.

  • Mutex: Mutual exclusion mechanism, which ensures that at most one task at a time is able to access some data.

  • RwLock: Provides a mutual exclusion mechanism which allows multiple readers at the same time, while allowing only one writer at a time. In some cases, this can be more efficient than a mutex.
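async-std's RwLock exposes the same reader-writer semantics as the blocking std::sync::RwLock, except that acquiring a lock is .awaited rather than blocking the thread. As a rough illustration of those semantics using the std version:

```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(5);

    {
        // Any number of readers may hold the lock at the same time.
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap();
        assert_eq!(*r1 + *r2, 10);
    } // both read guards are dropped here, releasing the lock

    {
        // Only one writer may hold the lock, with no concurrent readers.
        let mut w = lock.write().unwrap();
        *w += 1;
        assert_eq!(*w, 6);
    }
}
```

With async_std::sync::RwLock the calls become lock.read().await and lock.write().await, and the guards are returned directly rather than wrapped in a Result, since async-std's locks do not poison.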

Examples

Spawn a task that updates an integer protected by a mutex:

use std::sync::Arc;

use async_std::sync::Mutex;
use async_std::task;

fn main() {
    task::block_on(async {
        let m1 = Arc::new(Mutex::new(0));
        let m2 = m1.clone();

        task::spawn(async move {
            *m2.lock().await = 1;
        })
        .await;

        assert_eq!(*m1.lock().await, 1);
    });
}

Structs

Arc

A thread-safe reference-counting pointer. 'Arc' stands for 'Atomically Reference Counted'.

Barrier (unstable)

A barrier enables multiple tasks to synchronize the beginning of some computation.

BarrierWaitResult (unstable)

A BarrierWaitResult is returned by wait when all tasks in the Barrier have rendezvoused.

Mutex

A mutual exclusion primitive for protecting shared data.

MutexGuard

A guard that releases the lock when dropped.

Receiver (unstable)

The receiving side of a channel.

RwLock

A reader-writer lock for protecting shared data.

RwLockReadGuard

A guard that releases the read lock when dropped.

RwLockWriteGuard

A guard that releases the write lock when dropped.

Sender (unstable)

The sending side of a channel.

Weak

Weak is a version of Arc that holds a non-owning reference to the managed allocation. The allocation is accessed by calling upgrade on the Weak pointer, which returns an Option<Arc<T>>.

Functions

channel (unstable)

Creates a bounded multi-producer multi-consumer channel.