concurrent 0.1.0

Hazard-pointer-based concurrent memory reclamation.

concurrent — An efficient concurrent reclamation system

concurrent builds upon hazard pointers to create an extremely performant system for concurrently handling memory. It is more general and convenient than epoch-based reclamation, and often faster as well.

Why?

aturon's blog post explains the issues of concurrent memory handling very well, although it is centered on epoch-based reclamation, to which this crate is an alternative.

The gist is that most concurrent data structures need to delete objects (otherwise they would leak memory), but cannot safely do so, as there is no way to know whether another thread is currently accessing the object in question. This crate (like other reclamation systems) provides a solution to that problem.

Usage

While the low-level API is available, it is generally sufficient to use the concurrent::Cell abstraction. This acts much like familiar Rust APIs. It allows the programmer to concurrently access a value through references, as well as update it, and more. Refer to the respective docs for more information.
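To make the access pattern concrete, here is a minimal std-only stand-in for this kind of API. It is not the crate's implementation (the real Cell is lock-free via hazard pointers; this sketch uses a Mutex and Arc purely to show the semantics of reading through a handle while another thread replaces the value):

```rust
use std::sync::{Arc, Mutex};

// Conceptual stand-in for a Cell-like abstraction (names hypothetical).
struct Cell<T> {
    inner: Mutex<Arc<T>>,
}

impl<T> Cell<T> {
    fn new(value: T) -> Self {
        Cell { inner: Mutex::new(Arc::new(value)) }
    }

    // Get a reference-counted handle to the current value.
    fn get(&self) -> Arc<T> {
        self.inner.lock().unwrap().clone()
    }

    // Replace the value; readers holding an old handle keep it alive.
    fn set(&self, value: T) {
        *self.inner.lock().unwrap() = Arc::new(value);
    }
}

fn main() {
    let cell = Cell::new(1);
    let old = cell.get();
    cell.set(2);
    assert_eq!(*old, 1);        // old readers still see the old value
    assert_eq!(*cell.get(), 2); // new readers see the update
}
```

The key property, shared with the real abstraction, is that replacing the value never invalidates references a concurrent reader already holds.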

Why not crossbeam/epochs?

Epoch-based reclamation has some unfortunate issues. It cannot make progress while an epoch is constantly active: it assumes that, at some point, each thread holds no references to shared objects, which is true in some cases, but not always. This makes it unsuitable for many things, such as event loops.

Furthermore, ending an epoch requires some relatively expensive operations, whereas concurrent (like most hazard pointer implementations) need not do this, as it can reuse the "epochs" (in this case "hazards") later.

While I have no benchmarks yet, the tests I've made (on skiplists and other structures) generally show that concurrent outperforms crossbeam in most cases.

Internals

It is based on hazard pointers, although there are several differences. The idea is essentially that the system keeps track of some number of "hazards". As long as a hazard protects some object, the object cannot be deleted.

Once in a while, a thread performs a garbage collection by scanning the hazards and finding the objects not currently protected by any hazard. These objects are then deleted.
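The scan step above can be sketched in a few lines. This is a deliberately single-threaded, std-only illustration (not the crate's code): hazard slots hold the address an active reader protects, and a collection retains exactly the retired allocations whose address appears in some slot:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Fixed set of hazard slots; 0 means the slot is empty.
static HAZARDS: [AtomicUsize; 4] = [
    AtomicUsize::new(0),
    AtomicUsize::new(0),
    AtomicUsize::new(0),
    AtomicUsize::new(0),
];

// An address is protected if any hazard slot currently holds it.
fn protected(addr: usize) -> bool {
    HAZARDS.iter().any(|h| h.load(Ordering::SeqCst) == addr)
}

// Scan: keep retired boxes that are still protected, free the rest.
// Returns how many objects were freed.
fn collect(garbage: &mut Vec<Box<u32>>) -> usize {
    let before = garbage.len();
    garbage.retain(|b| protected(&**b as *const u32 as usize));
    before - garbage.len()
}

fn main() {
    let a = Box::new(1u32);
    let b = Box::new(2u32);
    // A reader announces it is using `a` by publishing its address.
    HAZARDS[0].store(&*a as *const u32 as usize, Ordering::SeqCst);

    let mut garbage = vec![a, b];
    // `a` is protected by a hazard, so only `b` is freed.
    assert_eq!(collect(&mut garbage), 1);
    assert_eq!(garbage.len(), 1);

    // Reader done: clear the hazard, the next cycle frees `a` too.
    HAZARDS[0].store(0, Ordering::SeqCst);
    assert_eq!(collect(&mut garbage), 1);
    assert!(garbage.is_empty());
}
```

A real implementation must also order the hazard publication against re-reading the pointer it protects; that subtlety is omitted here.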

To improve performance, we use a layered approach: both garbage (objects to be deleted eventually) and hazards are cached thread-locally. This reduces the number of atomic operations and cache misses.
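The thread-local garbage layer might look like the following sketch (names and the export threshold are hypothetical, not the crate's internals): retired addresses accumulate in a cheap thread-local buffer and are exported to the shared, synchronized list only when the buffer fills, so most frees touch no shared state at all:

```rust
use std::cell::RefCell;
use std::sync::Mutex;

// Shared garbage list, guarded by a lock (the crate uses atomics instead;
// a Mutex keeps this sketch simple).
static GLOBAL_GARBAGE: Mutex<Vec<usize>> = Mutex::new(Vec::new());

// Export to the global list once this many items pile up locally.
const EXPORT_THRESHOLD: usize = 8;

thread_local! {
    static LOCAL_GARBAGE: RefCell<Vec<usize>> = RefCell::new(Vec::new());
}

// Retire an address: push locally, and only occasionally pay for
// synchronization by flushing the whole batch at once.
fn retire(addr: usize) {
    LOCAL_GARBAGE.with(|local| {
        let mut local = local.borrow_mut();
        local.push(addr);
        if local.len() >= EXPORT_THRESHOLD {
            // One lock per EXPORT_THRESHOLD frees instead of one per free.
            GLOBAL_GARBAGE.lock().unwrap().append(&mut local);
        }
    });
}

fn main() {
    for addr in 0..10 {
        retire(addr);
    }
    // The first 8 retirements were exported in one batch; 2 remain local.
    assert_eq!(GLOBAL_GARBAGE.lock().unwrap().len(), 8);
    LOCAL_GARBAGE.with(|l| assert_eq!(l.borrow().len(), 2));
}
```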

Garbage collection

Garbage collection of the concurrently managed objects is performed automatically after every n frees, where n is drawn from some probability distribution.
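One way such a trigger can be structured is sketched below (the distribution, its range, and the PRNG are all illustrative assumptions, not the crate's choices): count frees and start a collection once the count passes a randomly drawn threshold, which also avoids threads collecting in lockstep:

```rust
// Per-thread trigger: collect after a randomized number of frees.
struct GcTrigger {
    frees: u32,
    threshold: u32,
    rng_state: u64,
}

impl GcTrigger {
    fn new(seed: u64) -> Self {
        let mut t = GcTrigger { frees: 0, threshold: 0, rng_state: seed };
        t.threshold = t.draw();
        t
    }

    // Tiny xorshift PRNG standing in for a real distribution.
    fn draw(&mut self) -> u32 {
        self.rng_state ^= self.rng_state << 13;
        self.rng_state ^= self.rng_state >> 7;
        self.rng_state ^= self.rng_state << 17;
        // Threshold uniform in 64..128 frees (range is illustrative).
        64 + (self.rng_state % 64) as u32
    }

    // Returns true when this free should kick off a collection cycle.
    fn on_free(&mut self) -> bool {
        self.frees += 1;
        if self.frees >= self.threshold {
            self.frees = 0;
            self.threshold = self.draw();
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut trigger = GcTrigger::new(0x9E3779B97F4A7C15);
    let collections = (0..1000).filter(|_| trigger.on_free()).count();
    // With thresholds in 64..128, 1000 frees yield roughly 7 to 15 cycles.
    assert!(collections >= 7 && collections <= 15);
}
```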

Note that a garbage collection cycle might not clear all objects. For example, some objects could be protected by hazards. Others might not have been exported from the thread-local cache yet.