Concurrent Hash Map
This package implements a concurrent hash map.
Quoting from [Wikipedia][pds]:
> A data structure is partially persistent if all versions can be accessed but only the newest version can be modified. The data structure is fully persistent if every version can be both accessed and modified. If there is also a meld or merge operation that can create a new version from two previous versions, the data structure is called confluently persistent. Structures that are not persistent are called ephemeral data structures.
This implementation of a hash map cannot be strictly classified under any of the above definitions. It supports concurrent writes, using atomic Load, Store and CAS operations under the hood, but does not provide point-in-time snapshots for transactional or iterative operations.
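As a rough illustration of how atomic Load, Store and CAS allow concurrent writes without locks, here is a minimal sketch using std atomics; it shows the compare-and-swap retry-loop pattern in general, not this crate's internals:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Illustration only: the compare-and-swap (CAS) retry loop that
// lock-free writers rely on. Each thread increments a shared counter
// `per_thread` times without taking a lock.
fn concurrent_increment(threads: usize, per_thread: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Load the current value, try to install the update,
                    // and retry if another writer got there first.
                    let mut cur = counter.load(Ordering::Relaxed);
                    while let Err(actual) = counter.compare_exchange_weak(
                        cur,
                        cur + 1,
                        Ordering::SeqCst,
                        Ordering::Relaxed,
                    ) {
                        cur = actual;
                    }
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::SeqCst)
}

fn main() {
    // No increments are lost despite concurrent writers.
    println!("{}", concurrent_increment(4, 1000));
}
```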
If point-in-time snapshots are needed, refer to the [ppom] package, which implements an ordered map with multi-reader concurrency and serialised writes.
- Each entry in a [Map] instance corresponds to a {Key, Value} pair.
- Parametrised over `key-type` and `value-type`.
- Parametrised over a hash-builder for application-defined hashing.
- API: `set()`, `get()` and `remove()` using a key.
- Uses the ownership model and borrow semantics to ensure safety.
- Implements custom epoch-based garbage collection to handle write concurrency and memory optimization.
- No durability guarantee.
- Thread safe for both concurrent writes and concurrent reads.
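The epoch-based garbage collection mentioned above can be sketched as a toy model: readers pin the current epoch while accessing shared data, writers retire values that are no longer reachable, and retired values are dropped only once no pinned reader could still observe them. All names here (`EpochGc`, `pin`, `retire`) are illustrative, not this crate's API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Toy sketch of epoch-based reclamation (hypothetical, single-purpose;
// not this crate's actual implementation).
struct EpochGc {
    global_epoch: AtomicUsize,
    // Epoch at which each registered reader was last seen pinned;
    // usize::MAX means the reader is not currently pinned.
    reader_epochs: Vec<AtomicUsize>,
    // Values retired at some epoch, awaiting reclamation.
    limbo: Mutex<Vec<(usize, String)>>,
}

impl EpochGc {
    fn new(readers: usize) -> Self {
        EpochGc {
            global_epoch: AtomicUsize::new(1),
            reader_epochs: (0..readers).map(|_| AtomicUsize::new(usize::MAX)).collect(),
            limbo: Mutex::new(Vec::new()),
        }
    }

    // Reader enters a critical section: record the epoch it sees.
    fn pin(&self, reader: usize) {
        let e = self.global_epoch.load(Ordering::SeqCst);
        self.reader_epochs[reader].store(e, Ordering::SeqCst);
    }

    // Reader leaves the critical section.
    fn unpin(&self, reader: usize) {
        self.reader_epochs[reader].store(usize::MAX, Ordering::SeqCst);
    }

    // Writer retires a value: unreachable to new readers, but possibly
    // still referenced by readers pinned in the current epoch.
    fn retire(&self, value: String) {
        let e = self.global_epoch.load(Ordering::SeqCst);
        self.limbo.lock().unwrap().push((e, value));
    }

    // Advance the epoch and drop values no pinned reader can see.
    // Returns how many values were reclaimed.
    fn collect(&self) -> usize {
        self.global_epoch.fetch_add(1, Ordering::SeqCst);
        let min_pinned = self
            .reader_epochs
            .iter()
            .map(|e| e.load(Ordering::SeqCst))
            .min()
            .unwrap_or(usize::MAX);
        let mut limbo = self.limbo.lock().unwrap();
        let before = limbo.len();
        // Safe to drop anything retired strictly before the oldest
        // pinned reader's epoch.
        limbo.retain(|(epoch, _)| *epoch >= min_pinned);
        before - limbo.len()
    }
}

fn main() {
    let gc = EpochGc::new(2);
    gc.pin(0);                       // reader 0 active in epoch 1
    gc.retire("old-value".into());   // retired in epoch 1
    assert_eq!(gc.collect(), 0);     // reader 0 may still see it
    gc.unpin(0);
    assert_eq!(gc.collect(), 1);     // now reclaimable
    println!("ok");
}
```

Production implementations batch retirements per epoch and amortise the reader scan; the sketch keeps only the core invariant.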
Refer to rustdoc for details.
Performance
Initial load of 1 million, 10 million and 100 million items
Initial load is single threaded, with `u32` keys and `u64` values, using `U32Hasher`.
Get operations of 1 million items on a 10 million item data set
Initial load is single threaded; the subsequent incremental load is repeated with 1, 2, 4, 8 and 16 threads, with `u32` keys and `u64` values, using `U32Hasher`.
Set operations of 1 million items on a 10 million item data set
Initial load is single threaded; the subsequent incremental load is repeated with 1, 2, 4, 8 and 16 threads, with `u32` keys and `u64` values, using `U32Hasher`.
Mixed load of 1 million gets, 50K sets and 50K deletes on a 10 million item data set
Initial load is single threaded; the subsequent incremental load is repeated with 1, 2, 4, 8 and 16 threads, with `u32` keys and `u64` values, using `U32Hasher`.
Useful links
Contribution
- Simple workflow. Fork, modify and raise a pull request.
- Before making a PR,
  - Run `make build` to confirm all variants of the build pass with 0 warnings and 0 errors.
  - Run `check.sh` with 0 warnings, 0 errors and all test cases passing.
  - Run `perf.sh` with 0 warnings, 0 errors and all test cases passing.
  - Run `cargo +nightly clippy --all-targets --all-features` to fix clippy issues.
  - [Install][spellcheck] and run `cargo spellcheck` to remove common spelling mistakes.
- Developer Certificate of Origin is preferred.