//! A `HashMap` variant that spreads resize load across inserts.
//!
//! Most hash table implementations (including [`hashbrown`], the one in Rust's standard library)
//! must occasionally "resize" the backing memory for the map as the number of elements grows.
//! This means allocating a new table (usually of twice the size), and moving all the elements from
//! the old table to the new one. As your table gets larger, this process takes longer and longer.
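//!
//! You can watch this happen with the standard library's map: whenever `capacity` jumps, the
//! insert that triggered the jump is the one that paid for allocating a larger table and moving
//! every existing element into it (a small illustration only, not part of this crate's API):
//!
//! ```
//! use std::collections::HashMap;
//!
//! let mut map = HashMap::new();
//! let mut capacity = map.capacity();
//! for i in 0..1000 {
//!     map.insert(i, i);
//!     if map.capacity() != capacity {
//!         // A resize just happened: a larger table was allocated, and this one
//!         // insert also paid for moving all the existing elements into it.
//!         capacity = map.capacity();
//!     }
//! }
//! ```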
//!
//! For most applications, this behavior is fine — if some very small number of inserts take longer
//! than others, the application won't even notice. And if the map is relatively small anyway, even
//! those "slow" inserts are quite fast. Similarly, if your map grows for a while, and then _stops_
//! growing, the "steady state" of your application won't see any resizing pauses at all.
//!
//! Where resizing becomes a problem is in applications that use maps to keep ever-growing state
//! where tail latency is important. At large scale, it is simply not okay for one map insert to
//! take 30 milliseconds when most take single-digit **micro**seconds. Worse yet, these resize
//! pauses can compound to create [significant spikes] in tail latency.
//!
//! This crate implements a technique referred to as "incremental resizing", in contrast to the
//! common "all-at-once" approach outlined above. At its core, the idea is pretty simple: instead
//! of moving all the elements to the resized map immediately, move a couple each time an insert
//! happens. This spreads the cost of moving the elements so that _each_ insert becomes a little
//! slower until the resize has finished, instead of _one_ insert becoming a _lot_ slower.
//!
//! This approach isn't free, however. While the resize is going on, the old table must be
//! kept around (so memory isn't reclaimed immediately), and all reads must check both the old and
//! new map, which makes them slower. Only once the resize completes is the old table reclaimed and
//! full read performance restored.
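//!
//! As a rough sketch of the idea (this is *not* griddle's actual implementation, which works on
//! `hashbrown`'s raw table buckets; the `Amortized` type below is purely illustrative), an
//! in-progress resize keeps the leftover old entries around, each insert carries a couple of them
//! over, and reads consult both tables:
//!
//! ```
//! use std::collections::HashMap;
//! use std::hash::Hash;
//!
//! /// A toy map that is mid-resize: `leftover` holds entries that have not yet
//! /// been carried over into the larger `resized` table.
//! struct Amortized<K, V> {
//!     leftover: Vec<(K, V)>,
//!     resized: HashMap<K, V>,
//! }
//!
//! impl<K: Hash + Eq, V> Amortized<K, V> {
//!     fn insert(&mut self, key: K, value: V) {
//!         // New entries always go straight into the resized table.
//!         self.resized.insert(key, value);
//!         // Also carry over a couple of old entries, so the cost of the resize
//!         // is spread across many inserts instead of being paid all at once.
//!         for _ in 0..2 {
//!             if let Some((k, v)) = self.leftover.pop() {
//!                 self.resized.insert(k, v);
//!             }
//!         }
//!     }
//!
//!     fn get(&self, key: &K) -> Option<&V> {
//!         // Until the resize finishes, reads must consult both tables.
//!         self.resized
//!             .get(key)
//!             .or_else(|| self.leftover.iter().find(|(k, _)| k == key).map(|(_, v)| v))
//!     }
//! }
//! ```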
//!
//! To help you decide whether this implementation is right for you, here's a handy reference for
//! how this implementation compares to the standard library map:
//!
//! - Inserts all take approximately the same time.
//!   After a resize, they will be slower for a while, but only by a relatively small factor.
//! - Memory is not reclaimed immediately upon resize.
//! - Reads and removals of **old** or **missing** keys are slower for a while after a resize.
//! - The incremental map is slightly larger on the stack.
//! - The "efficiency" of the resize is slightly lower, as the all-at-once resize moves the items
//!   from the small table to the large one in batch, whereas the incremental resize does a series
//!   of inserts.
//!
//! Under the hood, griddle uses `hashbrown::raw` to avoid re-implementing the core pieces of
//! `hashbrown`. It also stays _very_ close to `hashbrown`'s `HashMap` and `HashSet` wrappers, and
//! even exposes a [`raw`] module so that you can build your own map on top of griddle's modified
//! table behavior.
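//!
//! For example, the `HashMap` wrapper can be used much like the standard library's (a brief
//! sketch; it assumes only the usual `new`/`insert`/`get` methods, which the `hashbrown`-style
//! wrapper provides):
//!
//! ```
//! use griddle::HashMap;
//!
//! let mut map = HashMap::new();
//! map.insert("hashbrowns", 1);
//! map.insert("griddles", 2);
//! assert_eq!(map.get("hashbrowns"), Some(&1));
//! ```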
//!
//! # Why "griddle"?
//!
//! You can amortize the cost of making hashbrowns by using a griddle..?
//!
//! [`hashbrown`]: https://crates.io/crates/hashbrown
//! [significant spikes]: https://twitter.com/jonhoo/status/1277618908355313670

// this is #[cfg(test)] so that Rust 1.42 can still compile the crate.
// tests don't have to commit to MSRV though, so 1.52 is fine there.
// hashbrown does this to avoid LLVM IR bloat in a few places.
#[cfg(test)]
extern crate std;

extern crate alloc;
/// Experimental and unsafe `RawTable` API. This module is only available if the
/// `raw` feature is enabled.
#[cfg(feature = "raw")]
pub mod raw;

pub use crate::map::HashMap;
pub use crate::set::HashSet;
pub use hashbrown::TryReserveError;