A VecDeque (and Vec) variant that spreads resize load across pushes.
Most vector-like implementations, such as
VecDeque, must occasionally "resize" the
backing memory for the vector as the number of elements grows. This means allocating a new
vector (usually of twice the size), and moving all the elements from the old vector to the new
one. As your vector gets larger, this process takes longer and longer.
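As a rough illustration of the all-at-once strategy (not this crate's code; `Grower` is a made-up name, and a `Vec` stands in for the raw backing buffer):

```rust
// Sketch of the all-at-once doubling strategy described above.
pub struct Grower<T> {
    buf: Vec<T>, // stand-in for the raw backing buffer
}

impl<T> Grower<T> {
    pub fn new() -> Self {
        Grower { buf: Vec::new() }
    }

    pub fn push(&mut self, value: T) {
        if self.buf.len() == self.buf.capacity() {
            // The occasional slow push: allocate a buffer of twice the
            // size and move every existing element across in one go.
            // This step is O(n), and n keeps growing.
            let mut bigger = Vec::with_capacity((self.buf.capacity() * 2).max(1));
            bigger.extend(self.buf.drain(..));
            self.buf = bigger;
        }
        self.buf.push(value);
    }

    pub fn len(&self) -> usize {
        self.buf.len()
    }

    pub fn get(&self, index: usize) -> Option<&T> {
        self.buf.get(index)
    }
}
```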
For most applications, this behavior is fine: if a very small number of pushes take longer than the rest, the application won't even notice. And if the vector is relatively small anyway, even those "slow" pushes are quite fast. Likewise, if your vector grows for a while and then stops growing, the "steady state" of your application won't see any resizing pauses at all.
Where resizing becomes a problem is in applications that use vectors to keep ever-growing state where tail latency is important. At large scale, it is simply not okay for one push to take 30 milliseconds when most take double-digit nanoseconds. Worse yet, these resize pauses can compound to create significant spikes in tail latency.
This crate implements a technique referred to as "incremental resizing", in contrast to the common "all-at-once" approach outlined above. At its core, the idea is simple: instead of moving all the elements to the resized vector immediately, move a couple each time a push happens. This spreads the cost of moving the elements across pushes, so each push becomes a little slower until the resize has finished, instead of one push becoming a lot slower.
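The core of the technique can be sketched like this (a simplified model, not this crate's actual implementation; the names `Incremental` and `MOVES_PER_PUSH` are made up). Moving elements from the back of the old buffer to the front of the new one preserves order, since leftovers in `old` logically precede everything in `new`:

```rust
use std::collections::VecDeque;

// How many leftover elements to carry over on each push. Any value >= 2
// ensures the old buffer drains before the new one fills up again.
const MOVES_PER_PUSH: usize = 4;

pub struct Incremental<T> {
    old: VecDeque<T>, // leftovers from before the resize (logically first)
    new: VecDeque<T>, // the larger buffer that elements migrate into
}

impl<T> Incremental<T> {
    pub fn new() -> Self {
        Incremental {
            old: VecDeque::new(),
            new: VecDeque::new(),
        }
    }

    pub fn push(&mut self, value: T) {
        if self.old.is_empty() && self.new.len() == self.new.capacity() {
            // Start a resize: keep the full buffer around as `old` and
            // swap in an empty buffer of twice the capacity.
            let bigger = VecDeque::with_capacity((self.new.capacity() * 2).max(1));
            self.old = std::mem::replace(&mut self.new, bigger);
        }
        // Instead of moving everything at once, carry a few leftovers on
        // every push. Back-of-old to front-of-new keeps the order intact.
        for _ in 0..MOVES_PER_PUSH {
            match self.old.pop_back() {
                Some(v) => self.new.push_front(v),
                None => break,
            }
        }
        self.new.push_back(value);
    }

    pub fn len(&self) -> usize {
        self.old.len() + self.new.len()
    }

    pub fn get(&self, index: usize) -> Option<&T> {
        if index < self.old.len() {
            self.old.get(index)
        } else {
            self.new.get(index - self.old.len())
        }
    }
}
```

With `MOVES_PER_PUSH` set to 4, the old buffer is guaranteed to drain well before the new (twice-as-large) buffer fills up, so at most one resize is ever in flight.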
This approach isn't free, however. While the resize is going on, the old vector must be kept around (so memory isn't reclaimed immediately), and iterators and other vector-wide operations must access both vectors, which makes them slower. Only once the resize completes is the old vector reclaimed and full performance restored.
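For example, iterating while a resize is in flight has to visit both buffers, in logical order (a standalone sketch; the split into `old` and `new` mirrors the description above):

```rust
use std::collections::VecDeque;

// During a resize, leftovers in `old` logically precede everything in
// `new`, so a vector-wide operation like iteration must chain the two.
// This extra indirection is the (small) per-operation cost.
fn iter<'a, T>(old: &'a VecDeque<T>, new: &'a VecDeque<T>) -> impl Iterator<Item = &'a T> {
    old.iter().chain(new.iter())
}
```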
To help you decide whether this implementation is right for you, here's a handy reference for how this implementation compares to the standard library vectors:
- Pushes all take approximately the same time. After a resize, they will be slower for a while, but only by a relatively small factor.
- Memory is not reclaimed immediately upon resize.
- Access operations are marginally slower as they must check two vectors.
- The incremental vector is slightly larger on the stack.
- The "efficiency" of the resize is slightly lower: the all-at-once resize moves the items from the small vector to the large one in a single batch, whereas the incremental resize moves them through a series of individual pushes.
Also, since this crate must keep two vectors, it cannot guarantee that the elements are stored
in one contiguous chunk of memory. Because it must move elements between the two vectors
without losing their order, it is backed by VecDeques, so the elements are not contiguous
even after the resize has completed. For this reason, this crate presents an interface that
resembles VecDeque more than Vec. If you need contiguous memory, there is no good way to do
incremental resizing that I'm aware of without low-level memory-mapping magic.
We make the vector atone with more expensive pushes for the sin it committed by resizing.