Crate stream_vbyte

Encode and decode u32s with the Stream VByte format.

There are two traits, Encoder and Decoder, that allow you to choose what logic to use in the inner hot loops.
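To make the format concrete, here is a minimal standalone sketch (not this crate's API) of how Stream VByte packs a group of four u32s: one control byte holds four 2-bit length codes, and the variable-length little-endian data bytes follow. The exact bit ordering within the control byte is an assumption for illustration.

```rust
// Sketch of the Stream VByte layout for one group of four numbers.
// Each u32 is stored in 1..=4 little-endian bytes; its 2-bit code in the
// control byte records (length - 1).
fn encode_quad(nums: [u32; 4]) -> (u8, Vec<u8>) {
    let mut control = 0u8;
    let mut data = Vec::new();
    for (i, &n) in nums.iter().enumerate() {
        // number of significant bytes, with 0 still taking one byte
        let len = if n == 0 { 1 } else { (4 - n.leading_zeros() / 8) as usize };
        control |= ((len - 1) as u8) << (i * 2);
        data.extend_from_slice(&n.to_le_bytes()[..len]);
    }
    (control, data)
}

fn decode_quad(control: u8, data: &[u8]) -> [u32; 4] {
    let mut out = [0u32; 4];
    let mut pos = 0;
    for i in 0..4 {
        let len = ((control >> (i * 2)) & 0b11) as usize + 1;
        let mut buf = [0u8; 4];
        buf[..len].copy_from_slice(&data[pos..pos + len]);
        out[i] = u32::from_le_bytes(buf);
        pos += len;
    }
    out
}
```

Because all four lengths are known from a single control byte before any data byte is touched, a SIMD decoder can reconstruct the whole group with one shuffle, which is what the accelerated implementations below exploit.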

The simple, pretty fast way

Use Scalar for your Encoder and Decoder. It will work on all hardware, and is fast enough that most people will probably never notice the time taken to encode/decode.

The more complex, really fast way

If you can use nightly Rust (currently needed for SIMD) and you know which hardware you'll be running on, or you can add runtime detection of CPU features, you can choose to use an implementation that takes advantage of your hardware. Something like raw-cpuid will probably be useful for runtime detection.
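As an alternative sketch of runtime detection, stable Rust's standard library provides the `std::is_x86_feature_detected!` macro on x86 targets; the string returned here is purely illustrative of how you might pick a codec.

```rust
// Sketch: choose a decoder name at runtime. The returned strings are
// illustrative labels, not items exported by this crate.
fn pick_decoder() -> &'static str {
    #[cfg(target_arch = "x86_64")]
    {
        // std's runtime CPU feature check; raw-cpuid offers similar info
        if std::is_x86_feature_detected!("ssse3") {
            return "x86::Ssse3"; // would dispatch to the SSSE3 decoder
        }
    }
    "Scalar" // portable fallback
}
```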

Performance numbers are calculated on an E5-1650v3 on encoding/decoding 1 million random numbers at a time. You can run the benchmarks yourself to see how your hardware does.

Both Cargo features and target_feature are used because target_feature is not yet available in stable Rust, and this library should remain usable from stable Rust, so the non-stable-friendly parts are hidden behind Cargo features.


Encoders

Type | Performance | Hardware | target_feature | feature
Scalar | ≈140 million/s | All | none | none
x86::Sse41 | ≈1 billion/s | x86 with SSE4.1 (Penryn and above, 2008) | sse4.1 | x86_sse41


Decoders

Type | Performance | Hardware | target_feature | feature
Scalar | ≈140 million/s | All | none | none
x86::Ssse3 | ≈2.7 billion/s | x86 with SSSE3 (Woodcrest and above, 2006) | ssse3 | x86_ssse3

If you have a modern x86 and you want to use all of the SIMD-accelerated versions, you would set target_feature in a compiler invocation like this:

RUSTFLAGS='-C target-feature=+ssse3,+sse4.1' cargo ...

Meanwhile, features for your dependency on this crate are specified in your project's Cargo.toml.
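For example, a dependency section enabling both SIMD features might look like this (the version requirement is a placeholder; the feature names come from the tables above):

```toml
[dependencies]
# enable the nightly-only SIMD codepaths; pin a real version in practice
stream_vbyte = { version = "*", features = ["x86_ssse3", "x86_sse41"] }
```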


use stream_vbyte::*;

let nums: Vec<u32> = (0..12_345).collect();
let mut encoded_data = Vec::new();
// make some space to encode into
encoded_data.resize(5 * nums.len(), 0x0);

// use Scalar implementation that works on any hardware
let encoded_len = encode::<Scalar>(&nums, &mut encoded_data);
println!("Encoded {} u32s into {} bytes", nums.len(), encoded_len);

// decode all the numbers at once
let mut decoded_nums = Vec::new();
decoded_nums.resize(nums.len(), 0);
let bytes_decoded = decode::<Scalar>(&encoded_data, nums.len(), &mut decoded_nums);
assert_eq!(nums, decoded_nums);
assert_eq!(encoded_len, bytes_decoded);

// or maybe you want to skip some of the numbers while decoding
decoded_nums.resize(nums.len(), 0);
let mut cursor = DecodeCursor::new(&encoded_data, nums.len());
cursor.skip(10_000);
let count = cursor.decode::<Scalar>(&mut decoded_nums);
assert_eq!(12_345 - 10_000, count);
assert_eq!(&nums[10_000..], &decoded_nums[0..count]);
assert_eq!(encoded_len, cursor.input_consumed());


Panics

If you use undersized slices (e.g. encoding 10 numbers into 5 bytes), you will get the normal slice bounds check panics.
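The example above sizes its buffer at 5 bytes per number, which comfortably covers the worst case. A tighter bound can be computed by hand (this helper is my own sketch, not a crate function): each group of four numbers needs one control byte, and each number needs at most four data bytes.

```rust
// Upper bound on Stream VByte output size for `count` u32s:
// one control byte per group of 4 (rounded up), plus at most
// 4 data bytes per number.
fn max_encoded_len(count: usize) -> usize {
    let control_bytes = (count + 3) / 4;
    control_bytes + 4 * count
}
```

Sizing the output with this bound instead of `5 * nums.len()` avoids the panic while wasting slightly less space.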


Safety

SIMD code uses unsafe internally because many of the SIMD intrinsics are unsafe.

The Scalar codec does not use unsafe.



Modules

x86: x86-specific accelerated code.

Structs

DecodeCursor: Decode in user-selectable batch sizes. Also allows skipping numbers that you don't care about.
Scalar: Encoder/Decoder that works on every platform, at the cost of speed compared to the SIMD-accelerated versions.

Traits

Decoder: Decode bytes to numbers.
Encoder: Encode numbers to bytes.

Functions

decode: Decode count numbers from input, writing them to output.
encode: Encode the input slice into the output slice.