Crate bzip2


Bzip compression for Rust

This library contains bindings to libbz2 to support bzip compression and decompression for Rust. The streams offered in this library are primarily found in the reader and writer modules. Both compressors and decompressors are available in each module depending on what operation you need.

Access to the raw decompression/compression stream is also provided through the raw module which has a much closer interface to libbz2.

Example

use std::io::prelude::*;
use bzip2::Compression;
use bzip2::read::{BzEncoder, BzDecoder};

// Round trip some bytes from a byte source, into a compressor, into a
// decompressor, and finally into a vector.
let data = "Hello, World!".as_bytes();
let compressor = BzEncoder::new(data, Compression::best());
let mut decompressor = BzDecoder::new(compressor);

let mut contents = String::new();
decompressor.read_to_string(&mut contents).unwrap();
assert_eq!(contents, "Hello, World!");

Multistreams (e.g. Wikipedia or pbzip2)

Some tools such as pbzip2, and data from sources such as Wikipedia, are encoded as so-called bzip2 “multistreams,” meaning they contain back-to-back chunks of bzip’d data. BzDecoder does not attempt to decode anything after the first bzip chunk in the source stream. Thus, if you wish to decode all bzip chunks from the input until end of file, use MultiBzDecoder.

Protip: If you use BzDecoder to decode data and the output is incomplete and exactly 900K bytes, you probably need a MultiBzDecoder.

Async I/O

This crate optionally can support async I/O streams with the Tokio stack via the tokio feature of this crate:

bzip2 = { version = "0.4", features = ["tokio"] }

All methods are internally capable of working with streams that may return ErrorKind::WouldBlock when they’re not ready to perform the particular operation.

Note, however, that care needs to be taken when using these objects. The Tokio runtime, in particular, requires that data is fully flushed before streams are dropped. For compatibility with blocking streams, all streams are flushed/written when they are dropped, and this is not always a suitable time to perform I/O. If I/O streams are flushed before drop, however, then these operations will be a no-op.

Modules

bufread

I/O streams for wrapping BufRead types as encoders/decoders

read

Reader-based compression/decompression streams

write

Writer-based compression/decompression streams

Structs

Compress

Representation of an in-memory compression stream.

Compression

When compressing data, the compression level can be specified by a value in this enum.

Decompress

Representation of an in-memory decompression stream.

Enums

Action

Possible actions to take on compression.

Error

Fatal errors encountered when compressing/decompressing bytes.

Status

Result of compression or decompression.