# mca

Reader/writer for Minecraft region files (`.mca`).

This library fully implements the region file format from Minecraft 1.2.1+, supporting both reading and writing regions in any way you like. Notably, it implements all compression schemes found in vanilla (GZip, Zlib, Uncompressed, LZ4), as well as custom compression algorithms for both compressing and decompressing (see below for examples). It's also one of the fastest (if not the fastest) `.mca` Rust libraries (see the benchmarks below).
## Installation

Add this to your `Cargo.toml`:

```toml
[dependencies]
mca = "2"
```
## Quick start

### Read

Reading region files is a very simple process. Chunks are automatically decompressed (though the raw, still-compressed data can be obtained via `.chunk_data(x, z)`). You only have to handle whether the chunk is generated or not (`Some(chunk)` or `None`).
```rust
use mca::RegionReader;

// hypothetical path to one of your region files
let file = std::fs::read("r.0.0.mca")?;
let mut region = RegionReader::new(&file)?;

// get the chunk at region-local coordinates (0, 0)
let chunk = region.chunk(0, 0)?;
if let Some(chunk) = chunk {
    // the chunk is generated and already decompressed
}
Ok::<(), Box<dyn std::error::Error>>(())
```
### Write

Writing is also quite easy: just pass in the coordinates, the NBT data in bytes, and the compression to use. Unsure which compression to use? Use `Compression::default()`, which uses Zlib, the default in Minecraft. More advanced writing can be done with `RegionWriter::write_packed`.
```rust
use mca::{Compression, RegionWriter};

let mut region = RegionWriter::new();

// `Vec::new()` would be your nbt data in bytes
region.set_chunk(0, 0, Vec::new(), Compression::default())?;

let mut file = std::fs::File::create("r.0.0.mca")?;
region.write(&mut file)?;
Ok::<(), Box<dyn std::error::Error>>(())
```
### Iterators

Often you might want to iterate over all chunk coordinates, or over all generated chunks within a region. This library comes with two different iterators to make this as easy as possible.
```rust
use mca::RegionReader;

let file = Vec::new(); // your region file bytes
let mut region = RegionReader::new(&file)?;

// Iterate over all generated chunks within a region
let mut iter = region.iter()?;
while let Some(chunk) = iter.next_available_chunk()? {
    // ...
}

// Iterate over all chunk coordinates within a region
// (the coordinate iterator's exact name is assumed here)
for (x, z) in mca::CoordIter::new() {
    // ...
}
Ok::<(), Box<dyn std::error::Error>>(())
```
Custom Compression
One thing that I haven't seen in any other .mca library is full support of the format.
This includes the rather obscure feature of using custom compression schemes for chunks.
The wiki states that a compression byte of 127 indicates a custom compression.
Where it's then followed by a prefixed string containing the id of the compression algorithm.
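That on-disk layout can be sketched as follows. This is an illustration, not this crate's API; the big-endian u16 length prefix for the id string is an assumption (Java `writeUTF`-style), so check the wiki for the authoritative encoding:

```rust
/// Sketch only: decode a chunk payload's compression scheme byte.
/// The u16 big-endian length prefix for the algorithm id is an
/// assumption based on Java's `writeUTF` encoding.
fn parse_compression(data: &[u8]) -> (u8, Option<String>) {
    let scheme = data[0];
    if scheme == 127 {
        // custom compression: a length-prefixed id string follows
        let len = u16::from_be_bytes([data[1], data[2]]) as usize;
        let id = String::from_utf8(data[3..3 + len].to_vec()).unwrap();
        (scheme, Some(id))
    } else {
        // vanilla schemes: 1 = GZip, 2 = Zlib, 3 = Uncompressed, 4 = LZ4
        (scheme, None)
    }
}
```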
Below is a tiny example; for a fully working LZMA2 example, look at `custom_compression.rs` and its tests.

Both `RegionReader` and `RegionWriter` default to `()` as their custom compression scheme, which returns `Err(CompressionError::Unsupported)` if it's ever called. A single implementation can support multiple compression schemes, as one of the arguments is the id itself, so you can match on it.
```rust
use mca::{RegionReader, RegionWriter};

// your type implementing the crate's custom (de)compression trait
struct MyCompression;

let file = Vec::new(); // your region file bytes
let mut reader = RegionReader::new_with_decompression(&file, MyCompression)?;
let mut writer = RegionWriter::new_with_compression(MyCompression);
Ok::<(), Box<dyn std::error::Error>>(())
```
## Parallel

The `.write` function on `RegionWriter` already compresses all chunks in parallel by default. This can be disabled by turning off default features in your `Cargo.toml`:

```toml
mca = { version = "2", default-features = false }
```
Reading chunks in parallel, however, requires a bit more manual work on your part. Mainly, we can't use the normal `.chunk(x, z)` or `.decompress(chunk)` functions on the region, as both make use of internal buffers that speed up single-threaded performance quite a bit. So we have to get each chunk's data and decompress it ourselves.
```rust
use mca::RegionReader;
use rayon::prelude::*;

let file = Vec::new(); // your region file bytes
let region = RegionReader::new(&file)?;

// fetch each chunk's raw data in parallel, then decompress it ourselves
let chunks = (0..1024)
    .into_par_iter()
    .map(|i| {
        // get the raw data via `chunk_data(x, z)`, then decompress it
        region.chunk_data(i % 32, i / 32)
    })
    .collect::<Result<Vec<_>, _>>()?;
Ok::<(), Box<dyn std::error::Error>>(())
```
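The parallel snippet above iterates a flat index over all 1024 chunks of a region; mapping that index to region-local chunk coordinates (a 32×32 grid) can be sketched as:

```rust
/// Map a flat chunk index (0..1024) to region-local (x, z) coordinates.
/// A region holds 32 × 32 chunks; x varies fastest, matching the
/// `(x & 31) + (z & 31) * 32` ordering of the region file header.
fn index_to_coords(i: usize) -> (usize, usize) {
    (i % 32, i / 32)
}
```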
## Reader to Writer

Sometimes you might want to read in a region file, modify its existing data, and write it back. To make this easier you can use `into_writer`, which converts a `RegionReader` into a `RegionWriter`. It only ever decompresses data that you modify with `set_chunk` or `chunk_mut`; any unmodified chunk remains compressed and untouched, ensuring maximal performance.

Note that you can also use `PackedChunks` with `RegionWriter::write_packed` to gain even more control over the data written, handling the compression etc. yourself.
```rust
use mca::{Compression, RegionReader};

let file = Vec::new(); // your region file bytes
let region = RegionReader::new(&file)?;

// we pass `()` to specify no custom compression
let mut writer = region.into_writer(())?;

if let Some(data) = writer.chunk_mut(0, 0)? {
    // here you would access `data.buf` and modify its nbt

    // change the chunk's compression to LZ4
    data.compression = Compression::LZ4;
}

let mut buf = Vec::new();
writer.write(&mut buf)?;
Ok::<(), Box<dyn std::error::Error>>(())
```
To find more examples and usage of the library, look at the tests at the bottom of any source file; `read.rs` and `write.rs` have some good examples in their tests.
## Benchmarks

A benchmark comparing `mca` against all the `.mca` parsers I could find. It measures reading a fully generated, Zlib-compressed region file, since some of these libraries don't support writing region files.
| Library | Throughput | Time (mean) |
|---|---|---|
| mca (2.0.0) | 310.71 MiB/s | 24.440 ms |
| anvil-nbt | 261.26 MiB/s | 29.066 ms |
| mca (1.1.0) | 216.61 MiB/s | 35.057 ms |
| mca-parser | 87.721 MiB/s | 86.567 ms |
| simple-anvil | 13.599 MiB/s | 558.41 ms |
When it comes to writing regions, this library can write at 147.71 MiB/s, or 51.374 ms to write a filled region with 1024 chunks. All benchmarks were run on a Hetzner AX52 dedicated server.
## Tests

Many of the tests are also really good examples of how to properly use the crate: how to read chunks in parallel, how to easily iterate over chunks, how to write a custom compression implementation, and more.

To actually run the tests and doctests, refer to the commands below. rayon can't be enabled for doctests, and some compression libtests can take a while, so we use release mode.

```sh
# Run libtests
cargo test --release --lib

# Run doctests
cargo test --doc --no-default-features
```
There are 50+ tests covering most, if not all, of the code in the crate, and plenty of different scenarios across 13 test region files, spanning 8 different versions, different compression schemes, and even corrupt regions.
## Old versions

This library was fully rewritten in version 2.0.0. Users of the 1.x versions will find quite a different exposed API, but the functionality is the same. The rewrite was made partially for fun, but also to support custom compression algorithms and more QoL functions.
Made by an actual human with no AI involved