# netcdf-rust

Pure-Rust, read-only decoders for HDF5 and NetCDF. No C libraries, no build scripts, and no `unsafe` beyond `memmap2`.
## Crates
| Crate | Description |
|---|---|
| `hdf5-reader` | Low-level HDF5 decoder (superblock, object headers, B-trees, chunked I/O, filters) |
| `netcdf-reader` | NetCDF reader supporting CDF-1/2/5 classic and NetCDF-4 (HDF5-backed) formats |
## Usage
```rust
use netcdf_reader::{NcFile, NcOpenOptions, NcSliceInfo};
use ndarray::ArrayD;

// Note: type names, argument values, and some method signatures below are
// reconstructed; see the crate docs for the exact API.
let file = NcFile::open("data.nc")?;
println!("{:?}", file.format());
for var in file.variables() {
    println!("{}", var.name());
}

// Read typed data (works for both classic and NetCDF-4)
let temp: ArrayD<f32> = file.read_variable("temperature")?;

// Type-promoting read (any numeric type → f64)
let data = file.read_variable_as_f64("temperature")?;

// String variables (classic char arrays and NetCDF-4 NC_STRING)
let names = file.read_variable_as_strings("station_name")?;

// CF conventions: unpack packed integer data (scale_factor + add_offset)
let unpacked = file.read_variable_unpacked("temperature")?;

// CF conventions: mask fill values + unpack in one call
let clean = file.read_variable_unpacked_masked("temperature")?;

// Hyperslab: read a single time step from a 4D variable
let sel = NcSliceInfo {
    start: vec![0, 0, 0, 0],
    count: vec![1, 16, 180, 360],
};
let step: ArrayD<f32> = file.read_variable_slice("temperature", &sel)?;

// Lazy iteration over time steps
for slice in file.iter_variable_slices("temperature")? {
    // process one time step at a time
}

// In-memory open with custom NC4 cache/filter options
let bytes = std::fs::read("data.nc")?;
let file = NcFile::from_bytes_with_options(&bytes, NcOpenOptions::default())?;
```
Using `hdf5-reader` directly:

```rust
use hdf5_reader::{Hdf5File, SliceInfo};
use ndarray::ArrayD;

// Note: dataset paths and argument values are illustrative.
let file = Hdf5File::open("data.h5")?;
let ds = file.dataset("/group/temperature")?;
let data: ArrayD<f64> = ds.read_array()?;

// Hyperslab selection
let sel = SliceInfo {
    start: vec![0, 0],
    count: vec![10, 10],
};
let slice: ArrayD<f64> = ds.read_slice(&sel)?;

// String datasets
let labels = file.dataset("/labels")?.read_strings()?;
```
## Features

### HDF5
- Superblock v0-v3 and object header v1/v2 with checksum verification
- Compact, contiguous, and chunked layouts
- All chunk index types: v1/v2 B-tree, single-chunk, implicit, Fixed Array, Extensible Array
- Deflate, shuffle, Fletcher-32, and optional LZ4 filters
- Custom filters via `FilterRegistry`
- Fixed-length strings, HDF5 variable-length strings, and byte-vlen string datasets
- Dense-link resolution, soft-link resolution, committed datatypes, global heap strings, and object references
- Parallel chunk decoding, chunk caching, and object-header caching
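For context on the shuffle filter above: shuffled chunks store byte 0 of every element first, then byte 1, and so on, so the reader must re-interleave bytes before use. A minimal standalone sketch of that de-shuffle step (not the crate's implementation):

```rust
/// Undo the HDF5 shuffle filter. The shuffled buffer is laid out
/// byte-plane by byte-plane; de-shuffling makes each element's bytes
/// contiguous again (the layout a following deflate stage compressed).
fn unshuffle(shuffled: &[u8], elem_size: usize) -> Vec<u8> {
    let n = shuffled.len() / elem_size;
    let mut out = vec![0u8; shuffled.len()];
    for byte_idx in 0..elem_size {
        for elem_idx in 0..n {
            out[elem_idx * elem_size + byte_idx] = shuffled[byte_idx * n + elem_idx];
        }
    }
    out
}

fn main() {
    // Two little-endian u16 values 1 and 2, shuffled: low bytes first, then high bytes.
    let shuffled = [0x01, 0x02, 0x00, 0x00];
    assert_eq!(unshuffle(&shuffled, 2), vec![0x01, 0x00, 0x02, 0x00]);
    println!("ok");
}
```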
### NetCDF
- CDF-1, CDF-2, CDF-5, and NetCDF-4
- Automatic format detection
- Unified typed reads across formats
- Unified string reads for classic char arrays and NetCDF-4 string variables
- Type promotion to `f64`, unpacking, masking, and combined CF helpers
- Slice reads, lazy slice iteration, and parallel NC4 slice reads
- Cache and filter configuration through `NcOpenOptions`, including in-memory opens
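The automatic format detection listed above comes down to magic bytes: classic files start with `CDF` plus a version byte, while NetCDF-4 files carry the 8-byte HDF5 signature. A minimal sketch (hypothetical helper, not the crate's API; real HDF5 files may also place the signature at power-of-two offsets, which this check skips):

```rust
/// Classify a NetCDF file by its leading magic bytes (sketch only).
fn detect_format(bytes: &[u8]) -> &'static str {
    // HDF5 signature: \x89 H D F \r \n \x1a \n
    const HDF5_SIG: [u8; 8] = [0x89, b'H', b'D', b'F', b'\r', b'\n', 0x1a, b'\n'];
    if bytes.len() >= 8 && bytes[..8] == HDF5_SIG {
        return "NetCDF-4 (HDF5)";
    }
    // Classic formats: "CDF" followed by a version byte
    match bytes {
        [b'C', b'D', b'F', 0x01, ..] => "CDF-1",
        [b'C', b'D', b'F', 0x02, ..] => "CDF-2",
        [b'C', b'D', b'F', 0x05, ..] => "CDF-5",
        _ => "unknown",
    }
}

fn main() {
    assert_eq!(detect_format(b"CDF\x01rest"), "CDF-1");
    assert_eq!(detect_format(&[0x89, b'H', b'D', b'F', b'\r', b'\n', 0x1a, b'\n']), "NetCDF-4 (HDF5)");
    println!("ok");
}
```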
## Feature flags
```toml
[dependencies]
# CDF-1/2/5 + NetCDF-4 (default)
netcdf-reader = "0.1"
# CDF-1/2/5 only:
# netcdf-reader = { version = "0.1", default-features = false }
```
| Flag | Default | Description |
|---|---|---|
| `netcdf4` | yes | NetCDF-4 support via `hdf5-reader` |
| `rayon` | yes | Parallel chunk reading |
| `lz4` | yes | LZ4 filter support (`hdf5-reader`) |
| `cf` | no | CF Conventions helpers (axis identification, time decoding, CRS extraction, bounds) |
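The CF unpacking and masking helpers mentioned throughout implement the standard CF packing convention, `value = packed * scale_factor + add_offset`, with `_FillValue` samples masked out. A minimal element-wise sketch with hypothetical attribute values (not the crate's API):

```rust
/// CF unpack + mask for one packed i16 sample.
/// Returns None where the sample equals the variable's _FillValue.
fn unpack_masked(packed: i16, scale_factor: f64, add_offset: f64, fill_value: i16) -> Option<f64> {
    if packed == fill_value {
        None
    } else {
        Some(packed as f64 * scale_factor + add_offset)
    }
}

fn main() {
    // e.g. temperature packed as i16 with scale_factor = 0.5, add_offset = 10.0
    assert_eq!(unpack_masked(100, 0.5, 10.0, -32767), Some(60.0));
    assert_eq!(unpack_masked(-32767, 0.5, 10.0, -32767), None);
    println!("ok");
}
```

Applied across a whole array, this is what a combined unpack-and-mask read produces: floats for valid samples and a mask (e.g. `NaN`) where the fill value appeared.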
## Custom filters
Register filters before opening files:
```rust
use hdf5_reader::{FilterRegistry, Hdf5File};

// Note: the filter id (32008 = bitshuffle) and the decode callback are
// illustrative; see the FilterRegistry docs for the exact signature.
let mut registry = FilterRegistry::new();
registry.register(32008, my_bitshuffle_decode);

let file = Hdf5File::open_with_options("data.h5", registry)?;
```
## Testing

```sh
# Unit tests (no external dependencies)
cargo test

# Integration tests with generated fixtures
cargo test --workspace --all-features
```
For reference comparisons and current benchmark results against `georust/netcdf`, see `docs/benchmark-report.md`.
## Releasing
See `RELEASING.md` for the release checklist and the required publish order for `hdf5-reader` and `netcdf-reader`.
## Known limitations
- External HDF5 links are skipped (soft links are resolved)
- SZIP, N-Bit, and ScaleOffset filters are not built in (register via `FilterRegistry`)
- SOHM (shared object header message table) resolution returns a descriptive error
- Fractal heap huge/tiny objects are not yet supported (managed objects work)
- CF time decoding uses a Gregorian approximation for non-standard calendars (`noleap`, `360_day`)
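For context on the last point: decoding CF times such as `days since 1970-01-01` requires converting a day count to a calendar date. A proleptic-Gregorian sketch using Howard Hinnant's `civil_from_days` algorithm, which is exact for the `standard`/`gregorian` calendar but only an approximation for `noleap` or `360_day` data:

```rust
/// Convert days since 1970-01-01 to a proleptic Gregorian (year, month, day).
/// Exact for the standard calendar; noleap/360_day would need
/// calendar-specific arithmetic instead.
fn civil_from_days(days: i64) -> (i64, u32, u32) {
    let z = days + 719_468; // shift epoch to 0000-03-01
    let era = (if z >= 0 { z } else { z - 146_096 }) / 146_097;
    let doe = z - era * 146_097;                                       // day of era [0, 146096]
    let yoe = (doe - doe / 1460 + doe / 36_524 - doe / 146_096) / 365; // year of era [0, 399]
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);                 // day of year, March-based
    let mp = (5 * doy + 2) / 153;                                      // March-based month [0, 11]
    let day = (doy - (153 * mp + 2) / 5 + 1) as u32;
    let month = (if mp < 10 { mp + 3 } else { mp - 9 }) as u32;
    (era * 400 + yoe + if month <= 2 { 1 } else { 0 }, month, day)
}

fn main() {
    assert_eq!(civil_from_days(0), (1970, 1, 1));
    assert_eq!(civil_from_days(365), (1971, 1, 1)); // 1970 is not a leap year
    println!("ok");
}
```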
## License
MIT OR Apache-2.0