# zoned

Pure Rust library for zoned block device management (SMR/ZNS).
Modern storage devices increasingly use zoned storage — a model where the drive is divided into sequential-write zones that must be written from start to finish and explicitly reset before rewriting. This includes:
- Shingled Magnetic Recording (SMR) hard drives — high-capacity HDDs that overlap tracks to increase density, requiring sequential writes within zones
- Zoned Namespace (ZNS) NVMe SSDs — next-generation SSDs that expose the flash translation layer to the host, reducing write amplification and over-provisioning
The `zoned` crate provides a safe, idiomatic Rust interface for working with
these devices: reporting zone state, managing zone lifecycles, and performing
I/O — all through the kernel's standard block device ioctls.
## Features
- Zone reporting with lazy iteration and client-side filtering
- Zone management — open, close, finish, reset
- Data I/O — positional read/write and vectored (scatter-gather) I/O
- Cursor-based I/O — `ZonedDeviceCursor` implements `std::io::Read`/`Write`/`Seek`; `std::io::Write` on `ZoneHandle` enables `BufWriter` and standard I/O adapters
- Exclusive zone handles — compile-time enforcement of single-owner writes via `ZoneHandle`
- Thread-safe zone allocation — `ZoneAllocator` for concurrent multi-zone workloads
- Device validation — block device, mount, partition, and zoned-model checks
- Builder pattern — composable device opening with opt-in validation
- Newtype safety — `Sector` and `ZoneIndex` prevent unit confusion at compile time
- sysfs integration — zone model, block sizes, scheduler, vendor/model, capacity
- Async support — optional `tokio` feature with `AsyncZonedDevice` and `AsyncZoneHandle`
## Platform Support
- Linux: Full support via kernel ioctls and sysfs (kernel 5.9+)
- FreeBSD: Support via the `DIOCZONECMD` ioctl
## Architecture
```mermaid
graph TD
    subgraph "User Code"
        APP[Application]
    end
    subgraph "zoned crate"
        ZD[ZonedDevice]
        AZD[AsyncZonedDevice<br><i>tokio feature</i>]
        ZH[ZoneHandle<br><i>Send, !Clone</i>]
        AZH[AsyncZoneHandle<br><i>tokio feature</i>]
        ZA[ZoneAllocator<br><i>Send + Sync</i>]
        DB[DeviceBuilder]
        ZI[ZoneIterator]
        ZC[ZonedDeviceCursor<br><i>Read + Write + Seek</i>]
        ZF[ZoneFilter]
        VAL[validate]
        SYS[sysfs]
    end
    subgraph "Types"
        S[Sector]
        ZX[ZoneIndex]
        Z[Zone]
        DI[DeviceInfo]
        DP[DeviceProperties]
    end
    subgraph "Platform Layer"
        LNX[Linux<br>ioctl + sysfs]
        BSD[FreeBSD<br>DIOCZONECMD]
    end
    APP --> DB
    APP --> AZD
    APP --> ZA
    APP --> SYS
    APP --> VAL
    DB --> ZD
    AZD -->|wraps Arc| ZD
    ZD --> ZI
    ZD --> ZC
    ZD --> ZH
    ZA -->|allocates| ZH
    AZH -->|wraps| ZH
    ZD -->|reports| Z
    ZD -->|queries| DI
    ZF -->|filters| Z
    SYS -->|queries| DP
    Z --- S
    Z --- ZX
    ZD --> LNX
    ZD --> BSD
```
## Zone Lifecycle
Zones transition through states via host commands and device writes:
```mermaid
stateDiagram-v2
    [*] --> Empty
    Empty --> ExplicitlyOpen : open_zones()
    Empty --> ImplicitlyOpen : write data
    Empty --> Full : finish_zones()
    ExplicitlyOpen --> Closed : close_zones()
    ExplicitlyOpen --> Full : finish_zones()
    ExplicitlyOpen --> Full : zone capacity reached
    ImplicitlyOpen --> Closed : close_zones()
    ImplicitlyOpen --> ExplicitlyOpen : open_zones()
    ImplicitlyOpen --> Full : finish_zones()
    ImplicitlyOpen --> Full : zone capacity reached
    Closed --> ExplicitlyOpen : open_zones()
    Closed --> Full : finish_zones()
    Full --> Empty : reset_zones()
    ExplicitlyOpen --> Empty : reset_zones()
    ImplicitlyOpen --> Empty : reset_zones()
    Closed --> Empty : reset_zones()
    ReadOnly --> [*]
    Offline --> [*]
```
Note: Conventional zones remain in `NotWritePointer` and do not participate in this state machine — they allow random writes and have no write pointer.
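Driving these transitions from the host might look like the following sketch. The `open_zones`, `finish_zones`, and `reset_zones` names come from the feature list above; the builder call, the `(zone, count)` signatures, and the device path are assumptions for illustration:

```rust
use zoned::{DeviceBuilder, ZoneIndex};

fn main() -> std::io::Result<()> {
    // Assumed builder API; "/dev/nullb0" is a placeholder device.
    let dev = DeviceBuilder::new("/dev/nullb0").open()?;
    let zone = ZoneIndex(0); // assumed tuple-struct constructor

    dev.open_zones(zone, 1)?;   // Empty -> ExplicitlyOpen
    dev.finish_zones(zone, 1)?; // ExplicitlyOpen -> Full
    dev.reset_zones(zone, 1)?;  // Full -> Empty, ready for rewriting
    Ok(())
}
```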
## Quick Start
Add `zoned` to your `Cargo.toml`:

```toml
[dependencies]
zoned = "0.5"
```
### Opening a Device and Reporting Zones
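A minimal sketch of opening a device and listing its zones. `DeviceBuilder` and `ZonedDevice` appear in the architecture diagram; the `open`, `validate`, and `report_zones` method names and the `Zone` field access are assumptions:

```rust
use zoned::{DeviceBuilder, ZonedDevice};

fn main() -> std::io::Result<()> {
    // Builder pattern with opt-in validation (per the feature list);
    // "/dev/nvme0n1" is a placeholder path.
    let dev: ZonedDevice = DeviceBuilder::new("/dev/nvme0n1")
        .validate()
        .open()?;

    // Report all zones on the device.
    for zone in dev.report_zones()? {
        let zone = zone?;
        println!("{zone:?}");
    }
    Ok(())
}
```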
### Lazy Zone Iteration
For large devices, iterate zones in batches instead of loading them all at once:
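A sketch of batched iteration with client-side filtering. `ZoneIterator` is named in the architecture diagram; `iter_zones`, `is_empty`, and the builder calls are assumed names:

```rust
use zoned::DeviceBuilder;

fn main() -> std::io::Result<()> {
    let dev = DeviceBuilder::new("/dev/nvme0n1").open()?;

    // The iterator fetches zones in batches as it advances, so the
    // full zone list never has to be resident in memory.
    for zone in dev.iter_zones()? {
        let zone = zone?;
        // Client-side filtering: keep only empty zones.
        if zone.is_empty() {
            println!("{zone:?}");
        }
    }
    Ok(())
}
```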
### Sequential Writes with `ZoneHandle`
`ZoneHandle` provides exclusive access to a single zone with a locally tracked
write pointer — no device queries are needed to know your position:
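A sketch of buffered sequential writes through a handle. That `ZoneHandle` implements `std::io::Write` comes from the feature list; the `zone_handle` method and `ZoneIndex` constructor are assumptions:

```rust
use std::io::{BufWriter, Write};
use zoned::{DeviceBuilder, ZoneIndex};

fn main() -> std::io::Result<()> {
    let dev = DeviceBuilder::new("/dev/nvme0n1").open()?;

    // Assumed accessor: acquire the exclusive handle for zone 0.
    // The handle tracks the write pointer locally, so each write
    // needs no round trip to the device to learn its position.
    let mut handle = dev.zone_handle(ZoneIndex(0))?;

    // Because ZoneHandle implements std::io::Write, BufWriter
    // and the other standard I/O adapters compose with it.
    let mut writer = BufWriter::new(&mut handle);
    writer.write_all(b"sequential payload")?;
    writer.flush()?;
    Ok(())
}
```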
### Concurrent Writes with `ZoneAllocator`
`ZoneAllocator` hands out `ZoneHandle`s that are `Send` but not `Clone`,
so each zone has exactly one writer — enforced at compile time:
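A sketch of a multi-threaded writer pool. The `Send + Sync` bound on `ZoneAllocator` is from the architecture diagram; `ZoneAllocator::new` and `allocate` are assumed names:

```rust
use std::io::Write;
use std::sync::Arc;
use zoned::{DeviceBuilder, ZoneAllocator};

fn main() -> std::io::Result<()> {
    let dev = Arc::new(DeviceBuilder::new("/dev/nvme0n1").open()?);

    // The allocator is Send + Sync, so it can be shared across threads.
    let alloc = Arc::new(ZoneAllocator::new(dev)?);

    let workers: Vec<_> = (0..4)
        .map(|i| {
            let alloc = Arc::clone(&alloc);
            std::thread::spawn(move || -> std::io::Result<()> {
                // Each handle is Send but not Clone: the type system
                // guarantees exactly one writer per allocated zone.
                let mut zone = alloc.allocate()?;
                zone.write_all(format!("worker {i}").as_bytes())?;
                Ok(())
            })
        })
        .collect();

    for w in workers {
        w.join().unwrap()?;
    }
    Ok(())
}
```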
### Sysfs Queries (No Device Open Required)
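A sketch of querying device properties without opening the device. The `sysfs` module and `DeviceProperties` type are from the architecture diagram; the `properties` function and field names are assumptions:

```rust
use zoned::sysfs;

fn main() -> std::io::Result<()> {
    // Reads /sys/block/<name>/... attributes; no open file descriptor,
    // and therefore no root privileges, should be required.
    let props = sysfs::properties("nvme0n1")?;
    println!("zone model: {:?}", props.zoned_model);
    println!("logical block size: {:?}", props.logical_block_size);
    println!("scheduler: {:?}", props.scheduler);
    Ok(())
}
```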
### Device Validation
Validate before opening to get clear error messages:
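A sketch of standalone validation. The `validate` module is from the architecture diagram; the `check_device` helper name is an assumption standing in for the block-device, mount, partition, and zoned-model checks listed above:

```rust
use zoned::validate;

fn main() {
    // Assumed helper: runs the crate's validation checks and returns
    // a descriptive error naming the first check that failed.
    match validate::check_device("/dev/nvme0n1") {
        Ok(()) => println!("device passed all checks"),
        Err(e) => eprintln!("validation failed: {e}"),
    }
}
```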
## Async Support
Enable the `tokio` feature for async wrappers that use `spawn_blocking`
internally — the same approach `tokio::fs` uses:
```toml
[dependencies]
zoned = { version = "0.5", features = ["tokio"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```
### Async Zone Reporting and I/O
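A sketch of async reporting. `AsyncZonedDevice` is from the feature list; the `open` and `report_zones` method names are assumptions:

```rust
use zoned::AsyncZonedDevice;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // The async wrapper runs the blocking ioctls on tokio's
    // blocking thread pool via spawn_blocking.
    let dev = AsyncZonedDevice::open("/dev/nvme0n1").await?;

    for zone in dev.report_zones().await? {
        println!("{zone:?}");
    }
    Ok(())
}
```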
### Converting Between Sync and Async
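A sketch of sharing one device between sync and async code. The architecture diagram shows `AsyncZonedDevice` wrapping an `Arc<ZonedDevice>`; the `ZonedDevice::open` constructor and the `From<Arc<ZonedDevice>>` conversion are assumptions:

```rust
use std::sync::Arc;
use zoned::{AsyncZonedDevice, ZonedDevice};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Open once, then share: the async wrapper holds an Arc to the
    // same underlying ZonedDevice (per the architecture diagram).
    let sync_dev = Arc::new(ZonedDevice::open("/dev/nvme0n1")?);
    let async_dev = AsyncZonedDevice::from(Arc::clone(&sync_dev));

    // Async path:
    let _zones = async_dev.report_zones().await?;
    // Sync path, using the same shared device:
    let _zones = sync_dev.report_zones()?;
    Ok(())
}
```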
## CLI Tool
The `zcli` example exercises the full library API and serves as a practical
tool for inspecting and managing zoned devices. Its subcommands cover:

- Device info (read-only)
- Listing empty sequential zones
- Zone state transitions
- Read/write with a hex dump
- Validation checks
- A concurrent write benchmark
Run `zcli --help` or `zcli <subcommand> --help` for full usage.
## Testing
- Unit tests (no hardware required): `cargo test`
- Integration tests against an emulated zoned device (requires root and the `null_blk` kernel module)
- Read-only tests against real hardware (requires `/dev/sda` to be a zoned device)
## Requirements
- Rust 1.88.0+ (edition 2024)
- Linux kernel 5.9+ (for full sysfs attribute support)
- Root or `disk` group membership for device access
## License
MIT OR Apache-2.0