HTTP, NBD, and S3 gateway server implementations for exposing Hexz snapshots.
This module provides network-facing interfaces for accessing compressed Hexz snapshot data over standard protocols. It supports three distinct serving modes:
- HTTP Range Server (`serve_http`): Exposes disk and memory streams via HTTP/1.1 range requests with DoS protection and partial content support.
- NBD (Network Block Device) Server (`serve_nbd`): Allows mounting snapshots as Linux block devices using the standard NBD protocol.
- S3 Gateway (`serve_s3_gateway`): Planned S3-compatible API for cloud integration (currently unimplemented).
§Architecture Overview
All servers expose the same underlying File API, which provides:
- Block-level decompression with LRU caching
- Dual-stream access (disk and memory snapshots)
- Random access with minimal I/O overhead
- Thread-safe concurrent reads via `Arc<File>` (see the sketch below)
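
As a quick illustration of the shared-`File` model, the sketch below opens a snapshot and hands clones of an `Arc<File>` to several threads. The `read_at` call in the comment is a hypothetical name used only for illustration; consult the `File` API for the actual read methods.

```rust
use std::sync::Arc;

use hexz_core::File;
use hexz_core::store::local::FileBackend;
use hexz_core::algo::compression::lz4::Lz4Compressor;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let backend = Arc::new(FileBackend::new("snapshot.hxz".as_ref())?);
    let snap = Arc::new(File::new(backend, Box::new(Lz4Compressor::new()), None)?);

    // Each reader holds its own Arc clone; reads share the block cache
    // and never require exclusive access to the File.
    let readers: Vec<_> = (0..4)
        .map(|i| {
            let snap = Arc::clone(&snap);
            std::thread::spawn(move || {
                // Hypothetical read of one 4 KiB block at a per-thread offset:
                // let block = snap.read_at(i as u64 * 4096, 4096)?;
                let _ = (&snap, i);
            })
        })
        .collect();
    for r in readers {
        r.join().expect("reader thread panicked");
    }
    Ok(())
}
```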
The servers differ in protocol semantics and use cases:
| Protocol | Use Case | Access Pattern | Authentication |
|---|---|---|---|
| HTTP | Browser/API access | Range requests | None (planned) |
| NBD | Linux block device mount | Block-level reads | None |
| S3 | Cloud integration | Object API | AWS SigV4 (planned) |
§Design Decisions
§Why HTTP Range Requests?
HTTP range requests (RFC 7233) provide a standardized way to access large files in chunks without loading the entire file into memory. This aligns perfectly with Hexz’s block-indexed architecture, allowing clients to fetch only the data they need. The implementation:
- Returns HTTP 206 (Partial Content) for range requests
- Returns HTTP 416 (Range Not Satisfiable) for invalid ranges
- Clamps requests to `MAX_CHUNK_SIZE` (32 MiB) to prevent memory exhaustion (see the parsing sketch below)
- Supports both bounded (`bytes=0-1023`) and unbounded (`bytes=1024-`) ranges
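
The following is a minimal sketch of that parse-and-clamp logic. It is not the crate's actual `parse_range` implementation (whose signature and error handling may differ); it only illustrates the documented rules: bounded and unbounded ranges, 416 for unsatisfiable starts, and clamping to `MAX_CHUNK_SIZE`.

```rust
const MAX_CHUNK_SIZE: u64 = 32 * 1024 * 1024; // 32 MiB

/// Parses "bytes=START-" or "bytes=START-END" into an absolute
/// (offset, length) pair, clamped to MAX_CHUNK_SIZE.
fn parse_range_sketch(header: &str, file_len: u64) -> Option<(u64, u64)> {
    let spec = header.strip_prefix("bytes=")?;
    let (start_s, end_s) = spec.split_once('-')?;
    let start: u64 = start_s.parse().ok()?;
    if start >= file_len {
        return None; // caller responds with 416 Range Not Satisfiable
    }
    // Unbounded ("bytes=1024-") reads to EOF; bounded ranges are inclusive.
    let end = if end_s.is_empty() {
        file_len - 1
    } else {
        end_s.parse::<u64>().ok()?.min(file_len - 1)
    };
    if end < start {
        return None;
    }
    // Clamp the response size to prevent memory exhaustion.
    let len = (end - start + 1).min(MAX_CHUNK_SIZE);
    Some((start, len)) // caller responds with 206 Partial Content
}
```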
§Why NBD Protocol?
The Network Block Device protocol allows mounting remote storage as a local block device on Linux systems. This enables:
- Transparent filesystem access (mount snapshot, browse files)
- Use of standard Linux tools (`dd`, `fsck`, `mount`)
- Zero application changes (existing software works unmodified)
Trade-offs:
- Pro: Native OS integration, no special client software required
- Pro: Kernel handles caching and buffering
- Con: No built-in encryption or authentication
- Con: TCP-based, higher latency than local disk
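
For concreteness, here is an illustrative handler for a single NBD read command, using the "simple reply" wire format from the NBD protocol specification. This is a sketch of the protocol, not this crate's `nbd` module internals; `fetch` is a stand-in for the snapshot's block-read path.

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;

const NBD_REQUEST_MAGIC: u32 = 0x2560_9513;
const NBD_SIMPLE_REPLY_MAGIC: u32 = 0x6744_6698;

/// Serves one NBD_CMD_READ request already known to be next on the stream.
async fn handle_read(
    stream: &mut TcpStream,
    fetch: impl Fn(u64, u32) -> Vec<u8>,
) -> std::io::Result<()> {
    // Request layout: magic(4) + flags(2) + type(2) + handle(8) + offset(8) + length(4)
    let mut req = [0u8; 28];
    stream.read_exact(&mut req).await?;
    if u32::from_be_bytes(req[0..4].try_into().unwrap()) != NBD_REQUEST_MAGIC {
        return Err(std::io::Error::new(std::io::ErrorKind::InvalidData, "bad magic"));
    }
    let handle = &req[8..16]; // opaque; echoed back so the client can match replies
    let offset = u64::from_be_bytes(req[16..24].try_into().unwrap());
    let length = u32::from_be_bytes(req[24..28].try_into().unwrap());

    // Simple reply: magic(4) + error(4, 0 = success) + handle(8) + payload
    stream.write_all(&NBD_SIMPLE_REPLY_MAGIC.to_be_bytes()).await?;
    stream.write_all(&0u32.to_be_bytes()).await?;
    stream.write_all(handle).await?;
    stream.write_all(&fetch(offset, length)).await?;
    Ok(())
}
```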
§Security Architecture
§Current Security Posture (localhost-only)
All servers bind to 127.0.0.1 (loopback) by default, preventing network exposure.
This is appropriate for:
- Local development and testing
- Forensics workstations accessing local snapshots
- Scenarios where network access is provided via SSH tunnels or VPNs
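
Assuming `serve_http_with_listener` (listed under Functions below) accepts a Tokio `TcpListener`, an explicit loopback bind looks like this sketch; remote users then reach the server through an SSH tunnel rather than a network-exposed socket:

```rust
use std::sync::Arc;

use hexz_core::File;
use hexz_core::store::local::FileBackend;
use hexz_core::algo::compression::lz4::Lz4Compressor;
use hexz_server::serve_http_with_listener;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let backend = Arc::new(FileBackend::new("snapshot.hxz".as_ref())?);
    let snap = File::new(backend, Box::new(Lz4Compressor::new()), None)?;

    // Bind explicitly to loopback: the snapshot is never reachable from other
    // hosts. For remote access, tunnel: `ssh -L 8080:127.0.0.1:8080 host`.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    serve_http_with_listener(snap, listener).await?;
    Ok(())
}
```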
§Attack Surface
The current implementation has a minimal attack surface:
- DoS via large reads: Mitigated by `MAX_CHUNK_SIZE` clamping (32 MiB)
- Range header parsing: Simplified parser with strict validation
- Connection exhaustion: Limited by OS socket limits, no artificial cap
- Path traversal: N/A (no filesystem access, only fixed `/disk` and `/memory` routes)
§Future Security Enhancements (Planned)
- TLS/HTTPS support for encrypted transport
- Token-based authentication (Bearer tokens)
- Rate limiting per IP address
- Configurable bind addresses (`0.0.0.0` for network access)
- Request logging and audit trails
§Performance Characteristics
§HTTP Server
- Throughput: ~500-2000 MB/s (limited by decompression, not network)
- Latency: ~1-5 ms per request (includes decompression)
- Concurrency: Handles 1000+ concurrent connections (Tokio async runtime)
- Memory: ~100 KB per connection + block cache overhead
§NBD Server
- Throughput: ~500-1000 MB/s (similar to HTTP, plus NBD protocol overhead)
- Latency: ~2-10 ms per block read (includes TCP RTT + decompression)
- Concurrency: One Tokio task per client connection
§Bottlenecks
For local (localhost) connections, the primary bottlenecks are:
- Decompression CPU time (80% of latency for LZ4, more for ZSTD)
- Block cache misses (requires backend I/O)
- Memory allocation for large reads (mitigated by clamping)
Network bandwidth is rarely a bottleneck for localhost connections.
§Examples
§Starting an HTTP Server
```rust
use std::sync::Arc;

use hexz_core::File;
use hexz_core::store::local::FileBackend;
use hexz_core::algo::compression::lz4::Lz4Compressor;
use hexz_server::serve_http;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let backend = Arc::new(FileBackend::new("snapshot.hxz".as_ref())?);
    let compressor = Box::new(Lz4Compressor::new());
    let snap = File::new(backend, compressor, None)?;

    // Start HTTP server on port 8080
    serve_http(snap, 8080).await?;
    Ok(())
}
```
§Starting an NBD Server
```rust
use std::sync::Arc;

use hexz_core::File;
use hexz_core::store::local::FileBackend;
use hexz_core::algo::compression::lz4::Lz4Compressor;
use hexz_server::serve_nbd;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let backend = Arc::new(FileBackend::new("snapshot.hxz".as_ref())?);
    let compressor = Box::new(Lz4Compressor::new());
    let snap = File::new(backend, compressor, None)?;

    // Start NBD server on the protocol's standard port, 10809
    serve_nbd(snap, 10809).await?;
    Ok(())
}
```
§Client Usage Examples
§HTTP Client (curl)
```sh
# Fetch the first 4 KiB of the disk stream
curl -H "Range: bytes=0-4095" http://localhost:8080/disk -o chunk.bin

# Fetch 1 MiB starting at offset 1 MiB
curl -H "Range: bytes=1048576-2097151" http://localhost:8080/memory -o mem_chunk.bin

# Fetch from offset to EOF (server will clamp to MAX_CHUNK_SIZE)
curl -H "Range: bytes=1048576-" http://localhost:8080/disk
```
§NBD Client (Linux)
```sh
# Connect NBD client to server
sudo nbd-client localhost 10809 /dev/nbd0

# Mount the block device (read-only)
sudo mount -o ro /dev/nbd0 /mnt/snapshot

# Access files normally
ls -la /mnt/snapshot
cat /mnt/snapshot/important.log

# Disconnect when done
sudo umount /mnt/snapshot
sudo nbd-client -d /dev/nbd0
```
§Protocol References
- HTTP Range Requests: RFC 7233
- NBD Protocol: NBD Protocol Specification
- S3 API: AWS S3 API Reference (future work)
§Modules
- `nbd`: Network Block Device (NBD) protocol server implementation.
§Functions
- `parse_range`: Parses an HTTP `Range` header into absolute byte offsets.
- `serve_http`: Exposes a `File` over HTTP with range request support.
- `serve_http_with_listener`: Like `serve_http`, but accepts a pre-bound `TcpListener`.
- `serve_nbd`: Exposes a `File` over NBD (Network Block Device) protocol.
- `serve_s3_gateway` (deprecated): Exposes a `File` as an S3-compatible object storage gateway.