# relayrl_types

Core data types and encoding/decoding utilities for the RelayRL framework.
## Features

- **`RelayRLAction`**: Serializable action container (obs, act, mask, reward, data, done) with UUID agent tracking
- **`RelayRLTrajectory`**: In-memory trajectory buffer with metadata and provenance tracking
- **Burn backend support**: Compatible with both `burn-ndarray` (CPU) and `burn-tch` (GPU) backends
- **Codec pipeline**: Compression, encryption, integrity verification, and chunking
- **Utilities**: Metadata tracking, quantization, and network transport optimizations
## Feature Flags

```toml
# Backend selection (choose one)
default = ["ndarray-backend"]
ndarray-backend = ["burn-ndarray"]   # CPU backend
tch-backend = ["burn-tch"]           # GPU backend

# Network transport utilities
compression = ["lz4_flex", "zstd"]   # LZ4/Zstd compression
encryption = ["chacha20poly1305"]    # ChaCha20-Poly1305 AEAD
integrity = ["blake3"]               # BLAKE3 checksums
metadata = ["bincode"]               # Metadata serialization
quantization = ["half"]              # FP16/BF16 quantization
zerocopy = ["bytes"]                 # Zerocopy data conversions

# Convenience bundles
network-basic = ["compression", "integrity", "zerocopy"]
network-secure = ["network-basic", "encryption"]
network-full = ["network-secure", "metadata", "quantization"]
```
## Quick Start

### Basic Usage

```rust
use relayrl_types::*;
use uuid::Uuid;
use burn::tensor::Tensor;
use burn_ndarray::{NdArray, NdArrayDevice}; // enable feature: ndarray-backend

// NOTE: the exact argument lists below are illustrative; check the crate
// docs for the precise signatures.

// Create a Burn tensor (NdArray backend) and store it as RelayRL TensorData
let device = NdArrayDevice::Cpu;

// 1) Burn -> RelayRL: convert any Burn tensor into TensorData with a target dtype/backend
let obs_burn = Tensor::<NdArray, 1>::from_floats([1.0, 2.0, 3.0], &device);
let obs_td: TensorData = ConversionTensor(obs_burn).try_into()?;

let act_burn = Tensor::<NdArray, 1>::from_floats([0.5], &device);
let act_td: TensorData = ConversionTensor(act_burn).try_into()?;

// 2) RelayRL -> Burn: build Burn tensors from stored TensorData with a chosen backend/device
// Specify the backend type parameter; the device is provided via DeviceType
let obs_tensor_any = obs_td.to_burn::<NdArray>(DeviceType::Cpu)?; // Box<dyn Any>
let act_tensor_any = act_td.to_burn::<NdArray>(DeviceType::Cpu)?;

// 3) Create an action with tensors (obs, act, mask, reward, data, done)
let action = RelayRLAction::new(Some(obs_td), Some(act_td), None, 1.0, None, false);

// 4) Work with a trajectory
let mut trajectory = RelayRLTrajectory::with_agent_id(Uuid::new_v4());
trajectory.add_action(action);
println!("actions: {}", trajectory.len());
println!("agent: {:?}", trajectory.get_agent_id());

// Minimal action without tensors
trajectory.add_action(RelayRLAction::minimal(0.0, false));
```
## Codec Functionality

### 1. Simple Serialization

```rust
use relayrl_types::*;

let action = RelayRLAction::minimal(1.0, false);

// Simple serialization (requires the "metadata" feature)
let bytes = action.to_bytes()?;
let decoded = RelayRLAction::from_bytes(&bytes)?;
assert_eq!(action, decoded);
```
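Under the hood this is just a bytes round trip. The std-only sketch below hand-rolls the same idea for a two-field, action-like struct; the real crate serializes with bincode, so the wire format here is purely illustrative:

```rust
// A hand-rolled to_bytes/from_bytes round trip for a tiny action-like
// struct (reward + done flag), using std alone.

#[derive(Debug, PartialEq)]
struct MiniAction {
    reward: f32,
    done: bool,
}

fn to_bytes(a: &MiniAction) -> Vec<u8> {
    let mut buf = Vec::with_capacity(5);
    buf.extend_from_slice(&a.reward.to_le_bytes()); // 4 bytes, little-endian
    buf.push(a.done as u8);                         // 1-byte flag
    buf
}

fn from_bytes(bytes: &[u8]) -> Option<MiniAction> {
    let reward = f32::from_le_bytes(bytes.get(0..4)?.try_into().ok()?);
    let done = *bytes.get(4)? != 0;
    Some(MiniAction { reward, done })
}

fn main() {
    let a = MiniAction { reward: 1.5, done: false };
    let bytes = to_bytes(&a);
    assert_eq!(bytes.len(), 5);
    assert_eq!(from_bytes(&bytes), Some(a)); // round trip preserves the value
}
```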
### 2. Compression

```rust
use relayrl_types::*;

let trajectory = RelayRLTrajectory::new();
// ... add actions ...

// Configure the codec with LZ4 compression (fast)
let config = CodecConfig {
    compression: Some(CompressionAlgorithm::Lz4), // field names illustrative
    ..Default::default()
};

// Encode with compression
let encoded = trajectory.encode(&config)?;
println!("encoded size: {} bytes", encoded.len());

// Decode
let decoded = RelayRLTrajectory::decode(&encoded, &config)?;
```
### 3. Compression + Encryption

```rust
use relayrl_types::*;

let action = RelayRLAction::minimal(1.0, false);

// Generate an encryption key
let key = generate_key();

// Configure the codec with compression AND encryption
let config = CodecConfig {
    compression: Some(CompressionAlgorithm::Zstd), // field names illustrative
    encryption_key: Some(key),
    ..Default::default()
};

// Encode (compressed + encrypted)
let encoded = action.encode(&config)?;

// Decode (must use the same key!)
let decoded = RelayRLAction::decode(&encoded, &config)?;
assert_eq!(action, decoded);
```
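Why must decode receive the same key? A deliberately insecure std-only toy (repeating-key XOR, not ChaCha20-Poly1305) shows the symmetry: applying the transform twice with the same key restores the plaintext, while a different key yields garbage that a real AEAD would reject outright.

```rust
// Toy symmetric cipher: XOR every byte with a repeating key.
// NOT secure -- it only illustrates the same-key requirement.

fn xor_with_key(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter()
        .zip(key.iter().cycle()) // repeat the key to cover the data
        .map(|(b, k)| b ^ k)
        .collect()
}

fn main() {
    let key = b"0123456789abcdef";
    let plaintext = b"trajectory bytes".to_vec();

    let ciphertext = xor_with_key(&plaintext, key);
    assert_ne!(ciphertext, plaintext);

    // Same key: round trip succeeds.
    assert_eq!(xor_with_key(&ciphertext, key), plaintext);

    // Wrong key: output does not match the plaintext.
    let wrong = xor_with_key(&ciphertext, b"ffffffffffffffff");
    assert_ne!(wrong, plaintext);
}
```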
### 4. Full Pipeline with Integrity Verification

```rust
use relayrl_types::*;

let mut trajectory = RelayRLTrajectory::new();
for i in 0..50 {
    trajectory.add_action(RelayRLAction::minimal(i as f32, i == 49));
}

// Full codec configuration
let key = generate_key();
let config = CodecConfig {
    compression: Some(CompressionAlgorithm::Zstd), // field names illustrative
    encryption_key: Some(key),
    verify_integrity: true,
    ..Default::default()
};

// Encode: Serialize -> Compress -> Encrypt -> Checksum
let encoded = trajectory.encode(&config)?;

// Integrity is automatically verified during decode
let decoded = RelayRLTrajectory::decode(&encoded, &config)?;
println!("actions: {}", decoded.len());
println!("encoded size: {} bytes", encoded.len());
```
### 5. Chunking for Large Data

```rust
use relayrl_types::*;

let mut trajectory = RelayRLTrajectory::new();
// ... add many actions ...

let config = CodecConfig::default();
let chunk_size = 1024 * 1024; // 1 MB chunks

// Encode and split into chunks for network transmission
let chunks = trajectory.encode_chunked(&config, chunk_size)?;
println!("sending {} chunks", chunks.len());

// ... transmit chunks over the network ...

// Reassemble on the receiving end
let decoded = RelayRLTrajectory::decode_chunked(&chunks, &config)?;
```
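The split/reassemble step itself can be sketched with std alone. The hypothetical helpers below mirror the shape of `encode_chunked`/`decode_chunked` but operate on raw bytes, whereas the real methods also run the payload through the codec pipeline:

```rust
// Split an encoded payload into fixed-size chunks and concatenate them
// back on the receiving side, using std only.

fn split_into_chunks(payload: &[u8], chunk_size: usize) -> Vec<Vec<u8>> {
    payload.chunks(chunk_size).map(|c| c.to_vec()).collect()
}

fn reassemble(chunks: &[Vec<u8>]) -> Vec<u8> {
    chunks.concat()
}

fn main() {
    let payload: Vec<u8> = (0..=255u8).cycle().take(2_500_000).collect(); // ~2.5 MB
    let chunk_size = 1024 * 1024; // 1 MB, as in the example above

    let chunks = split_into_chunks(&payload, chunk_size);
    assert_eq!(chunks.len(), 3); // 1 MB + 1 MB + remainder
    assert_eq!(chunks[2].len(), 2_500_000 - 2 * chunk_size);

    // Reassembly restores the original payload byte-for-byte.
    assert_eq!(reassemble(&chunks), payload);
}
```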
### 6. Metadata Tracking

```rust
use relayrl_types::*;
use uuid::Uuid;

// Create a trajectory with full metadata
let trajectory = RelayRLTrajectory::with_metadata(Uuid::new_v4());

// Check age (method name illustrative)
println!("age: {:?}", trajectory.age());

// Access metadata
if let Some(agent_id) = trajectory.get_agent_id() {
    println!("agent: {agent_id}");
}
```
## Codec Pipeline

The encoding pipeline processes data in this order:

```text
┌─────────────────┐
│ RelayRLAction   │
│ RelayRLTraject. │
└────────┬────────┘
         │
         ▼
   ┌──────────┐
   │ Bincode  │ Serialize to bytes
   └────┬─────┘
        │
        ▼
   ┌──────────┐
   │ Compress │ LZ4 or Zstd (optional)
   └────┬─────┘
        │
        ▼
   ┌──────────┐
   │ Encrypt  │ ChaCha20-Poly1305 (optional)
   └────┬─────┘
        │
        ▼
   ┌──────────┐
   │ Checksum │ BLAKE3 integrity (optional)
   └────┬─────┘
        │
        ▼
   ┌──────────┐
   │ Output   │ Final encoded bytes
   └──────────┘
```

Decoding reverses this pipeline with automatic verification.
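The ordering matters: each decode stage inverts its encode counterpart, applied in reverse. The std-only sketch below uses stand-in stages (XOR for encryption, a wrapping byte sum for the checksum; the real crate uses ChaCha20-Poly1305 and BLAKE3), so only the stage ordering is the point:

```rust
// Encode: transform payload, then append a checksum over the result.
fn encode(payload: &[u8], key: u8) -> Vec<u8> {
    let mut out: Vec<u8> = payload.to_vec();
    // "Encrypt" stand-in: XOR each byte with the key.
    for b in &mut out {
        *b ^= key;
    }
    // "Checksum" stand-in: wrapping byte sum of the ciphertext.
    let sum = out.iter().fold(0u8, |acc, b| acc.wrapping_add(*b));
    out.push(sum);
    out
}

// Decode: verify the checksum FIRST (last encode stage), then undo the XOR.
fn decode(encoded: &[u8], key: u8) -> Option<Vec<u8>> {
    let (body, tail) = encoded.split_at(encoded.len().checked_sub(1)?);
    let sum = body.iter().fold(0u8, |acc, b| acc.wrapping_add(*b));
    if tail[0] != sum {
        return None; // corruption detected before any further work
    }
    Some(body.iter().map(|b| b ^ key).collect())
}

fn main() {
    let payload = b"trajectory".to_vec();
    let encoded = encode(&payload, 0x5a);
    assert_eq!(decode(&encoded, 0x5a), Some(payload));

    // Flipping one bit fails the integrity check, as in the real pipeline.
    let mut tampered = encode(b"trajectory", 0x5a);
    tampered[0] ^= 1;
    assert_eq!(decode(&tampered, 0x5a), None);
}
```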
## Performance Tips

- **LZ4**: Best for real-time inference (3-4 GB/s decompression)
- **Zstd**: Best compression ratio for training data (5-10x reduction)
- **Chunking**: Use for trajectories over ~10 MB sent over the network
- **Integrity**: Minimal overhead (BLAKE3 hashes at multiple GB/s)
- **Encryption**: ~1 GB/s with ChaCha20-Poly1305
## Examples

See the `tests/` directory for more examples:

- Basic action/trajectory usage
- Compression benchmarks
- Encryption examples
- Chunked network transmission

## License

Apache-2.0