# rumus

Core crate for the RUMUS native-Rust deep learning framework.
## What's Inside
| Module | Description |
|---|---|
| `tensor` | `StorageHandle` (CPU `Vec` or GPU `wgpu::Buffer` via `parking_lot::RwLock<StorageData>`), `Layout`, `AutogradState`, and all tensor operations (`add`, `mul`, `matmul`, `relu`, `dropout`, `im2col`, `flatten`, `max_pool2d`, `cross_entropy_loss`, etc.) |
| `autograd` | Thread-local `Tape`, `GradientStore`, Kahn's-algorithm backward engine, `no_grad()` RAII guard, `VersionSnapshot` with `Weak` references |
| `backend` | `Backend` trait (CPU) + feature-gated `gpu` module: `GpuContext` singleton, `BufferPool`, `PipelineCache` (25 WGSL compute pipelines), and all GPU dispatch functions |
| `nn` | `Parameter`, `Module` trait, `#[derive(Module)]` (re-exported from `rumus-macros`), `Linear`, `Conv2d`, `MaxPool2d`, `Flatten`, `Dropout`, `mse_loss`, `cross_entropy_loss`, safetensors IO |
| `optim` | `Optimizer` trait, `SGD`, `Adam`, `AdamW` — all with CPU + GPU dual-path dispatch |
| `train` | `Trainer<O: Optimizer>` — closure-based `train_step()` orchestrator |
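The `autograd` module's backward engine visits the recorded graph with Kahn's algorithm, so each node's gradient is computed only after all nodes that depend on it. A minimal standalone sketch of that topological ordering (the function name and graph encoding here are illustrative, not the crate's API):

```rust
use std::collections::VecDeque;

// Kahn's algorithm over a tiny op graph: `edges` holds (from, to)
// pairs, and `topo_order` returns node ids in dependency order.
fn topo_order(n: usize, edges: &[(usize, usize)]) -> Vec<usize> {
    let mut indegree = vec![0usize; n];
    let mut adj = vec![Vec::new(); n];
    for &(from, to) in edges {
        adj[from].push(to);
        indegree[to] += 1;
    }
    // Start from nodes with no incoming edges.
    let mut queue: VecDeque<usize> =
        (0..n).filter(|&i| indegree[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(node) = queue.pop_front() {
        order.push(node);
        for &next in &adj[node] {
            indegree[next] -= 1;
            if indegree[next] == 0 {
                queue.push_back(next);
            }
        }
    }
    order
}

fn main() {
    // A diamond graph, as produced by y = f(a) + g(a):
    // 0 -> 1 -> 3 and 0 -> 2 -> 3.
    let order = topo_order(4, &[(0, 1), (0, 2), (1, 3), (2, 3)]);
    println!("{:?}", order); // prints [0, 1, 2, 3]
}
```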
## Features

- `default` — CPU-only build. No external GPU dependencies.
- `gpu` — Enables the WGPU compute backend (`wgpu` + `pollster`). All tensor ops auto-dispatch to WGSL shaders when data is GPU-resident.
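A downstream crate opts into the GPU path through its `Cargo.toml` (the version number below is a placeholder, not a published release):

```toml
[dependencies]
# Enables the feature-gated wgpu/pollster backend.
rumus = { version = "0.1", features = ["gpu"] }
```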
## Quick Start

Import paths, constructor signatures, and the `MLP` model below are illustrative; `MLP` stands in for any user-defined type implementing `Module`.

```rust
use rumus::tensor::Tensor;
use rumus::train::Trainer;

// Build a model and wrap it in a Trainer together with an optimizer.
let model = MLP::new();
let mut trainer = Trainer::new(model, optimizer);

// One training step:
let loss = trainer.train_step(/* closure computing the loss */).unwrap();
```
## Dependencies

- `rumus-macros` — `#[derive(Module)]` proc macro
- `safetensors` — model persistence
- `bytemuck` — safe f32/u8 casts
- `parking_lot` — mapped `RwLock` guards for `StorageData`
- `wgpu` + `pollster` — optional, behind the `gpu` feature
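`bytemuck` performs its f32/u8 casts zero-copy over whole slices; a std-only sketch of the same byte-level reinterpretation (copy-based, purely for illustration of the layout):

```rust
// Round-trip f32 values through little-endian bytes, the layout a
// safe cast exposes. bytemuck does this without copying.
fn f32s_to_bytes(values: &[f32]) -> Vec<u8> {
    values.iter().flat_map(|v| v.to_le_bytes()).collect()
}

fn bytes_to_f32s(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    let xs = vec![1.0f32, -2.5, 0.0];
    let bytes = f32s_to_bytes(&xs);
    assert_eq!(bytes.len(), 12); // 3 floats * 4 bytes each
    assert_eq!(bytes_to_f32s(&bytes), xs);
    println!("round-trip ok");
}
```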
## License
MIT