lumen-core
A lightweight, statically typed Tensor library for Rust, featuring a PyTorch-like API and built-in Automatic Differentiation.
Unlike many dynamically typed tensor libraries, lumen leverages Rust's type system with static dtypes (`Tensor<T>`). This enforces type safety at compile time and allows for optimized storage layouts.
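To illustrate the static-dtype idea, here is a minimal, self-contained sketch of the pattern (not lumen's actual internals): because the element type is a type parameter, mixing dtypes is rejected by the compiler rather than failing at runtime.

```rust
// Minimal sketch of the static-dtype pattern: the element type is a type
// parameter, so an operation between mismatched dtypes does not compile.
#[derive(Debug, PartialEq)]
struct Tensor<T> {
    data: Vec<T>,
    shape: Vec<usize>,
}

impl<T: Copy + std::ops::Add<Output = T>> Tensor<T> {
    fn add(&self, other: &Tensor<T>) -> Tensor<T> {
        assert_eq!(self.shape, other.shape, "shape mismatch");
        Tensor {
            data: self.data.iter().zip(&other.data).map(|(a, b)| *a + *b).collect(),
            shape: self.shape.clone(),
        }
    }
}

fn main() {
    let a = Tensor { data: vec![1.0f32, 2.0], shape: vec![2] };
    let b = Tensor { data: vec![3.0f32, 4.0], shape: vec![2] };
    let c = a.add(&b);
    assert_eq!(c.data, vec![4.0, 6.0]);

    // let d: Tensor<i64> = Tensor { data: vec![1, 2], shape: vec![2] };
    // a.add(&d); // compile error: expected `&Tensor<f32>`, found `&Tensor<i64>`
}
```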
Usage
1. Basic Creation & Shapes
Initialize tensors using constructors like `new`, `zeros`, `rand`, or `arange`.
```rust
use lumen_core::Tensor;

// Constructor arguments shown here are illustrative.
let z = Tensor::<f32>::zeros((2, 3)).unwrap();
let r = Tensor::<f32>::arange(0., 6.).unwrap();
```
2. Indexing and Slicing
The library supports powerful NumPy-style indexing and slicing, returning efficient views of the underlying data.
```rust
// A (2, 3, 4) tensor holding 0..24
// (shapes and the s! slice syntax below are illustrative)
let arr = Tensor::<f32>::arange(0., 24.).unwrap().reshape((2, 3, 4)).unwrap();

// Simple index: first sub-tensor along dim 0
let sub = arr.index(0).unwrap();

// Range slicing using s! macro
let sub = arr.index(s![0..1, 0..2]).unwrap();
assert_eq!(sub.shape(), &[1, 2, 4]);

// Complex slicing with strides and dimensions
let sub = arr.index(s![.., 0..3;2, 1]).unwrap();
assert_eq!(sub.shape(), &[2, 2]);

// Using unbounded ranges (..)
let sub = arr.index(s![.., .., 0..2]).unwrap();
```
3. Matrix Operations
Perform matrix multiplication and reshaping with ease.
```rust
// a: (2, 3), b: (3, 2)
let a = Tensor::<f32>::arange(0., 6.).unwrap().reshape((2, 3)).unwrap();
let b = Tensor::<f32>::arange(0., 6.).unwrap().reshape((3, 2)).unwrap();

// Matrix multiplication: (2, 3) x (3, 2) -> (2, 2)
let c = a.matmul(&b).unwrap();
```
4. Math & Activation Functions
A wide array of unary and floating-point operations are supported directly on `Tensor<T>`:
- Basic: `abs`, `sqrt`, `sqr`, `recip`, `exp`, `ln`
- Trig: `sin`, `cos`, `tanh`
- Neural Network Activations: `relu`, `gelu`, `gelu_erf`, `silu`, `erf`
- Rounding: `floor`, `ceil`, `round`
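The activation functions above follow their standard definitions. For reference, here is a plain-Rust sketch of `relu`, `silu`, and the tanh-based `gelu` approximation on scalars, independent of lumen's actual elementwise implementations:

```rust
// Standard scalar definitions (a tensor library applies these elementwise).
// The tanh-based GELU shown here is the common approximation; gelu_erf
// presumably uses the exact erf-based form instead.
fn relu(x: f64) -> f64 {
    x.max(0.0)
}

fn silu(x: f64) -> f64 {
    // x * sigmoid(x)
    x / (1.0 + (-x).exp())
}

fn gelu(x: f64) -> f64 {
    // 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    let c = (2.0 / std::f64::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x.powi(3))).tanh())
}

fn main() {
    assert_eq!(relu(-2.0), 0.0);
    assert_eq!(relu(2.0), 2.0);
    assert!((silu(1.0) - 0.731058578).abs() < 1e-6);
    assert!(gelu(0.0).abs() < 1e-12);
}
```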
5. Automatic Differentiation (Autograd)
The library includes a `Var` type for tracking gradients. Here is a simple backpropagation pass through a single-layer perceptron:
```rust
// Define inputs and weights as Variables (Var) to track gradients
// (the values and the gradient-lookup API below are illustrative)
let w = Var::new(&[[3f32, 1.]]).unwrap();   // Shape (1, 2)
let x = Var::new(&[[2f32], [4.]]).unwrap(); // Shape (2, 1)
let b = Var::new(&[[5f32]]).unwrap();       // Shape (1, 1)

// Forward pass: y = w x + b
let y = w.matmul(&x).unwrap().add(&b).unwrap();

// Backward pass: compute gradients with respect to each Var
let grads = y.backward().unwrap();

// Verify gradients
// dy/dw = x^T
assert!(grads.get(&w).unwrap().eq(&x.t().unwrap()));
// dy/dx = w^T
assert!(grads.get(&x).unwrap().eq(&w.t().unwrap()));
// dy/db = 1
assert!(grads.get(&b).unwrap().eq(&Tensor::<f32>::ones((1, 1)).unwrap()));
```
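As a sanity check on the gradient identities above, the same result can be confirmed by finite differences in plain Rust, independent of any autograd machinery. For y = w·x + b, perturbing each weight shows dy/dw_i = x_i:

```rust
// y = w.x + b with w: (1, 2), x: (2, 1), b scalar.
// Finite differences confirm dy/dw_i = x_i without any autograd.
fn forward(w: &[f64; 2], x: &[f64; 2], b: f64) -> f64 {
    w[0] * x[0] + w[1] * x[1] + b
}

fn main() {
    let w = [3.0, 1.0];
    let x = [2.0, 4.0];
    let b = 5.0;
    let eps = 1e-6;

    for i in 0..2 {
        let mut wp = w;
        wp[i] += eps;
        // (f(w + eps*e_i) - f(w)) / eps approximates dy/dw_i
        let grad = (forward(&wp, &x, b) - forward(&w, &x, b)) / eps;
        assert!((grad - x[i]).abs() < 1e-4); // matches dy/dw = x^T
    }
}
```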
LICENSE
MIT