# briny_ai

A minimal, dependency-free deep learning core for scalar and tensor autograd, written in Rust.
This library provides low-level primitives for defining and training differentiable models on top of multi-dimensional arrays (`Tensor<T>`), supporting:
- Elementwise operations with autograd
- Matrix multiplication with gradient tracking
- Loss functions (MSE)
- Stochastic gradient descent (SGD)
- JSON and binary-based tensor serialization
- Compile-time tensor creation macros
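The forward + backward pattern behind these primitives can be sketched in plain Rust. This is a standalone illustration of autograd via closures, not briny_ai's actual API: the forward pass returns the output value together with a backward closure that maps an upstream gradient to input gradients.

```rust
// Standalone sketch (not briny_ai's API): multiply two scalars, and return
// a backward closure that turns an upstream gradient into input gradients.
fn mul(x: f64, y: f64) -> (f64, impl Fn(f64) -> (f64, f64)) {
    // d(x*y)/dx = y and d(x*y)/dy = x, each scaled by the upstream gradient.
    (x * y, move |upstream| (upstream * y, upstream * x))
}

fn main() {
    let (z, backward) = mul(3.0, 4.0);
    let (dx, dy) = backward(1.0); // seed the output gradient with 1.0
    println!("z = {z}, dz/dx = {dx}, dz/dy = {dy}");
}
```

Composing such closures in reverse order of the forward pass is what a manual backward pass amounts to.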
## Features

- Pure Rust, no dependencies
- Compact and fast `.bpat` binary model format
- Forward + backward computation via closures
- Extensible tensor structure with runtime shape checking
## Usage

To use briny_ai, add the following to your `Cargo.toml`:

```toml
[dependencies]
briny_ai = "0.1.0"
```
## Example
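As a standalone stand-in (not the crate's actual API), the core training loop the primitives above support, an MSE loss driving plain SGD, looks like this for a one-parameter linear model, with the gradient written out by hand:

```rust
// Fit y = w * x by minimizing MSE = mean((w*x - y)^2) with SGD.
// Self-contained sketch; briny_ai's real API differs.
fn fit(xs: &[f64], ys: &[f64], lr: f64, steps: usize) -> f64 {
    let mut w = 0.0;
    for _ in 0..steps {
        // dMSE/dw = mean(2 * (w*x - y) * x)
        let grad: f64 = xs
            .iter()
            .zip(ys)
            .map(|(x, y)| 2.0 * (w * x - y) * x)
            .sum::<f64>()
            / xs.len() as f64;
        w -= lr * grad; // SGD step
    }
    w
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    let ys = [2.0, 4.0, 6.0, 8.0]; // generated with w = 2
    let w = fit(&xs, &ys, 0.05, 200);
    println!("learned w = {w:.4}"); // converges to 2.0
}
```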
## Saving & Loading

Tensors can be written to and read back from the `.bpat` binary format via `save_model` and `load_model`; a round trip through `save_model` followed by `load_model` yields tensors equal to the originals.
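The idea of a binary tensor format, a shape header followed by the raw payload, can be sketched as a round trip in plain Rust. The layout below (rank, dimensions, then little-endian `f64` values) is illustrative only and is not the actual `.bpat` encoding:

```rust
// Serialize a tensor as: rank (u64) | dims (u64 each) | data (f64 each),
// all little-endian. Illustrative layout, not the real .bpat format.
fn to_bytes(shape: &[u64], data: &[f64]) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend((shape.len() as u64).to_le_bytes()); // rank
    for d in shape {
        buf.extend(d.to_le_bytes()); // each dimension
    }
    for v in data {
        buf.extend(v.to_le_bytes()); // payload
    }
    buf
}

fn from_bytes(buf: &[u8]) -> (Vec<u64>, Vec<f64>) {
    let rank = u64::from_le_bytes(buf[..8].try_into().unwrap()) as usize;
    let shape: Vec<u64> = (0..rank)
        .map(|i| u64::from_le_bytes(buf[8 + 8 * i..16 + 8 * i].try_into().unwrap()))
        .collect();
    let data = buf[8 + 8 * rank..]
        .chunks_exact(8)
        .map(|c| f64::from_le_bytes(c.try_into().unwrap()))
        .collect();
    (shape, data)
}

fn main() {
    let (shape, data) = (vec![2, 2], vec![1.0, 2.0, 3.0, 4.0]);
    let (s2, d2) = from_bytes(&to_bytes(&shape, &data));
    assert_eq!(shape, s2);
    assert_eq!(data, d2);
    println!("round-trip ok: shape {s2:?}, data {d2:?}");
}
```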
## Limitations
- Only f64 tensors are supported
- No broadcasting or shape inference yet
- No support for convolution or GPU acceleration
- Autograd is manual via backward closures
## Roadmap
- Broadcasting + batched ops
- Drop-in replacements for BLAS (SIMD)
- CUDA / WebGPU backend support
- Graph-based autograd (reuse, optimization)
- Custom layers & high-level model struct
## Contributing

PRs are welcome. The project is early-stage but cleanly structured and easy to extend; open issues or ideas any time. MIT licensed.