# briny_ai

A minimal, dependency-free deep learning core for scalar and tensor autograd, written in Rust.

This library provides low-level primitives for defining and training differentiable models on top of multi-dimensional arrays (`Tensor<T>`), supporting:
- Elementwise operations with autograd
- Matrix multiplication with gradient tracking
- Loss functions (MSE)
- Stochastic gradient descent (SGD)
- JSON and binary-based tensor serialization
- Compile-time tensor creation macros
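To make the second bullet concrete, here is a self-contained, plain-Rust sketch of matrix multiplication with gradient tracking. This is illustrative only and is not `briny_ai`'s API; matrices are flat row-major slices.

```rust
// Forward: C = A·B, with A: m×k, B: k×n, row-major storage.
fn matmul(a: &[f64], b: &[f64], m: usize, k: usize, n: usize) -> Vec<f64> {
    let mut c = vec![0.0; m * n];
    for i in 0..m {
        for p in 0..k {
            for j in 0..n {
                c[i * n + j] += a[i * k + p] * b[p * n + j];
            }
        }
    }
    c
}

// Backward: given dL/dC, return (dL/dA, dL/dB), using
//   dL/dA = dL/dC · Bᵀ   and   dL/dB = Aᵀ · dL/dC
fn matmul_backward(
    a: &[f64], b: &[f64], grad_c: &[f64],
    m: usize, k: usize, n: usize,
) -> (Vec<f64>, Vec<f64>) {
    let mut grad_a = vec![0.0; m * k];
    let mut grad_b = vec![0.0; k * n];
    for i in 0..m {
        for j in 0..n {
            let g = grad_c[i * n + j];
            for p in 0..k {
                grad_a[i * k + p] += g * b[p * n + j];
                grad_b[p * n + j] += a[i * k + p] * g;
            }
        }
    }
    (grad_a, grad_b)
}

fn main() {
    // A: 1×2, B: 2×1 → C: 1×1 = [1·3 + 2·4] = [11]
    let a = [1.0, 2.0];
    let b = [3.0, 4.0];
    let c = matmul(&a, &b, 1, 2, 1);
    assert_eq!(c, vec![11.0]);
    // With dL/dC = [1], dL/dA = Bᵀ and dL/dB = Aᵀ.
    let (ga, gb) = matmul_backward(&a, &b, &[1.0], 1, 2, 1);
    assert_eq!(ga, vec![3.0, 4.0]);
    assert_eq!(gb, vec![1.0, 2.0]);
    println!("C = {:?}, dA = {:?}, dB = {:?}", c, ga, gb);
}
```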
## Features
- Compact and fast `.bpat` binary model format
- Forward + backward computation via closures
- Extensible tensor structure with runtime shape checking
- CPU & GPU acceleration
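The "forward + backward computation via closures" design can be sketched in plain Rust: each op returns its result together with a closure that maps the upstream gradient to gradients for each input. This is a hypothetical sketch of the idea, not the crate's actual types.

```rust
// A single differentiable op: forward result plus a backward closure.
fn mul(a: f64, b: f64) -> (f64, Box<dyn Fn(f64) -> (f64, f64)>) {
    // d(a*b)/da = b and d(a*b)/db = a, each scaled by the upstream gradient.
    (a * b, Box::new(move |grad_out| (grad_out * b, grad_out * a)))
}

fn main() {
    let (y, backward) = mul(3.0, 4.0);
    let (da, db) = backward(1.0);
    println!("y = {y}, da = {da}, db = {db}"); // y = 12, da = 4, db = 3
}
```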
## Usage
To use `briny_ai`, add the following to your `Cargo.toml`:

```toml
[dependencies]
briny_ai = "0.2.0"
```
## Example
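As a stand-in, here is a self-contained, plain-Rust sketch of the training workflow the library is built for: an MSE loss, its gradient, and SGD updates. It is illustrative only and uses none of `briny_ai`'s actual types.

```rust
// Mean squared error over a batch of predictions.
fn mse(pred: &[f64], target: &[f64]) -> f64 {
    pred.iter().zip(target)
        .map(|(p, t)| (p - t).powi(2))
        .sum::<f64>() / pred.len() as f64
}

// Gradient of MSE with respect to each prediction: 2(p - t)/n.
fn mse_grad(pred: &[f64], target: &[f64]) -> Vec<f64> {
    let n = pred.len() as f64;
    pred.iter().zip(target)
        .map(|(p, t)| 2.0 * (p - t) / n)
        .collect()
}

// One stochastic gradient descent step: p ← p − lr·g.
fn sgd_step(params: &mut [f64], grads: &[f64], lr: f64) {
    for (p, g) in params.iter_mut().zip(grads) {
        *p -= lr * g;
    }
}

fn main() {
    let target = [1.0, 2.0];
    let mut pred = [0.0, 0.0];
    for _ in 0..100 {
        let g = mse_grad(&pred, &target);
        sgd_step(&mut pred, &g, 0.1);
    }
    // The loop converges geometrically toward the target.
    assert!(mse(&pred, &target) < 1e-6);
    println!("final loss: {:.8}", mse(&pred, &target));
}
```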
## Saving & Loading
```rust
// NOTE: the module path and exact signatures below are reconstructions
// from this README's prose; check the crate's docs for the real API.
use briny_ai::tensors::{Tensor, save_model, load_model};

let tensor = Tensor::new(vec![2, 2], vec![1.0, 2.0, 3.0, 4.0]);
save_model("model.bpat", &[tensor.clone()]).unwrap();
let tensors = load_model("model.bpat").unwrap();
assert_eq!(tensors[0], tensor);
```
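For intuition about what a binary tensor record stores, here is a self-contained round-trip sketch. The layout (rank, then dims, then little-endian `f64` data) is an assumption chosen for illustration, not the actual `.bpat` format.

```rust
// Assumed record layout: [rank: u64][dims: u64 × rank][data: f64 × Π(dims)],
// all little-endian. Not the crate's real .bpat format.
fn encode(shape: &[u64], data: &[f64]) -> Vec<u8> {
    let mut buf = Vec::new();
    buf.extend_from_slice(&(shape.len() as u64).to_le_bytes());
    for &d in shape {
        buf.extend_from_slice(&d.to_le_bytes());
    }
    for &x in data {
        buf.extend_from_slice(&x.to_le_bytes());
    }
    buf
}

fn decode(buf: &[u8]) -> (Vec<u64>, Vec<f64>) {
    // Pull 8 bytes at a fixed offset as an array.
    let take = |b: &[u8], at: usize| <[u8; 8]>::try_from(&b[at..at + 8]).unwrap();
    let rank = u64::from_le_bytes(take(buf, 0)) as usize;
    let mut off = 8;
    let mut shape = Vec::with_capacity(rank);
    for _ in 0..rank {
        shape.push(u64::from_le_bytes(take(buf, off)));
        off += 8;
    }
    let len: u64 = shape.iter().product();
    let mut data = Vec::with_capacity(len as usize);
    for _ in 0..len {
        data.push(f64::from_le_bytes(take(buf, off)));
        off += 8;
    }
    (shape, data)
}

fn main() {
    let shape = vec![2, 2];
    let data = vec![1.0, 2.0, 3.0, 4.0];
    let bytes = encode(&shape, &data);
    assert_eq!(decode(&bytes), (shape, data));
    println!("round-trip ok ({} bytes)", bytes.len());
}
```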
## Limitations
- Only `f64` tensors are supported
- No broadcasting or shape inference yet
- No support for convolution or CUDA acceleration
- Autograd is manual, via backward closures
## Roadmap
- Broadcasting + batched ops
- CUDA backend support
- Graph-based autograd (reuse, optimization)
- Custom layers & high-level model struct
## Contributing
PRs welcome. This project is early-stage but cleanly structured and easy to extend. Open issues or ideas any time!
Got an Nvidia GPU or know CUDA? Your help is golden!
## License
Licensed under the MIT License.