# briny_ai
A minimal deep learning core for scalar and tensor autograd, written in Rust.
This library provides low-level primitives for defining and training differentiable models on top of multi-dimensional arrays (`Tensor<T>`, `Tensor<T, N, D>`, `VecTensor<T>`), supporting:
- Elementwise operations with autograd
- Matrix multiplication with gradient tracking
- Loss functions (MSE)
- Stochastic gradient descent (SGD)
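As a concept illustration only (plain slices, not briny_ai's actual API), the MSE loss, its gradient, and a single SGD step can be sketched like this:

```rust
// Mean squared error: L = (1/n) * sum((p_i - t_i)^2)
fn mse(pred: &[f64], target: &[f64]) -> f64 {
    pred.iter().zip(target).map(|(p, t)| (p - t).powi(2)).sum::<f64>() / pred.len() as f64
}

// Gradient of MSE w.r.t. each prediction: dL/dp_i = 2 * (p_i - t_i) / n
fn mse_grad(pred: &[f64], target: &[f64]) -> Vec<f64> {
    let n = pred.len() as f64;
    pred.iter().zip(target).map(|(p, t)| 2.0 * (p - t) / n).collect()
}

// One SGD update: w <- w - lr * grad
fn sgd_step(weights: &mut [f64], grads: &[f64], lr: f64) {
    for (w, g) in weights.iter_mut().zip(grads) {
        *w -= lr * g;
    }
}

fn main() {
    let mut w = vec![1.0, 2.0];
    let target = vec![0.0, 0.0];
    let loss = mse(&w, &target);   // (1 + 4) / 2 = 2.5
    let g = mse_grad(&w, &target); // [1.0, 2.0]
    sgd_step(&mut w, &g, 0.1);     // w = [0.9, 1.8]
    println!("loss = {loss}, w = {w:?}");
}
```

The crate wraps this same arithmetic in tensor operations with gradient tracking; the sketch just shows the math being automated.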
briny_ai may not be a popular crate, but it is a growing one.
|  | v0.1.0 | v0.2.0 | v0.2.1 | v0.2.2 |
|---|---|---|---|---|
| Size | 9.92 KiB | 40.4 KiB | 42.3 KiB | 42.1 KiB |
| Downloads[*] | ~200 | ~130 | ~180 | ~150 |

[*] Download counts cover only the first few days after each release.
## Features
- Compact and fast `.bpat` binary model format with safe, explicit parsing
- Forward + backward computation via closures
- Extensible tensor structure with runtime shape checking and strong data validation
- CPU & GPU acceleration with structured error handling
## Usage
To use briny_ai, add it to your `Cargo.toml`, or run `cargo add briny_ai` in your terminal.
To enable SIMD, enable the `simd` feature; similarly, for GPU acceleration, enable the `wgpu` feature. To make use of these features, set the backend explicitly (the `Backend` variant names below are reconstructions from a damaged listing; check the crate docs for the exact names):

```rust
set_backend(Backend::Wgpu);
set_backend(Backend::Cpu); // default
set_backend(Backend::Gpu); // same as Wgpu
```
As of v0.3.0, all std-dependent features, such as saving and loading files, are gated behind the `std` feature flag. Similarly, all dynamic allocations are gated behind an `alloc` feature, which is enabled automatically whenever `std` is.
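Feature selection happens in `Cargo.toml`; a sketch of an entry enabling both acceleration features (the version number is illustrative):

```toml
[dependencies]
briny_ai = { version = "0.3", features = ["simd", "wgpu"] }
```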
NOTE: SIMD only works on AVX2-compatible x86_64 devices.
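Because the SIMD path is AVX2-only, code that must run on mixed hardware can probe for support at runtime using std's `is_x86_feature_detected!` macro; a portable sketch (the messages are illustrative, not crate output):

```rust
// Returns true only on x86_64 CPUs that actually support AVX2.
fn avx2_available() -> bool {
    #[cfg(target_arch = "x86_64")]
    return is_x86_feature_detected!("avx2");
    #[cfg(not(target_arch = "x86_64"))]
    return false;
}

fn main() {
    if avx2_available() {
        println!("AVX2 detected: the `simd` feature can accelerate CPU kernels");
    } else {
        println!("No AVX2: expect the plain scalar code path");
    }
}
```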
### Example

A minimal sketch of typical usage (the glob import, `Backend` variant, and `main` scaffolding are assumptions; the original listing's import paths were lost, so see the crate docs for the exact API):

```rust
use briny_ai::*; // actual module paths may differ

fn main() {
    // select a backend before running any ops
    set_backend(Backend::Cpu);
    // ...build tensors, run forward/backward closures, and apply SGD here
}
```
### Saving & Loading

A sketch of the save/load round trip (the file name and constructor arguments are illustrative; check the crate docs for exact signatures):

```rust
use briny_ai::*; // actual module paths may differ

let tensor = Tensor::new(/* shape and data */);
save_model("model.bpat", &tensor).unwrap();
let tensors = load_model("model.bpat").unwrap();
assert_eq!(tensors, tensor); // the round trip should preserve the data
```
## Why Choose briny_ai
- Unlike heavyweight frameworks like `tch-rs` or `burn`, `briny_ai` stays small and straightforward. It's perfect if you want just the core building blocks without bloat or magic.
- You get tight integration with Rust's type system and memory safety guarantees, with minimal `unsafe` code lurking under the hood. Many other Rust ML crates compromise here.
- You control exactly when and how data is validated and trusted. This explicit trust model helps you avoid sneaky bugs and security risks common in other AI libraries.
- `briny_ai` relies on your own control flow and simple GPU acceleration via `wgpu`, avoiding large, complex dependencies or runtime surprises.
- If you're building AI for environments where safety and correctness matter (IoT, secure enclaves, custom hardware), `briny_ai` is tailored for that.
- Because it's small and clear, you can adapt it to your needs without wading through complex abstractions or C++ FFI layers.
If you want a no-nonsense, Rust-native AI core that's lean, secure, and explicit, briny_ai is the right tool for you.
## Contributing
PRs welcome. This project is early-stage but cleanly structured and easy to extend. Open issues or ideas any time!
Got an Nvidia GPU or know CUDA? Your help is golden!
## License
Under the MIT License.