# Cetana

Cetana (चेतन) is a Sanskrit word meaning "consciousness" or "intelligence," reflecting the library's goal of bringing machine intelligence to your applications.
## Overview

Cetana is a Rust machine learning library built for efficient, flexible machine learning across multiple compute platforms. It provides a clean, type-safe API while maintaining high performance and memory safety.
## Features
| Core Features | Neural Networks | Compute Backends |
|---|---|---|
| Type-safe Tensor Operations | Linear & Convolutional Layers | CPU (Current) |
| Automatic Differentiation | Activation Functions | CUDA (Planned) |
| Model Serialization | Pooling Layers | MPS (Planned) |
| Loss Functions | Backpropagation | Vulkan (Planned) |
### Key Capabilities
- **Type-safe Tensor Operations** - Memory-safe tensor operations with compile-time guarantees
- **Neural Network Building Blocks** - A complete set of layers for building complex networks
- **Automatic Differentiation** - Seamless gradient computation and backpropagation
- **Model Serialization** - Save and load trained models with ease
- **Multiple Activation Functions** - ReLU, Sigmoid, Tanh, and more
- **Optimizers & Loss Functions** - An SGD optimizer with MSE, Cross Entropy, and Binary Cross Entropy losses
- **Multi-Platform Support** - CPU backend today, with GPU acceleration planned
## Example Usage
### Basic Tensor Operations

```rust
// NOTE: module paths and signatures below are illustrative and may
// differ from the crate's current API; see the docs for exact names.
use cetana::tensor::Tensor;

// Create tensors
let a = Tensor::new(vec![vec![1.0, 2.0], vec![3.0, 4.0]])?;
let b = Tensor::new(vec![vec![5.0, 6.0], vec![7.0, 8.0]])?;

// Perform operations
let c = a.add(&b)?;
let d = a.matmul(&b)?;

println!("a + b = {:?}", c);
println!("a x b = {:?}", d);
```
### Simple Neural Network

```rust
// NOTE: type names and module paths below are illustrative;
// see the crate docs for the exact API.
use cetana::nn::{Linear, MSELoss, ReLU, Sequential};
use cetana::optim::SGD;

// Create a simple neural network (layer sizes are examples)
let mut model = Sequential::new()
    .add(Linear::new(784, 128)?)
    .add(ReLU::new())
    .add(Linear::new(128, 10)?);

// Define loss function and optimizer
let loss_fn = MSELoss::new();
let mut optimizer = SGD::new(0.01);

// Training loop (sketch)
for epoch in 0..100 {
    let output = model.forward(&input)?;
    let loss = loss_fn.forward(&output, &target)?;
    loss.backward()?;
    optimizer.step(&mut model)?;
}
```
### Model Serialization

```rust
// NOTE: function paths and the file name below are illustrative.
use cetana::serialize::{load_model, save_model};

// Save trained model
save_model(&model, "model.bin")?;

// Load model later
let loaded_model = load_model("model.bin")?;
```
## Compute Backends
| Backend | Status | Platform | Features |
|---|---|---|---|
| CPU | Active | All | Full feature set |
| CUDA | Planned | NVIDIA GPUs | GPU acceleration |
| MPS | Planned | Apple Silicon | Metal Performance Shaders |
| Vulkan | Planned | Cross-platform | Vulkan compute |
## Roadmap
### Phase 1: Core Implementation (CPU)
- Basic tensor operations
- Neural network modules (Linear, Convolutional, Pooling layers)
- Activation functions (ReLU, Sigmoid, Tanh)
- Automatic differentiation and backpropagation
- Loss functions (MSE, Cross Entropy, Binary Cross Entropy)
- Model serialization (Save/Load)
- Advanced training utilities (batch processing, data loaders)
### Phase 2: GPU Acceleration
- CUDA backend for NVIDIA GPUs
- MPS backend for Apple Silicon
- Vulkan backend for cross-platform GPU compute
### Phase 3: Advanced Features
- Distributed training (multi-GPU support)
- Automatic mixed precision
- Model quantization
- Performance profiling and optimization
### Phase 4: High-Level APIs
- Model zoo and pre-trained models
- Easy-to-use training APIs
- Integration examples and comprehensive documentation
## Getting Started
### Installation

Add Cetana to your `Cargo.toml`:

```toml
[dependencies]
cetana = "0.1.0"
```
### Quick Start
use cetana::tensor::Tensor;