# Tensor Frame
A high-performance, PyTorch-like tensor library for Rust with support for multiple computational backends.
## Features
- 🚀 Multiple Backends: CPU (Rayon), WGPU, and CUDA support
- 🔄 Automatic Backend Selection: Falls back to best available backend
- 📐 Broadcasting: NumPy/PyTorch-style automatic broadcasting
- 🎯 Type Safety: Rust's type system for memory safety
- ⚡ Zero-Copy Operations: Efficient memory management
- 🎛️ Feature Flags: Optional dependencies for different backends
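Broadcasting follows the NumPy/PyTorch rule: shapes are aligned from the trailing dimension, and each pair of dimensions is compatible when the two are equal or one of them is 1 (the 1 is stretched). A minimal sketch of that shape rule in plain Rust, independent of the library's own API:

```rust
// Compute the broadcast shape of two tensor shapes, following
// NumPy/PyTorch semantics: align from the trailing dimension; each
// dimension pair must be equal, or one of them must be 1.
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = Vec::with_capacity(n);
    for i in 0..n {
        // Missing leading dimensions are treated as 1.
        let da = if i < n - a.len() { 1 } else { a[i - (n - a.len())] };
        let db = if i < n - b.len() { 1 } else { b[i - (n - b.len())] };
        if da == db || da == 1 || db == 1 {
            out.push(da.max(db));
        } else {
            return None; // incompatible shapes
        }
    }
    Some(out)
}

fn main() {
    // [2, 3] + [3]    -> [2, 3]
    assert_eq!(broadcast_shape(&[2, 3], &[3]), Some(vec![2, 3]));
    // [4, 1] + [1, 5] -> [4, 5]
    assert_eq!(broadcast_shape(&[4, 1], &[1, 5]), Some(vec![4, 5]));
    // [2, 3] + [4]    -> incompatible
    assert_eq!(broadcast_shape(&[2, 3], &[4]), None);
    println!("broadcasting rules ok");
}
```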
## Quick Start

Add to your `Cargo.toml`:

```toml
[dependencies]
tensor_frame = "0.0.1-alpha"

# For GPU support
tensor_frame = { version = "0.0.1-alpha", features = ["wgpu"] }
```
Basic usage:

```rust
use tensor_frame::Tensor;

// Create tensors (automatically uses best backend)
let a = Tensor::ones(vec![2, 3])?;
let b = Tensor::zeros(vec![2, 3])?;

// Operations with broadcasting
let c = (a + b)?;
let sum = c.sum(None)?;

println!("{:?}", sum);
```
## Backends
### CPU Backend (Default)
- Uses Rayon for parallel computation
- Always available
- Good for small to medium tensors
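The CPU backend's data-parallel pattern can be illustrated without Rayon: split the buffer into chunks, reduce each chunk on its own thread, then combine the partial results. This sketch uses `std::thread::scope` from the standard library purely for illustration; the actual backend uses Rayon's work-stealing scheduler instead:

```rust
use std::thread;

// Illustrative chunked parallel reduction: the same divide-and-combine
// pattern Rayon applies (with work stealing) inside the CPU backend.
fn parallel_sum(data: &[f32], n_threads: usize) -> f32 {
    let chunk = data.len().div_ceil(n_threads).max(1);
    thread::scope(|s| {
        // Spawn one scoped thread per chunk, each computing a partial sum.
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|c| s.spawn(move || c.iter().sum::<f32>()))
            .collect();
        // Combine the partial sums on the calling thread.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<f32> = (0..1000).map(|i| i as f32).collect();
    let total = parallel_sum(&data, 4);
    assert_eq!(total, 499_500.0); // sum of 0..=999, exact in f32
    println!("parallel sum = {total}");
}
```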
### WGPU Backend
- Cross-platform GPU compute
- Supports Metal, Vulkan, DX12, OpenGL
- Enable with `features = ["wgpu"]`
### CUDA Backend
- NVIDIA GPU acceleration
- Enable with `features = ["cuda"]`
- Requires CUDA toolkit
## Documentation
- 📖 Complete Guide - Comprehensive documentation with tutorials
- 🚀 Getting Started - Quick start guide
- 📚 API Reference - Detailed API documentation
- 💡 Examples - Practical examples and tutorials
- ⚡ Performance Guide - Optimization tips and benchmarks
- 🔧 Backend Guides - CPU, WGPU, and CUDA backend details
## Examples

See the `examples/` directory for more detailed usage.
## Contributing
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.
## License
Licensed under either of
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.