
Tensor Frame


A high-performance, PyTorch-like tensor library for Rust with support for multiple computational backends.

Features

  • 🚀 Multiple Backends: CPU (Rayon), WGPU, and CUDA support
  • 🔄 Automatic Backend Selection: Falls back to best available backend
  • 📐 Broadcasting: NumPy/PyTorch-style automatic broadcasting
  • 🎯 Type Safety: Rust's type system for memory safety
  • ⚡ Zero-Copy Operations: Efficient memory management
  • 🎛️ Feature Flags: Optional dependencies for different backends

Quick Start

Add to your Cargo.toml:

[dependencies]
tensor_frame = "0.0.1-alpha"

# Or, for GPU support, enable the corresponding feature instead:
# tensor_frame = { version = "0.0.1-alpha", features = ["wgpu"] }

Basic usage:

use tensor_frame::Tensor;

// Create tensors (automatically uses best backend)
let a = Tensor::ones(vec![2, 3])?;
let b = Tensor::zeros(vec![2, 3])?;

// Operations with broadcasting
let c = (a + b)?;
let sum = c.sum(None)?;

println!("Result: {:?}", sum.to_vec()?);
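Broadcasting follows the NumPy/PyTorch rule: shapes are aligned from the trailing dimension backwards, and two dimensions are compatible when they are equal or when one of them is 1 (the size-1 dimension is stretched). The helper below is a self-contained sketch of that shape rule in plain Rust; it is illustrative only and not part of the tensor_frame API:

```rust
/// Compute the broadcast shape of two shapes, NumPy/PyTorch style.
/// Returns None when the shapes are incompatible.
fn broadcast_shape(a: &[usize], b: &[usize]) -> Option<Vec<usize>> {
    let len = a.len().max(b.len());
    let mut out = Vec::with_capacity(len);
    for i in 0..len {
        // Align from the trailing dimension; missing dims count as 1.
        let da = if i < a.len() { a[a.len() - 1 - i] } else { 1 };
        let db = if i < b.len() { b[b.len() - 1 - i] } else { 1 };
        match (da, db) {
            (x, y) if x == y => out.push(x),
            (1, y) => out.push(y),
            (x, 1) => out.push(x),
            _ => return None, // e.g. [2, 3] vs [4] is an error
        }
    }
    out.reverse();
    Some(out)
}

fn main() {
    // [2, 3] + [3] broadcasts to [2, 3]
    assert_eq!(broadcast_shape(&[2, 3], &[3]), Some(vec![2, 3]));
    // [8, 1, 6] + [7, 1] broadcasts to [8, 7, 6]
    assert_eq!(broadcast_shape(&[8, 1, 6], &[7, 1]), Some(vec![8, 7, 6]));
    // [2, 3] + [4] is incompatible
    assert_eq!(broadcast_shape(&[2, 3], &[4]), None);
}
```

The same rule determines which element-wise operations in the library succeed: the example above with two [2, 3] tensors broadcasts trivially, since the shapes already match.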

Backends

CPU Backend (Default)

  • Uses Rayon for parallel computation
  • Always available
  • Good for small to medium tensors

WGPU Backend

  • Cross-platform GPU compute
  • Supports Metal, Vulkan, DX12, OpenGL
  • Enable with features = ["wgpu"]

CUDA Backend

  • NVIDIA GPU acceleration
  • Enable with features = ["cuda"]
  • Requires CUDA toolkit
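The automatic fallback described above can be pictured as a simple priority list: prefer CUDA when available, then WGPU, then the always-available CPU backend. The sketch below illustrates that policy only; it is not tensor_frame's actual internals, and the boolean availability flags are hypothetical stand-ins for real runtime device probing:

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum Backend {
    Cuda,
    Wgpu,
    Cpu,
}

/// Pick the best available backend, falling back toward CPU.
/// `cuda_ok` / `wgpu_ok` are stand-ins for real device probes.
fn select_backend(cuda_ok: bool, wgpu_ok: bool) -> Backend {
    if cuda_ok {
        Backend::Cuda
    } else if wgpu_ok {
        Backend::Wgpu
    } else {
        Backend::Cpu // the Rayon-based CPU backend is always available
    }
}

fn main() {
    assert_eq!(select_backend(true, true), Backend::Cuda);
    assert_eq!(select_backend(false, true), Backend::Wgpu);
    assert_eq!(select_backend(false, false), Backend::Cpu);
}
```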

Documentation

Full API documentation is available on docs.rs.

Examples

See the examples directory in the repository for more detailed usage.

Contributing

Contributions are welcome! Please see CONTRIBUTING.md for guidelines.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.