# torsh-functional

Functional operations for ToRSh tensors, providing a PyTorch-compatible functional API.
## Overview

This crate provides a comprehensive set of functional operations on tensors:

- **Mathematical Operations**: element-wise, reduction, and special functions
- **Neural Network Functions**: activations, normalization, loss functions
- **Linear Algebra**: matrix operations, decompositions, solvers
- **Signal Processing**: FFT, convolution, filtering
- **Image Operations**: transforms, filters, augmentations
**Note**: This crate integrates with several scirs2 modules (`scirs2-linalg`, `scirs2-special`, `scirs2-signal`, `scirs2-fft`) for optimized implementations.
## Usage

The snippets below sketch typical calls; argument values are illustrative.

### Mathematical Operations

```rust
use torsh::prelude::*; // assuming the prelude exports `tensor!`, `randn`, etc.
use torsh_functional as F;

// Element-wise operations
let a = tensor!([1.0, 2.0, 3.0]);
let b = tensor!([4.0, 5.0, 6.0]);
let sum = F::add(&a, &b)?;
let product = F::mul(&a, &b)?;
let power = F::pow(&a, 2.0)?;

// Trigonometric functions
let angles = tensor!([0.0, 0.5, 1.0]);
let sines = F::sin(&angles)?;
let cosines = F::cos(&angles)?;

// Reductions
let sum_all = F::sum(&a)?;
let mean = F::mean(&a)?;
let std = F::std(&a, true)?;          // unbiased
let max_vals = F::amax(&a, 0, true)?; // keepdim
```
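The reduction semantics can be sketched on plain slices, independent of ToRSh's tensor type; here `unbiased` applies Bessel's correction (divide by n - 1), matching the convention noted in the snippet above:

```rust
/// Mean of a slice.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Standard deviation; `unbiased` uses Bessel's correction (divide by n - 1).
fn std_dev(xs: &[f64], unbiased: bool) -> f64 {
    let m = mean(xs);
    let denom = if unbiased { xs.len() - 1 } else { xs.len() } as f64;
    (xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / denom).sqrt()
}

fn main() {
    let xs = [1.0, 2.0, 3.0, 4.0];
    println!("mean = {}", mean(&xs));          // 2.5
    println!("std  = {}", std_dev(&xs, true)); // unbiased, ~1.291
}
```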
### Neural Network Functions

```rust
// Activation functions
let x = randn(&[10, 20])?;
let relu = F::relu(&x)?;
let sigmoid = F::sigmoid(&x)?;
let tanh = F::tanh(&x)?;
let gelu = F::gelu(&x)?;
let swish = F::silu(&x)?;

// Softmax with temperature
let logits = randn(&[10, 5])?;
let probs = F::softmax(&logits, -1)?;
let log_probs = F::log_softmax(&logits, -1)?;

// Normalization
let normalized = F::layer_norm(&x, &[20])?;
let batch_normed = F::batch_norm(&x)?;

// Dropout
let dropped = F::dropout(&x, 0.5, true)?; // training mode
```
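Softmax with temperature divides the logits by T before normalizing; a minimal, numerically stable sketch on plain slices (independent of the tensor API, not the crate's implementation):

```rust
/// Numerically stable softmax with temperature T: softmax(x / T).
/// Subtracting the maximum before exponentiating avoids overflow.
fn softmax_temp(logits: &[f64], t: f64) -> Vec<f64> {
    let scaled: Vec<f64> = logits.iter().map(|x| x / t).collect();
    let max = scaled.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = scaled.iter().map(|x| (x - max).exp()).collect();
    let total: f64 = exps.iter().sum();
    exps.iter().map(|e| e / total).collect()
}

fn main() {
    // Higher temperature flattens the distribution; lower sharpens it.
    println!("{:?}", softmax_temp(&[1.0, 2.0, 3.0], 1.0));
    println!("{:?}", softmax_temp(&[1.0, 2.0, 3.0], 10.0));
}
```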
### Loss Functions

```rust
// Classification losses
let logits = model.forward(&input)?;
let targets = tensor!([0i64, 2, 1]);
let ce_loss = F::cross_entropy(&logits, &targets)?;
let nll_loss = F::nll_loss(&F::log_softmax(&logits, -1)?, &targets)?;

// Regression losses
let predictions = model.forward(&input)?;
let targets = randn(&[10, 1])?;
let mse = F::mse_loss(&predictions, &targets)?;
let mae = F::l1_loss(&predictions, &targets)?;
let huber = F::smooth_l1_loss(&predictions, &targets)?;

// Binary classification
let binary_logits = model.forward(&input)?;
let binary_targets = rand(&[10, 1])?;
let bce = F::binary_cross_entropy_with_logits(&binary_logits, &binary_targets)?;
```
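Losses of the `binary_cross_entropy_with_logits` kind fold the sigmoid into the loss for numerical stability. A scalar sketch of the standard stable form (this is the formulation documented for PyTorch's version; ToRSh is assumed to behave equivalently):

```rust
/// Numerically stable binary cross-entropy on a raw logit x and target z in [0, 1]:
///   loss = max(x, 0) - x*z + ln(1 + exp(-|x|))
/// This never exponentiates a large positive number, unlike the naive
/// -[z*ln(sigmoid(x)) + (1-z)*ln(1 - sigmoid(x))].
fn bce_with_logits(logit: f64, target: f64) -> f64 {
    logit.max(0.0) - logit * target + (1.0 + (-logit.abs()).exp()).ln()
}

fn main() {
    // A confident, correct prediction yields a loss near zero.
    println!("{}", bce_with_logits(10.0, 1.0));
    // A zero logit (probability 0.5) against target 0.5 yields ln 2.
    println!("{}", bce_with_logits(0.0, 0.5));
}
```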
### Convolution and Pooling

```rust
// 2D Convolution
let input = randn(&[1, 3, 224, 224])?; // NCHW
let weight = randn(&[64, 3, 3, 3])?;   // out_channels x in_channels x kH x kW
let bias = randn(&[64])?;
let output = F::conv2d(&input, &weight, Some(&bias), 1, 1)?; // stride, padding

// Pooling operations
let pooled = F::max_pool2d(&output, 2)?;
let avg_pooled = F::avg_pool2d(&output, 2)?;
let adaptive = F::adaptive_avg_pool2d(&output, (1, 1))?; // global pooling
```
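The output spatial size of `conv2d` and the pooling operations follows the usual formula; a small helper to compute it (a sketch for intuition, not part of the crate):

```rust
/// Output spatial size of a 2-D convolution or pooling along one axis:
/// floor((in + 2*pad - dilation*(kernel - 1) - 1) / stride) + 1.
fn conv_out_size(input: usize, kernel: usize, stride: usize, pad: usize, dilation: usize) -> usize {
    (input + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1
}

fn main() {
    // 224x224 input, 3x3 kernel, stride 1, padding 1: size is preserved.
    println!("{}", conv_out_size(224, 3, 1, 1, 1)); // 224
    // Stride 2 halves the resolution.
    println!("{}", conv_out_size(224, 3, 2, 1, 1)); // 112
}
```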
### Linear Algebra

```rust
// Matrix operations (leveraging scirs2-linalg)
let a = randn(&[3, 3])?;
let b = randn(&[3, 3])?;
let c = F::matmul(&a, &b)?;
let det = F::det(&a)?;
let inv = F::inverse(&a)?;

// Eigenvalues and eigenvectors
let (eigenvalues, eigenvectors) = F::eig(&a)?;

// SVD
let (u, s, vt) = F::svd(&a)?;

// Solve linear systems
let x = F::solve(&a, &b)?; // Solve Ax = b
```
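`solve` finds the x with Ax = b. For intuition, a self-contained 2x2 version via Cramer's rule (production solvers factorize the matrix instead, but the contract is the same):

```rust
/// Solve a 2x2 system Ax = b via Cramer's rule: x_i = det(A_i) / det(A),
/// where A_i is A with column i replaced by b.
fn solve2(a: [[f64; 2]; 2], b: [f64; 2]) -> Option<[f64; 2]> {
    let det = a[0][0] * a[1][1] - a[0][1] * a[1][0];
    if det.abs() < 1e-12 {
        return None; // singular matrix: no unique solution
    }
    let x0 = (b[0] * a[1][1] - a[0][1] * b[1]) / det;
    let x1 = (a[0][0] * b[1] - b[0] * a[1][0]) / det;
    Some([x0, x1])
}

fn main() {
    // x + 2y = 5, 3x + 4y = 11  =>  x = 1, y = 2
    println!("{:?}", solve2([[1.0, 2.0], [3.0, 4.0]], [5.0, 11.0]));
}
```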
### Signal Processing

```rust
// FFT operations (leveraging scirs2-fft and scirs2-signal)
let signal = randn(&[1024])?;
let spectrum = F::fft(&signal)?;
let reconstructed = F::ifft(&spectrum)?;

// 2D FFT for images
let image = randn(&[256, 256])?;
let freq_domain = F::fft2(&image)?;

// Convolution via FFT
let kernel = randn(&[64])?;
let filtered = F::conv1d_fft(&signal, &kernel)?;
```
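`fft` computes the discrete Fourier transform; as a reference for what it returns, a naive O(n^2) DFT on real input (an FFT produces the same values in O(n log n)):

```rust
/// Naive discrete Fourier transform of a real signal, returning (re, im)
/// pairs: X[k] = sum_t x[t] * exp(-2*pi*i*k*t/n).
fn dft(signal: &[f64]) -> Vec<(f64, f64)> {
    let n = signal.len();
    (0..n)
        .map(|k| {
            let mut re = 0.0;
            let mut im = 0.0;
            for (t, &x) in signal.iter().enumerate() {
                let angle = -2.0 * std::f64::consts::PI * (k * t) as f64 / n as f64;
                re += x * angle.cos();
                im += x * angle.sin();
            }
            (re, im)
        })
        .collect()
}

fn main() {
    // The spectrum of a unit impulse is flat: every bin is 1 + 0i.
    println!("{:?}", dft(&[1.0, 0.0, 0.0, 0.0]));
}
```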
### Advanced Operations

```rust
// Interpolation
let upsampled = F::interpolate(&image, &[512, 512], "bilinear")?;

// Affine grid
let theta = tensor!([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]); // identity transform
let grid = F::affine_grid(&theta, &[1, 3, 224, 224])?;

// Grid sampling
let sampled = F::grid_sample(&input, &grid)?;
```
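Bilinear `interpolate` and `grid_sample` both reduce to the same per-pixel rule: blend the four neighbouring values by their fractional distances. A sketch on a plain 2-D grid (for intuition, not the crate's code):

```rust
/// Bilinear interpolation at fractional coordinates (y, x) in a 2-D grid,
/// clamping at the border. Each output is a distance-weighted blend of the
/// four surrounding samples.
fn bilinear(img: &[Vec<f64>], y: f64, x: f64) -> f64 {
    let (y0, x0) = (y.floor() as usize, x.floor() as usize);
    let (y1, x1) = ((y0 + 1).min(img.len() - 1), (x0 + 1).min(img[0].len() - 1));
    let (dy, dx) = (y - y0 as f64, x - x0 as f64);
    img[y0][x0] * (1.0 - dy) * (1.0 - dx)
        + img[y0][x1] * (1.0 - dy) * dx
        + img[y1][x0] * dy * (1.0 - dx)
        + img[y1][x1] * dy * dx
}

fn main() {
    let img = vec![vec![0.0, 1.0], vec![2.0, 3.0]];
    // The exact center averages all four corners: 1.5.
    println!("{}", bilinear(&img, 0.5, 0.5));
}
```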
### Utilities

```rust
// Tensor manipulation
let flattened = F::flatten(&x)?;
let reshaped = F::reshape(&x, &[4, 50])?;
let permuted = F::permute(&x, &[1, 0])?;

// Padding
let padded = F::pad(&x, &[1, 1, 1, 1])?;

// Concatenation and stacking
let tensors = vec![a.clone(), b.clone()];
let concatenated = F::cat(&tensors, 0)?;
let stacked = F::stack(&tensors, 0)?;

// Splitting
let chunks = F::chunk(&x, 4, 0)?;
let splits = F::split(&x, 5, 0)?;
```
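`flatten`, `reshape`, and `permute` on contiguous tensors are bookkeeping over the row-major layout contract; a sketch of the stride arithmetic (not part of the crate):

```rust
/// Row-major (C-order) strides for a shape: the last axis is contiguous,
/// and each earlier stride is the product of the later extents.
fn row_major_strides(shape: &[usize]) -> Vec<usize> {
    let mut strides = vec![1; shape.len()];
    for i in (0..shape.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * shape[i + 1];
    }
    strides
}

/// Flat offset of a multi-dimensional index: sum(idx[d] * stride[d]).
fn flat_index(idx: &[usize], strides: &[usize]) -> usize {
    idx.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    let strides = row_major_strides(&[2, 3, 4]);
    println!("{:?}", strides);                   // [12, 4, 1]
    println!("{}", flat_index(&[1, 2, 3], &strides)); // 12 + 8 + 3 = 23
}
```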
## Integration with SciRS2

This crate leverages multiple scirs2 modules for optimized implementations:

- **scirs2-linalg**: linear algebra operations (matrix multiplication, decompositions)
- **scirs2-special**: special mathematical functions (Bessel, gamma, etc.)
- **scirs2-signal**: signal processing operations
- **scirs2-fft**: Fast Fourier Transform operations
- **scirs2-core**: SIMD operations and memory management
- **scirs2-neural**: neural-network-specific operations
## License

Licensed under the Apache License, Version 2.0. See LICENSE for details.