# torsh-sparse
Sparse tensor operations for ToRSh, leveraging scirs2-sparse for efficient sparse matrix computations.
## Overview
This crate provides comprehensive sparse tensor support:
- **Sparse Formats**: COO, CSR, CSC, and hybrid formats
- **Operations**: Sparse matrix multiplication, addition, transpose
- **Conversions**: Dense-to-sparse conversion and conversion between sparse formats
- **GPU Support**: CUDA sparse operations via cuSPARSE
- **Integration**: Seamless integration with scirs2-sparse
## Usage
### Creating Sparse Tensors
```rust
use torsh_sparse::prelude::*;

// From COO format (coordinate list); the data here is illustrative
let indices = tensor!([[0, 1, 2], [2, 0, 1]]); // [[rows], [cols]]
let values = tensor!([1.0, 2.0, 3.0]);
let size = vec![3, 3];
let sparse_coo = sparse_coo_tensor(indices, values, size)?;

// From a dense tensor
let dense = tensor!([[0.0, 1.0], [2.0, 0.0]]);
let sparse = dense.to_sparse()?;

// From CSR format (compressed sparse row)
let crow_indices = tensor!([0, 1, 2]); // row pointers
let col_indices = tensor!([1, 0]);     // column indices
let values = tensor!([1.0, 2.0]);
let sparse_csr = sparse_csr_tensor(crow_indices, col_indices, values)?;
```
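To make the encoding concrete, the snippet below reconstructs, in plain Rust with no torsh types, the dense matrix that the COO triplets above describe: each `(row, col, value)` triple addresses one non-zero cell.

```rust
// Plain-Rust illustration of the COO layout used above (no torsh types).
let rows = [0, 1, 2];
let cols = [2, 0, 1];
let vals = [1.0, 2.0, 3.0];

let mut dense = [[0.0f64; 3]; 3];
for k in 0..vals.len() {
    dense[rows[k]][cols[k]] = vals[k]; // scatter each triple into place
}
assert_eq!(dense, [[0.0, 0.0, 1.0], [2.0, 0.0, 0.0], [0.0, 3.0, 0.0]]);
```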
### Sparse Operations
```rust
// Sparse matrix multiplication (leveraging scirs2-sparse)
let a = sparse_coo_tensor(indices_a, values_a, vec![3, 3])?;
let b = sparse_coo_tensor(indices_b, values_b, vec![3, 3])?;
let c = mm(&a, &b)?;

// Sparse-dense multiplication
let sparse_matrix = load_sparse_matrix("matrix.mtx")?; // placeholder path
let dense_vector = randn(&[3]);
let result = mv(&sparse_matrix, &dense_vector)?;

// Element-wise operations (operands must share a shape)
let sum = add(&a, &b)?;
let product = mul(&a, &b)?;

// Transpose
let transposed = sparse_matrix.t()?;
```
### Format Conversions
```rust
// Convert between formats
let coo = create_coo_tensor(indices, values, size)?;
let csr = coo.to_csr()?;
let csc = coo.to_csc()?;

// Convert to dense
let dense = sparse_tensor.to_dense()?;

// Hybrid format for better performance on mixed access patterns
let hybrid = sparse_tensor.to_hybrid()?;
```
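Under the hood, COO-to-CSR conversion amounts to a counting sort on the row indices. Here is a minimal, dependency-free sketch of the idea (illustrative, not the crate's implementation); the `crow` array it builds is exactly the row-pointer tensor passed to the CSR constructor earlier.

```rust
// COO -> CSR in three passes: histogram the rows, prefix-sum the counts
// into row pointers, then scatter each entry into its row's next slot.
fn coo_to_csr(
    rows: &[usize],
    cols: &[usize],
    vals: &[f64],
    nrows: usize,
) -> (Vec<usize>, Vec<usize>, Vec<f64>) {
    let nnz = vals.len();
    let mut crow = vec![0usize; nrows + 1];
    for &r in rows {
        crow[r + 1] += 1; // count entries per row
    }
    for i in 0..nrows {
        crow[i + 1] += crow[i]; // prefix sum -> row pointers
    }
    let (mut col, mut val) = (vec![0usize; nnz], vec![0.0; nnz]);
    let mut next = crow.clone(); // next free slot per row
    for k in 0..nnz {
        let slot = next[rows[k]];
        col[slot] = cols[k];
        val[slot] = vals[k];
        next[rows[k]] += 1;
    }
    (crow, col, val)
}
```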
### Advanced Sparse Operations
```rust
// Sparse linear algebra (via scirs2-sparse)
use torsh_sparse::linalg::*;

// Sparse LU decomposition
let (l, u) = sparse_lu(&a)?;

// Sparse Cholesky decomposition
let l = sparse_cholesky(&a)?;

// Solve the sparse linear system a * x = b
let x = sparse_solve(&a, &b)?;

// Iterative solvers
let x = conjugate_gradient(&a, &b)?;
let x = gmres(&a, &b)?;
```
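For intuition about what the iterative solvers do, here is a compact, matrix-free conjugate gradient in plain Rust. It takes the matrix as a `matvec` closure, so any sparse format can back it; this is a teaching sketch for symmetric positive-definite systems, not the crate's solver.

```rust
// Conjugate gradient for A x = b, with A supplied as a matvec closure.
fn cg<F: Fn(&[f64]) -> Vec<f64>>(matvec: F, b: &[f64], iters: usize, tol: f64) -> Vec<f64> {
    let n = b.len();
    let mut x = vec![0.0; n];
    let mut r = b.to_vec(); // residual: b - A*0 = b
    let mut p = r.clone();  // search direction
    let mut rs_old: f64 = r.iter().map(|v| v * v).sum();
    for _ in 0..iters {
        let ap = matvec(&p);
        let alpha = rs_old / p.iter().zip(&ap).map(|(pi, api)| pi * api).sum::<f64>();
        for i in 0..n {
            x[i] += alpha * p[i];
            r[i] -= alpha * ap[i];
        }
        let rs_new: f64 = r.iter().map(|v| v * v).sum();
        if rs_new.sqrt() < tol {
            break; // converged: residual is small enough
        }
        for i in 0..n {
            p[i] = r[i] + (rs_new / rs_old) * p[i];
        }
        rs_old = rs_new;
    }
    x
}
```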
### Sparse Neural Network Layers
```rust
use torsh_sparse::nn::*;

// Sparse linear layer (type names below are indicative)
let sparse_linear = SparseLinear::new(in_features, out_features)?;

// Sparse embedding
let sparse_embedding = SparseEmbedding::new(num_embeddings, embedding_dim)?;

// Graph convolution (for GNNs)
let gcn = GraphConv::new(in_features, out_features)?;
```
### GPU Acceleration
```rust
// Move a sparse tensor to the GPU
let gpu_sparse = sparse_tensor.cuda()?;

// cuSPARSE-backed operations
let result = spmm(&gpu_sparse, &dense_matrix)?; // sparse x dense matrix
let result = spmv(&gpu_sparse, &dense_vector)?; // sparse x dense vector

// Batched sparse operations
let batch_sparse = create_batch_sparse_tensors(&tensors)?;
let results = batch_spmm(&batch_sparse, &dense_batch)?;
```
### Sparse Patterns and Masks
```rust
// Create structured sparsity patterns
let block_sparse_pattern = block_sparse(block_size);
let banded_pattern = banded(bandwidth);

// Apply a sparsity pattern to a dense tensor
let sparse = dense_tensor.apply_sparsity(&block_sparse_pattern)?;

// Pruning utilities: drop the smallest-magnitude weights
let pruned = prune_magnitude(&dense_tensor, sparsity_ratio)?;
```
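Magnitude pruning itself is simple to state: find the absolute-value cutoff for the requested sparsity and zero every weight below it. A plain-Rust sketch of the idea (the crate's `prune_magnitude` above is the real entry point):

```rust
// Zero out the `sparsity` fraction of weights with the smallest |value|.
fn prune(weights: &mut [f32], sparsity: f32) {
    if weights.is_empty() {
        return;
    }
    let mut mags: Vec<f32> = weights.iter().map(|w| w.abs()).collect();
    mags.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let cutoff = ((weights.len() as f32) * sparsity) as usize;
    let threshold = mags[cutoff.min(weights.len() - 1)];
    for w in weights.iter_mut() {
        if w.abs() < threshold {
            *w = 0.0; // pruned
        }
    }
}
```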
### Sparse Gradients
```rust
use torsh_sparse::optim::*;

// Sparse optimizer for sparse gradients (type names are indicative)
let sparse_adam = SparseAdam::new(params, lr)?;

// Gradient accumulation for sparse tensors
let grad_accumulator = SparseGradAccumulator::new();
grad_accumulator.accumulate(&sparse_grad)?;
```
### Utilities
```rust
// Analyze sparsity (stat field names are illustrative)
let stats = analyze_sparsity(&sparse_tensor)?;
println!("density: {:.2}%", stats.density * 100.0);
println!("non-zeros: {}", stats.nnz);
println!("memory: {} bytes", stats.memory_bytes);

// Visualize the sparsity structure
visualize(&sparse_tensor)?;

// Benchmark sparse operations across formats
let benchmark = SparseBenchmark::new();
let results = benchmark.compare_formats(&sparse_tensor)?;
```
## Integration with SciRS2
This crate fully leverages scirs2-sparse for:
- Optimized sparse BLAS operations
- Efficient sparse matrix formats
- Hardware-accelerated sparse computations
- Advanced sparse linear algebra
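As a mental model for what those optimized kernels compute, here is a dependency-free CSR sparse matrix-vector product. The `Csr` struct and `spmv` function are illustrative only, not scirs2-sparse or torsh types; they show why CSR makes row-wise kernels fast: each row's entries sit in one contiguous slice.

```rust
/// Minimal CSR container (field names illustrative).
struct Csr {
    crow: Vec<usize>, // row pointers, length nrows + 1
    col: Vec<usize>,  // column index of each stored value
    val: Vec<f64>,    // the stored (non-zero) values
}

/// y = A * x: walk each row's contiguous (col, val) slice once.
fn spmv(a: &Csr, x: &[f64]) -> Vec<f64> {
    let nrows = a.crow.len() - 1;
    let mut y = vec![0.0; nrows];
    for row in 0..nrows {
        for k in a.crow[row]..a.crow[row + 1] {
            y[row] += a.val[k] * x[a.col[k]];
        }
    }
    y
}
```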
## Performance Tips
- Choose the right format for your access pattern
- Use CSR for row-wise operations, CSC for column-wise
- Consider hybrid formats for mixed access patterns
- Use batched operations when possible
- Profile different sparse formats for your use case (a minimal timing harness is sketched below)
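For the profiling tip, a harness as small as the one below is enough to compare formats on your own workload; `time_it` is a generic, hypothetical helper, not a crate API.

```rust
use std::time::Instant;

// Average the wall-clock cost of a kernel over many calls.
fn time_it<F: FnMut()>(label: &str, mut op: F) {
    let start = Instant::now();
    for _ in 0..100 {
        op();
    }
    println!("{label}: {:?} per call", start.elapsed() / 100);
}

// e.g. time_it("csr mv", || { let _ = mv(&csr_matrix, &x); });
```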
## License
Licensed under the Apache License, Version 2.0. See LICENSE for details.