Crate scirs2_sparse

§SciRS2 Sparse - Sparse Matrix Operations

scirs2-sparse provides comprehensive sparse matrix formats and operations modeled after SciPy’s sparse module. It offers CSR, CSC, COO, DOK, LIL, DIA, and BSR formats, together with efficient algorithms for large-scale sparse linear algebra, eigenvalue problems, and graph operations.

§🎯 Key Features

  • SciPy Compatibility: Drop-in replacement for scipy.sparse classes
  • Multiple Formats: CSR, CSC, COO, DOK, LIL, DIA, BSR with easy conversion
  • Efficient Operations: Sparse matrix-vector/matrix multiplication
  • Linear Solvers: Direct (LU, Cholesky) and iterative (CG, GMRES) solvers
  • Eigenvalue Solvers: ARPACK-based sparse eigenvalue computation
  • Array API: Modern NumPy-compatible array interface (recommended)

§📦 Module Overview

| SciRS2 Format | SciPy Equivalent | Description |
|---------------|------------------|-------------|
| CsrArray | scipy.sparse.csr_array | Compressed Sparse Row (efficient row slicing) |
| CscArray | scipy.sparse.csc_array | Compressed Sparse Column (efficient column slicing) |
| CooArray | scipy.sparse.coo_array | Coordinate format (efficient construction) |
| DokArray | scipy.sparse.dok_array | Dictionary of Keys (efficient element access) |
| LilArray | scipy.sparse.lil_array | List of Lists (efficient incremental construction) |
| DiaArray | scipy.sparse.dia_array | Diagonal format (efficient banded matrices) |
| BsrArray | scipy.sparse.bsr_array | Block Sparse Row (efficient block operations) |
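The efficiency notes in the table follow from how each format lays out its data. As a std-only illustration (a sketch of the standard CSR layout, not this crate's internal types), here is why Compressed Sparse Row storage makes row-wise matrix-vector products cheap: `indptr` marks where each row's entries begin, so a row is a contiguous slice of `indices` and `data`.

```rust
// Sketch of the standard CSR layout (std-only; not the crate's API).
struct Csr {
    indptr: Vec<usize>,  // len = nrows + 1; row r spans indptr[r]..indptr[r+1]
    indices: Vec<usize>, // column index of each stored value
    data: Vec<f64>,      // the stored (nonzero) values
}

impl Csr {
    // Sparse matrix-vector product: y = A * x
    fn matvec(&self, x: &[f64]) -> Vec<f64> {
        let nrows = self.indptr.len() - 1;
        let mut y = vec![0.0; nrows];
        for row in 0..nrows {
            // Each row's entries are contiguous -- this is the CSR advantage.
            for k in self.indptr[row]..self.indptr[row + 1] {
                y[row] += self.data[k] * x[self.indices[k]];
            }
        }
        y
    }
}

fn main() {
    // The 3x3 matrix used in the Quick Start below:
    // [[1, 0, 2],
    //  [0, 0, 3],
    //  [4, 5, 0]]
    let a = Csr {
        indptr: vec![0, 2, 3, 5],
        indices: vec![0, 2, 2, 0, 1],
        data: vec![1.0, 2.0, 3.0, 4.0, 5.0],
    };
    let y = a.matvec(&[1.0, 1.0, 1.0]);
    assert_eq!(y, vec![3.0, 3.0, 9.0]);
}
```

CSC is the same idea transposed (columns contiguous), which is why it favors column slicing instead.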

§🚀 Quick Start

```toml
[dependencies]
scirs2-sparse = "0.2.0"
```

```rust
use scirs2_sparse::csr_array::CsrArray;

// Create a sparse matrix from triplets (row, col, value)
let rows = vec![0, 0, 1, 2, 2];
let cols = vec![0, 2, 2, 0, 1];
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let sparse = CsrArray::from_triplets(&rows, &cols, &data, (3, 3), false).expect("Operation failed");
```

§🔒 Version: 0.2.0 (February 8, 2026)

§Matrix vs. Array API

This module provides both a matrix-based API and an array-based API, following SciPy’s transition to a more NumPy-compatible array interface.

When using the array interface (e.g., CsrArray), please note that:

  • `*` performs element-wise multiplication, not matrix multiplication
  • Use the `dot()` method for matrix multiplication
  • Operations like `sum` produce arrays, not matrices
  • Array-style slicing operations return scalars, 1D arrays, or 2D arrays
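The first two points above can be illustrated with a std-only sketch (plain Rust arrays, not the crate's types): element-wise multiplication combines matching positions, while a true matrix product accumulates over an inner dimension, so the two give different results for the same operands.

```rust
// What `*` means under the array interface: the Hadamard (element-wise) product.
fn elementwise(a: &[[f64; 2]; 2], b: &[[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            c[i][j] = a[i][j] * b[i][j]; // matching positions only
        }
    }
    c
}

// What `dot()` means: the true matrix product.
fn dot(a: &[[f64; 2]; 2], b: &[[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j]; // sum over the inner dimension
            }
        }
    }
    c
}

fn main() {
    let a = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[5.0, 6.0], [7.0, 8.0]];
    assert_eq!(elementwise(&a, &b), [[5.0, 12.0], [21.0, 32.0]]);
    assert_eq!(dot(&a, &b), [[19.0, 22.0], [43.0, 50.0]]);
}
```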

For new code, we recommend using the array interface, which is more consistent with the rest of the numerical ecosystem.

§Examples

§Matrix API (Legacy)

```rust
use scirs2_sparse::csr::CsrMatrix;

// Create a sparse matrix in CSR format
let rows = vec![0, 0, 1, 2, 2];
let cols = vec![0, 2, 2, 0, 1];
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let shape = (3, 3);

let matrix = CsrMatrix::new(data, rows, cols, shape).expect("Operation failed");
```

§Array API (Recommended)

```rust
use scirs2_sparse::csr_array::CsrArray;

// Create a sparse array in CSR format
let rows = vec![0, 0, 1, 2, 2];
let cols = vec![0, 2, 2, 0, 1];
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0];
let shape = (3, 3);

// From triplets (COO-like construction)
let array = CsrArray::from_triplets(&rows, &cols, &data, shape, false).expect("Operation failed");

// Or directly from CSR components
// let array = CsrArray::new(...);
```
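What a `from_triplets`-style constructor does can be sketched in std-only Rust (this is the textbook two-pass COO-to-CSR conversion, not necessarily this crate's implementation): one pass counts entries per row to build `indptr`, a prefix sum turns counts into offsets, and a second pass scatters columns and values into place.

```rust
// Textbook triplets (COO) -> CSR conversion (std-only sketch, not the crate's API).
fn triplets_to_csr(
    rows: &[usize],
    cols: &[usize],
    data: &[f64],
    nrows: usize,
) -> (Vec<usize>, Vec<usize>, Vec<f64>) {
    // Pass 1: count nonzeros in each row.
    let mut indptr = vec![0usize; nrows + 1];
    for &r in rows {
        indptr[r + 1] += 1;
    }
    // Prefix sum turns per-row counts into row start offsets.
    for i in 0..nrows {
        indptr[i + 1] += indptr[i];
    }
    // Pass 2: scatter column indices and values into their rows.
    let nnz = data.len();
    let mut indices = vec![0usize; nnz];
    let mut values = vec![0.0f64; nnz];
    let mut next = indptr.clone(); // next free slot in each row
    for k in 0..nnz {
        let dst = next[rows[k]];
        indices[dst] = cols[k];
        values[dst] = data[k];
        next[rows[k]] += 1;
    }
    (indptr, indices, values)
}

fn main() {
    // The triplets from the examples above.
    let rows = vec![0, 0, 1, 2, 2];
    let cols = vec![0, 2, 2, 0, 1];
    let data = vec![1.0, 2.0, 3.0, 4.0, 5.0];
    let (indptr, indices, values) = triplets_to_csr(&rows, &cols, &data, 3);
    assert_eq!(indptr, vec![0, 2, 3, 5]);
    assert_eq!(indices, vec![0, 2, 2, 0, 1]);
    assert_eq!(values, vec![1.0, 2.0, 3.0, 4.0, 5.0]);
}
```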

Re-exports§

pub use error::SparseError;
pub use error::SparseResult;
pub use sparray::is_sparse;
pub use sparray::SparseArray;
pub use sparray::SparseSum;
pub use sym_sparray::SymSparseArray;
pub use csr_array::CsrArray;
pub use csc_array::CscArray;
pub use coo_array::CooArray;
pub use dok_array::DokArray;
pub use lil_array::LilArray;
pub use dia_array::DiaArray;
pub use bsr_array::BsrArray;
pub use banded_array::BandedArray;
pub use sym_csr::SymCsrArray;
pub use sym_csr::SymCsrMatrix;
pub use sym_coo::SymCooArray;
pub use sym_coo::SymCooMatrix;
pub use csr::CsrMatrix;
pub use csc::CscMatrix;
pub use coo::CooMatrix;
pub use dok::DokMatrix;
pub use lil::LilMatrix;
pub use dia::DiaMatrix;
pub use bsr::BsrMatrix;
pub use banded::BandedMatrix;
pub use linalg::add;
pub use linalg::bicg;
pub use linalg::bicgstab;
pub use linalg::cg;
pub use linalg::cholesky_decomposition;
pub use linalg::convolution_operator;
pub use linalg::diag_matrix;
pub use linalg::eigs;
pub use linalg::eigsh;
pub use linalg::enhanced_add;
pub use linalg::enhanced_diagonal;
pub use linalg::enhanced_scale;
pub use linalg::enhanced_subtract;
pub use linalg::expm;
pub use linalg::expm_multiply;
pub use linalg::eye;
pub use linalg::finite_difference_operator;
pub use linalg::gcrot;
pub use linalg::gmres;
pub use linalg::incomplete_cholesky;
pub use linalg::incomplete_lu;
pub use linalg::inv;
pub use linalg::iram;
pub use linalg::iram_shift_invert;
pub use linalg::lanczos;
pub use linalg::lu_decomposition;
pub use linalg::matmul;
pub use linalg::matrix_power;
pub use linalg::multiply;
pub use linalg::norm;
pub use linalg::onenormest;
pub use linalg::power_iteration;
pub use linalg::qr_decomposition;
pub use linalg::solve_arrow_matrix;
pub use linalg::solve_banded_system;
pub use linalg::solve_block_2x2;
pub use linalg::solve_kronecker_system;
pub use linalg::solve_saddle_point;
pub use linalg::sparse_direct_solve;
pub use linalg::sparse_lstsq;
pub use linalg::spsolve;
pub use linalg::svd_truncated;
pub use linalg::svds;
pub use linalg::tfqmr;
pub use linalg::ArnoldiConfig;
pub use linalg::ArpackOptions;
pub use linalg::AsLinearOperator;
pub use linalg::BiCGOptions;
pub use linalg::BiCGSTABOptions;
pub use linalg::BiCGSTABResult;
pub use linalg::BoundaryCondition;
pub use linalg::CGOptions;
pub use linalg::CGSOptions;
pub use linalg::CGSResult;
pub use linalg::CholeskyResult;
pub use linalg::ConvolutionMode;
pub use linalg::ConvolutionOperator;
pub use linalg::DiagonalOperator;
pub use linalg::EigenResult;
pub use linalg::EigenvalueMethod;
pub use linalg::EnhancedDiagonalOperator;
pub use linalg::EnhancedDifferenceOperator;
pub use linalg::EnhancedOperatorOptions;
pub use linalg::EnhancedScaledOperator;
pub use linalg::EnhancedSumOperator;
pub use linalg::FiniteDifferenceOperator;
pub use linalg::GCROTOptions;
pub use linalg::GCROTResult;
pub use linalg::GMRESOptions;
pub use linalg::ICOptions;
pub use linalg::ILU0Preconditioner;
pub use linalg::ILUOptions;
pub use linalg::IdentityOperator;
pub use linalg::IterationResult;
pub use linalg::JacobiPreconditioner;
pub use linalg::LUResult;
pub use linalg::LanczosOptions;
pub use linalg::LinearOperator;
pub use linalg::PowerIterationOptions;
pub use linalg::QRResult;
pub use linalg::SSORPreconditioner;
pub use linalg::SVDOptions;
pub use linalg::SVDResult;
pub use linalg::ScaledIdentityOperator;
pub use linalg::TFQMROptions;
pub use linalg::TFQMRResult;
pub use combine::block_diag;
pub use combine::bmat;
pub use combine::hstack;
pub use combine::kron;
pub use combine::kronsum;
pub use combine::tril;
pub use combine::triu;
pub use combine::vstack;
pub use index_dtype::can_cast_safely;
pub use index_dtype::get_index_dtype;
pub use index_dtype::safely_cast_index_arrays;
pub use bsr_enhanced::block_lu;
pub use bsr_enhanced::block_lu_solve;
pub use bsr_enhanced::BlockLUResult;
pub use bsr_enhanced::EnhancedBsr;
pub use dia_enhanced::banded_solve;
pub use dia_enhanced::dia_tridiagonal_solve;
pub use dia_enhanced::tridiagonal_solve;
pub use dia_enhanced::EnhancedDia;
pub use csf_tensor::CsfTensor;
pub use sparse_functions::sparse_block_diag;
pub use sparse_functions::sparse_diag_matrix;
pub use sparse_functions::sparse_diags;
pub use sparse_functions::sparse_eye;
pub use sparse_functions::sparse_eye_rect;
pub use sparse_functions::sparse_hstack;
pub use sparse_functions::sparse_kron;
pub use sparse_functions::sparse_kronsum;
pub use sparse_functions::sparse_random;
pub use sparse_functions::sparse_vstack;
pub use sym_ops::sym_coo_matvec;
pub use sym_ops::sym_csr_matvec;
pub use sym_ops::sym_csr_quadratic_form;
pub use sym_ops::sym_csr_rank1_update;
pub use sym_ops::sym_csr_trace;
pub use tensor_sparse::khatri_rao_product;
pub use tensor_sparse::CPDecomposition;
pub use tensor_sparse::SparseTensor;
pub use tensor_sparse::TuckerDecomposition;
pub use gpu_kernel_execution::calculate_adaptive_workgroup_size;
pub use gpu_kernel_execution::execute_spmv_kernel;
pub use gpu_kernel_execution::execute_symmetric_spmv_kernel;
pub use gpu_kernel_execution::execute_triangular_solve_kernel;
pub use gpu_kernel_execution::GpuKernelConfig;
pub use gpu_kernel_execution::GpuMemoryManager as GpuKernelMemoryManager;
pub use gpu_kernel_execution::GpuPerformanceProfiler;
pub use gpu_kernel_execution::MemoryStrategy;
pub use gpu_ops::gpu_sparse_matvec;
pub use gpu_ops::gpu_sym_sparse_matvec;
pub use gpu_ops::AdvancedGpuOps;
pub use gpu_ops::GpuKernelScheduler;
pub use gpu_ops::GpuMemoryManager;
pub use gpu_ops::GpuOptions;
pub use gpu_ops::GpuProfiler;
pub use gpu_ops::OptimizedGpuOps;
pub use gpu_spmv_implementation::GpuSpMV;
pub use memory_efficient::streaming_sparse_matvec;
pub use memory_efficient::CacheAwareOps;
pub use memory_efficient::MemoryPool;
pub use memory_efficient::MemoryTracker;
pub use memory_efficient::OutOfCoreProcessor;
pub use simd_ops::simd_csr_matvec;
pub use simd_ops::simd_sparse_elementwise;
pub use simd_ops::simd_sparse_linear_combination;
pub use simd_ops::simd_sparse_matmul;
pub use simd_ops::simd_sparse_norm;
pub use simd_ops::simd_sparse_scale;
pub use simd_ops::simd_sparse_transpose;
pub use simd_ops::ElementwiseOp;
pub use simd_ops::SimdOptions;
pub use parallel_vector_ops::advanced_sparse_matvec_csr;
pub use parallel_vector_ops::parallel_axpy;
pub use parallel_vector_ops::parallel_dot;
pub use parallel_vector_ops::parallel_linear_combination;
pub use parallel_vector_ops::parallel_norm2;
pub use parallel_vector_ops::parallel_sparse_matvec_csr;
pub use parallel_vector_ops::parallel_vector_add;
pub use parallel_vector_ops::parallel_vector_copy;
pub use parallel_vector_ops::parallel_vector_scale;
pub use parallel_vector_ops::parallel_vector_sub;
pub use parallel_vector_ops::ParallelVectorOptions;
pub use iterative_solvers::bicgstab as enhanced_bicgstab;
pub use iterative_solvers::cg as enhanced_cg;
pub use iterative_solvers::chebyshev;
pub use iterative_solvers::estimate_spectral_radius;
pub use iterative_solvers::gmres as enhanced_gmres;
pub use iterative_solvers::sparse_diagonal;
pub use iterative_solvers::sparse_norm;
pub use iterative_solvers::sparse_trace;
pub use iterative_solvers::ILU0Preconditioner as EnhancedILU0Preconditioner;
pub use iterative_solvers::IterativeSolverConfig;
pub use iterative_solvers::JacobiPreconditioner as EnhancedJacobiPreconditioner;
pub use iterative_solvers::NormType;
pub use iterative_solvers::Preconditioner;
pub use iterative_solvers::SSORPreconditioner as EnhancedSSORPreconditioner;
pub use iterative_solvers::SolverResult;
pub use lobpcg::lobpcg as lobpcg_eigensolver;
pub use lobpcg::lobpcg_generalised;
pub use lobpcg::EigenTarget;
pub use lobpcg::LobpcgConfig;
pub use lobpcg::LobpcgResult;
pub use krylov::iram as krylov_iram;
pub use krylov::thick_restart_lanczos;
pub use krylov::IramConfig;
pub use krylov::KrylovEigenResult;
pub use krylov::ThickRestartLanczosConfig;
pub use krylov::WhichEigenvalues;
pub use sparse_utils::condest_1norm;
pub use sparse_utils::permute_matrix;
pub use sparse_utils::reverse_cuthill_mckee;
pub use sparse_utils::sparse_add;
pub use sparse_utils::sparse_extract_diagonal;
pub use sparse_utils::sparse_identity;
pub use sparse_utils::sparse_kronecker;
pub use sparse_utils::sparse_matrix_norm;
pub use sparse_utils::sparse_matrix_trace;
pub use sparse_utils::sparse_scale;
pub use sparse_utils::sparse_sub;
pub use sparse_utils::sparse_transpose;
pub use sparse_utils::spgemm;
pub use sparse_utils::RcmResult;
pub use sparse_utils::SparseNorm;
pub use incomplete_factorizations::Ic0;
pub use incomplete_factorizations::Ilu0 as Ilu0Enhanced;
pub use incomplete_factorizations::IluK;
pub use incomplete_factorizations::Ilut;
pub use incomplete_factorizations::IlutConfig;
pub use incomplete_factorizations::Milu;
pub use direct_solver::amd_ordering;
pub use direct_solver::elimination_tree;
pub use direct_solver::inverse_perm;
pub use direct_solver::nested_dissection_ordering;
pub use direct_solver::sparse_cholesky_solve;
pub use direct_solver::sparse_lu_solve;
pub use direct_solver::symbolic_cholesky;
pub use direct_solver::SparseCholResult;
pub use direct_solver::SparseCholeskySolver;
pub use direct_solver::SparseLuResult;
pub use direct_solver::SparseLuSolver;
pub use direct_solver::SparseSolver;
pub use direct_solver::SymbolicAnalysis;
pub use sparse_qr::apply_q;
pub use sparse_qr::apply_qt;
pub use sparse_qr::extract_q_dense;
pub use sparse_qr::numerical_rank;
pub use sparse_qr::sparse_least_squares;
pub use sparse_qr::sparse_qr as sparse_qr_factorize;
pub use sparse_qr::SparseLeastSquaresResult;
pub use sparse_qr::SparseQrConfig;
pub use sparse_qr::SparseQrResult;
pub use sparse_eigen::cayley_transform_matvec;
pub use sparse_eigen::check_eigenpairs;
pub use sparse_eigen::compute_residuals;
pub use sparse_eigen::shift_invert_eig;
pub use sparse_eigen::sparse_eig;
pub use sparse_eigen::sparse_eigs;
pub use sparse_eigen::sparse_eigsh;
pub use sparse_eigen::EigenMethod;
pub use sparse_eigen::EigenvalueTarget;
pub use sparse_eigen::SparseEigenConfig;
pub use sparse_eigen::SparseEigenResult;
pub use sparse_eigen::SpectralTransform;
pub use quantum_inspired_sparse::QuantumProcessorStats;
pub use quantum_inspired_sparse::QuantumSparseConfig;
pub use quantum_inspired_sparse::QuantumSparseProcessor;
pub use quantum_inspired_sparse::QuantumStrategy;
pub use neural_adaptive_sparse::NeuralAdaptiveConfig;
pub use neural_adaptive_sparse::NeuralAdaptiveSparseProcessor;
pub use neural_adaptive_sparse::NeuralProcessorStats;
pub use neural_adaptive_sparse::OptimizationStrategy;
pub use quantum_neural_hybrid::HybridStrategy;
pub use quantum_neural_hybrid::QuantumNeuralConfig;
pub use quantum_neural_hybrid::QuantumNeuralHybridProcessor;
pub use quantum_neural_hybrid::QuantumNeuralHybridStats;
pub use adaptive_memory_compression::AdaptiveCompressionConfig;
pub use adaptive_memory_compression::AdaptiveMemoryCompressor;
pub use adaptive_memory_compression::CompressedMatrix;
pub use adaptive_memory_compression::CompressionAlgorithm;
pub use adaptive_memory_compression::MemoryStats;
pub use realtime_performance_monitor::Alert;
pub use realtime_performance_monitor::AlertSeverity;
pub use realtime_performance_monitor::PerformanceMonitorConfig;
pub use realtime_performance_monitor::PerformanceSample;
pub use realtime_performance_monitor::ProcessorType;
pub use realtime_performance_monitor::RealTimePerformanceMonitor;
pub use csgraph::all_pairs_shortest_path;
pub use csgraph::bellman_ford_single_source;
pub use csgraph::betweenness_centrality;
pub use csgraph::bfs_distances;
pub use csgraph::closeness_centrality;
pub use csgraph::community_detection;
pub use csgraph::compute_laplacianmatrix;
pub use csgraph::connected_components;
pub use csgraph::degree_matrix;
pub use csgraph::dijkstra_single_source;
pub use csgraph::dinic;
pub use csgraph::edmonds_karp;
pub use csgraph::eigenvector_centrality;
pub use csgraph::floyd_warshall;
pub use csgraph::ford_fulkerson;
pub use csgraph::has_path;
pub use csgraph::is_connected;
pub use csgraph::is_laplacian;
pub use csgraph::is_spanning_tree;
pub use csgraph::kruskal_mst;
pub use csgraph::label_propagation;
pub use csgraph::laplacian;
pub use csgraph::largest_component;
pub use csgraph::louvain_communities;
pub use csgraph::min_cut;
pub use csgraph::minimum_spanning_tree;
pub use csgraph::modularity;
pub use csgraph::num_edges;
pub use csgraph::num_vertices;
pub use csgraph::pagerank;
pub use csgraph::prim_mst;
pub use csgraph::reachable_vertices;
pub use csgraph::reconstruct_path;
pub use csgraph::shortest_path;
pub use csgraph::single_source_shortest_path;
pub use csgraph::spanning_tree_weight;
pub use csgraph::strongly_connected_components;
pub use csgraph::to_adjacency_list;
pub use csgraph::topological_sort;
pub use csgraph::traversegraph;
pub use csgraph::undirected_connected_components;
pub use csgraph::validate_graph;
pub use csgraph::weakly_connected_components;
pub use csgraph::LaplacianType;
pub use csgraph::MSTAlgorithm;
pub use csgraph::MaxFlowResult;
pub use csgraph::ShortestPathMethod;
pub use csgraph::TraversalOrder;

Modules§

adaptive_memory_compression
Adaptive Memory Compression for Advanced-Large Sparse Matrices
banded
Banded matrix format (legacy matrix API)
banded_array
Banded matrix format for sparse matrices
bsr
Block Sparse Row (BSR) matrix format
bsr_array
Block Sparse Row (BSR) array format
bsr_enhanced
Enhanced Block Sparse Row (BSR) format with flat block storage and Block LU factorization
combine
Functions for combining sparse matrices (stacking, block composition, Kronecker products)
construct
Construction utilities for sparse matrices
construct_sym
Construction utilities for symmetric sparse matrices
convert
Conversion utilities for sparse matrices
coo
Coordinate (COO) matrix format
coo_array
Coordinate (COO) array format
csc
Compressed Sparse Column (CSC) matrix format
csc_array
Compressed Sparse Column (CSC) array format
csf_tensor
Compressed Sparse Fiber (CSF) format for sparse tensors
csgraph
Compressed sparse graph algorithms module
csr
Compressed Sparse Row (CSR) matrix format
csr_array
Compressed Sparse Row (CSR) array format
dia
Diagonal (DIA) matrix format
dia_array
Diagonal (DIA) array format
dia_enhanced
Enhanced Diagonal (DIA) format with efficient banded matrix operations
direct_solver
Sparse direct solvers
dok
Dictionary of Keys (DOK) matrix format
dok_array
Dictionary of Keys (DOK) array format
error
Error types for the SciRS2 sparse module
gpu
GPU acceleration for sparse matrix operations
gpu_kernel_execution
GPU kernel execution implementations for sparse matrix operations
gpu_ops
GPU-accelerated operations for sparse matrices
gpu_spmv_implementation
Enhanced GPU SpMV Implementation for scirs2-sparse
incomplete_factorizations
Incomplete matrix factorizations for preconditioning
index_dtype
Index dtype selection and safe index casting utilities
iterative_solvers
Enhanced iterative solvers for sparse linear systems
krylov
Advanced Krylov subspace eigensolvers
lil
List of Lists (LIL) matrix format
lil_array
List of Lists (LIL) array format
linalg
Linear algebra operations for sparse matrices
lobpcg
LOBPCG (Locally Optimal Block Preconditioned Conjugate Gradient) eigensolver
memory_efficient
Memory-efficient algorithms and patterns for sparse matrices
neural_adaptive_sparse
Neural-Adaptive Sparse Matrix Operations for Advanced Mode
parallel_vector_ops
Parallel implementations of vector operations for iterative solvers
quantum_inspired_sparse
Quantum-Inspired Sparse Matrix Operations for Advanced Mode
quantum_neural_hybrid
Quantum-Neural Hybrid Optimization for Advanced Mode
realtime_performance_monitor
Real-Time Performance Monitoring and Adaptation for Advanced Processors
simd_ops
SIMD-accelerated operations for sparse matrices
sparray
Sparse array trait
sparse_eigen
Unified sparse eigenvalue interface
sparse_functions
Sparse matrix utility functions
sparse_qr
Sparse QR factorization
sparse_utils
Sparse matrix utility operations
sym_coo
Symmetric Coordinate (SymCOO) module
sym_csr
Symmetric Compressed Sparse Row (SymCSR) module
sym_ops
Operations for symmetric sparse formats
sym_sparray
Symmetric Sparse Array trait
tensor_sparse
Tensor-based sparse operations
utils
Utility functions for sparse matrices

Structs§

SparseEfficiencyWarning
Warning for operations that are inefficient in the chosen sparse format
SparseWarning
Base warning type for sparse operations

Functions§

is_sparse_array
Check if an object is a sparse array
is_sparse_matrix
Check if an object is a sparse matrix (legacy API)
is_sym_sparse_array
Check if an object is a symmetric sparse array