
Tensor

Struct Tensor 

Source
pub struct Tensor {
    pub buffer: Buffer<f64>,
    /* private fields */
}
Expand description

An N-dimensional tensor backed by a Buffer<f64>.

Supports element-wise arithmetic (SIMD-accelerated), matrix multiplication (tiled + parallel), numerically-stable reductions via BinnedAccumulatorF64, and neural network operations (softmax, layer norm, attention).

§Memory

The underlying Buffer<f64> uses copy-on-write semantics. Cloning a Tensor is O(1). Operations like reshape and transpose return zero-copy views when possible. Mutation via set triggers a deep copy only when the buffer is shared.

Fields§

§buffer: Buffer<f64>

The underlying COW data buffer.

Implementations§

Source§

impl Tensor

Source

pub fn zeros(shape: &[usize]) -> Self

Create a tensor filled with zeros.

Source

pub fn ones(shape: &[usize]) -> Self

Create a tensor filled with ones.

Source

pub fn randn(shape: &[usize], rng: &mut Rng) -> Self

Create a tensor filled with samples from the standard normal distribution, drawn deterministically from rng.

Source

pub fn from_vec(data: Vec<f64>, shape: &[usize]) -> Result<Self, RuntimeError>

Create a tensor from raw data and a shape. Returns an error if the number of elements does not match the shape.

Source

pub fn shape(&self) -> &[usize]

The shape of this tensor.

Source

pub fn ndim(&self) -> usize

Number of dimensions.

Source

pub fn len(&self) -> usize

Total number of elements.

Source

pub fn is_empty(&self) -> bool

Whether the tensor has zero elements.

Source

pub fn is_contiguous(&self) -> bool

Whether this tensor is contiguous in memory (row-major, no offset).

Source

pub fn slice(&self, ranges: &[(usize, usize)]) -> Result<Tensor, RuntimeError>

Create a zero-copy slice (view) of this tensor. ranges contains (start, end) for each dimension.

Source

pub fn to_contiguous(&self) -> Tensor

Materialize a contiguous copy if this tensor is non-contiguous.

Source

pub fn broadcast_to( &self, target_shape: &[usize], ) -> Result<Tensor, RuntimeError>

Create a broadcast view of this tensor to target_shape. Uses stride=0 for dimensions that need broadcasting (size 1 -> target size).
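The stride-0 rule above can be sketched with a small standalone helper (hypothetical name, not part of this crate): a size-1 source dimension is mapped to any target size by giving it stride 0, so every index along that axis reads the same element.

```rust
// Compute broadcast strides: size-1 dims get stride 0, matching dims keep
// their stride, anything else is incompatible. Illustrative sketch only.
fn broadcast_strides(shape: &[usize], strides: &[usize], target: &[usize]) -> Option<Vec<usize>> {
    if shape.len() != target.len() {
        return None;
    }
    shape
        .iter()
        .zip(strides)
        .zip(target)
        .map(|((&s, &st), &t)| {
            if s == t {
                Some(st) // dimension already matches: keep the stride
            } else if s == 1 {
                Some(0) // broadcast: stride 0 repeats the single element
            } else {
                None // incompatible shapes
            }
        })
        .collect()
}

fn main() {
    // Broadcasting shape [1, 3] (strides [3, 1]) to [4, 3]: the size-1 axis
    // gets stride 0, so all 4 output rows alias the same 3 elements.
    assert_eq!(broadcast_strides(&[1, 3], &[3, 1], &[4, 3]), Some(vec![0, 1]));
    // A non-1 mismatch is not broadcastable.
    assert_eq!(broadcast_strides(&[2, 3], &[3, 1], &[4, 3]), None);
}
```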

Source

pub fn get(&self, indices: &[usize]) -> Result<f64, RuntimeError>

Read the element at the given multi-dimensional index.

Source

pub fn set(&mut self, indices: &[usize], val: f64) -> Result<(), RuntimeError>

Write the element at the given multi-dimensional index.

Source

pub fn to_vec(&self) -> Vec<f64>

Extract the raw data as a Vec<f64>, respecting strides and offset.

Source

pub fn reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>

Reshape to new_shape. The new shape must have the same total number of elements. The returned tensor shares the underlying buffer.

Source

pub fn add(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise addition (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn sub(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise subtraction (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn mul_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise (Hadamard) multiplication (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn div_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise division (SIMD-accelerated for contiguous same-shape tensors).

Source

pub fn fused_mul_add( &self, b: &Tensor, c: &Tensor, ) -> Result<Tensor, RuntimeError>

Fused multiply-add: self * b + c element-wise in a single pass.

Eliminates the intermediate tensor that separate mul + add would create. Uses software FMA (a * b + c with two roundings, not hardware FMA) to preserve bit-identity with the non-fused path.
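The "software FMA" note above amounts to the following: compute a * b + c with two roundings, unlike the single-rounding f64::mul_add intrinsic, so the fused path stays bit-identical to separate mul then add. A standalone sketch (not the crate's code):

```rust
// Software FMA: the product is rounded, then the sum is rounded.
// Hardware FMA (f64::mul_add) rounds only once and can differ in the last bit.
fn software_fma(a: f64, b: f64, c: f64) -> f64 {
    a * b + c
}

fn main() {
    // The fused path is bit-identical to separate mul + add:
    let (a, b, c) = (0.1, 0.2, 0.3);
    assert_eq!(software_fma(a, b, c).to_bits(), (a * b + c).to_bits());
    println!("software FMA matches mul-then-add bit-for-bit");
}
```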

Source

pub fn elem_pow(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise power: a^b.

Source

pub fn elem_min(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise minimum.

Source

pub fn elem_max(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise maximum.

Source

pub fn elem_atan2(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise atan2(self, other).

Source

pub fn elem_hypot(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Element-wise hypot(self, other).

Source

pub fn map(&self, f: impl Fn(f64) -> f64) -> Tensor

Apply a unary function to every element, returning a new contiguous tensor.

Source

pub fn map_simd(&self, op: UnaryOp) -> Tensor

SIMD-accelerated unary map for known operations (sqrt, abs, neg, relu).

Uses AVX2 (4-wide f64) when available, scalar fallback otherwise. Bit-identical to map(f) for the supported operations.

Source

pub fn sum(&self) -> f64

Sum of all elements (binned accumulation — order-invariant, deterministic).

Source

pub fn binned_sum(&self) -> f64

Sum of all elements using BinnedAccumulator (order-invariant, deterministic).

Bit-identical results regardless of element ordering or reduction schedule.

Source

pub fn dispatched_sum(&self, ctx: &ReductionContext) -> f64

Sum with dispatched strategy based on execution context.

Uses Kahan summation in serial mode and binned accumulation in parallel/@nogc/strict/linalg modes.
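Kahan (compensated) summation, named above for the serial path, carries a running compensation term that recovers the low-order bits lost at each step. A standalone sketch, not this crate's implementation:

```rust
// Kahan summation: each step subtracts the previously lost bits before
// accumulating, then records what the new addition rounded away.
fn kahan_sum(xs: &[f64]) -> f64 {
    let (mut sum, mut comp) = (0.0_f64, 0.0_f64);
    for &x in xs {
        let y = x - comp;     // reintroduce the low-order bits lost last step
        let t = sum + y;      // accumulate
        comp = (t - sum) - y; // capture what was just rounded away
        sum = t;
    }
    sum
}

fn main() {
    let xs = vec![0.1_f64; 10];
    let naive: f64 = xs.iter().sum();
    println!("kahan = {}, naive = {}", kahan_sum(&xs), naive);
}
```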

Source

pub fn mean(&self) -> f64

Mean of all elements (binned sum / count).

Source

pub fn dispatched_mean(&self, ctx: &ReductionContext) -> f64

Mean with dispatched strategy based on execution context.

Source

pub fn sum_axis(&self, axis: usize) -> Result<Tensor, RuntimeError>

Sum along a specific axis, returning a tensor with that dimension reduced.

Supports N-D tensors. The reduced axis becomes size 1 in the output. Uses BinnedAccumulator for order-invariant, deterministic summation.

Examples:

  • 2D [M, N] with axis=0: result [1, N] (sum columns)
  • 2D [M, N] with axis=1: result [M, 1] (sum rows)
  • 3D [A, B, C] with axis=1: result [A, 1, C]
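The 2-D cases above can be sketched for a row-major tensor stored as a flat slice (hypothetical helper using plain sequential sums rather than BinnedAccumulator):

```rust
// Axis reduction over a row-major [M, N] tensor stored flat.
fn sum_axis_2d(data: &[f64], m: usize, n: usize, axis: usize) -> Vec<f64> {
    match axis {
        // axis = 0: sum down each column -> logical shape [1, N]
        0 => (0..n).map(|j| (0..m).map(|i| data[i * n + j]).sum::<f64>()).collect(),
        // axis = 1: sum across each row -> logical shape [M, 1]
        1 => (0..m).map(|i| (0..n).map(|j| data[i * n + j]).sum::<f64>()).collect(),
        _ => panic!("axis out of range for a 2-D tensor"),
    }
}

fn main() {
    let data = [1.0, 2.0, 3.0, 4.0]; // [[1, 2], [3, 4]]
    assert_eq!(sum_axis_2d(&data, 2, 2, 0), vec![4.0, 6.0]); // column sums
    assert_eq!(sum_axis_2d(&data, 2, 2, 1), vec![3.0, 7.0]); // row sums
}
```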
Source

pub fn neg(&self) -> Tensor

Negate every element, returning a new tensor.

Source

pub fn transpose(&self) -> Tensor

Transpose a tensor. For 2-D: swaps rows and columns (zero-copy view). For N-D: reverses all axes (zero-copy view).

Source

pub fn transpose_axes(&self, axes: &[usize]) -> Result<Tensor, RuntimeError>

Transpose with explicit axis permutation (N-D). Zero-copy view.

axes must be a permutation of [0, 1, ..., ndim-1].

Source

pub fn scalar_mul(&self, s: f64) -> Tensor

Multiply every element by a scalar, returning a new tensor.

Source

pub fn from_vec_unchecked(data: Vec<f64>, shape: &[usize]) -> Tensor

Create a tensor from raw data and shape. Panics if data.len() does not match the shape.

Source

pub fn add_unchecked(&self, other: &Tensor) -> Tensor

Element-wise addition. Panics on shape mismatch.

Source

pub fn sub_unchecked(&self, other: &Tensor) -> Tensor

Element-wise subtraction. Panics on shape mismatch.

Source

pub fn mul_elem_unchecked(&self, other: &Tensor) -> Tensor

Element-wise multiplication. Panics on shape mismatch.

Source

pub fn div_elem_unchecked(&self, other: &Tensor) -> Tensor

Element-wise division. Panics on shape mismatch.

Source

pub fn matmul_unchecked(&self, other: &Tensor) -> Tensor

Matrix multiplication. Panics on dimension mismatch.

Source

pub fn matmul(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Matrix multiplication for 2-D tensors.

self is (M, K), other is (K, N) => result is (M, N).

Source

pub fn bmm(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Batched matrix multiplication.

self is [..., M, K], other is [..., K, N] => result is [..., M, N]. The batch dimensions must be identical (no broadcast). For 2-D inputs, delegates to matmul.

Source

pub fn softmax(&self) -> Result<Tensor, RuntimeError>

Softmax along the last dimension (two-pass stable algorithm).

  • Pass 1: find the max per row (prevents overflow in exp)
  • Pass 2: compute exp(x - max), accumulate the sum, normalize

For a tensor of shape [..., N], softmax is applied independently to each length-N slice along the last axis.
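The two-pass algorithm above, applied to a single length-N slice (standalone sketch; the crate applies this independently per slice of the last axis):

```rust
// Numerically stable softmax: shifting by the row max makes every exponent
// <= 0, so exp cannot overflow even for large inputs.
fn stable_softmax(row: &[f64]) -> Vec<f64> {
    // Pass 1: find the maximum of the row.
    let max = row.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    // Pass 2: exponentiate the shifted values, then normalize by their sum.
    let exps: Vec<f64> = row.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    // A naive exp(1000.0) would overflow to infinity; the shifted form is exact.
    let p = stable_softmax(&[1000.0, 1000.0]);
    assert_eq!(p, vec![0.5, 0.5]);
    println!("{:?}", p);
}
```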

Source

pub fn layer_norm( &self, gamma: &Tensor, beta: &Tensor, eps: f64, ) -> Result<Tensor, RuntimeError>

Layer normalization over the last dimension.

For each length-D slice along the last axis:

  1. mean = Σx / D (BinnedAccumulator)
  2. var = Σ(x - mean)² / D (BinnedAccumulator)
  3. normalized = (x - mean) / √(var + eps)
  4. output = gamma * normalized + beta

gamma and beta are 1-D tensors of shape [D]. eps is a small constant (typically 1e-5).
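The four steps above for one length-D slice, using plain sums in place of BinnedAccumulator (illustrative sketch only):

```rust
// Layer norm over a single slice: normalize to zero mean / unit variance,
// then apply the learned scale (gamma) and shift (beta).
fn layer_norm_1d(x: &[f64], gamma: &[f64], beta: &[f64], eps: f64) -> Vec<f64> {
    let d = x.len() as f64;
    let mean = x.iter().sum::<f64>() / d; // step 1
    let var = x.iter().map(|&v| (v - mean) * (v - mean)).sum::<f64>() / d; // step 2
    let inv = 1.0 / (var + eps).sqrt();
    x.iter()
        .zip(gamma.iter().zip(beta))
        .map(|(&v, (&g, &b))| g * (v - mean) * inv + b) // steps 3 + 4
        .collect()
}

fn main() {
    // mean = 2, var = 1 -> normalized values are -1 and +1 (eps = 0 here).
    let out = layer_norm_1d(&[1.0, 3.0], &[1.0, 1.0], &[0.0, 0.0], 0.0);
    assert_eq!(out, vec![-1.0, 1.0]);
}
```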

Source

pub fn relu(&self) -> Tensor

ReLU activation: max(0, x) element-wise.

Source

pub fn sigmoid(&self) -> Tensor

Sigmoid activation: 1 / (1 + exp(-x)) element-wise.

Source

pub fn tanh_activation(&self) -> Tensor

Tanh activation element-wise.

Source

pub fn leaky_relu(&self, alpha: f64) -> Tensor

Leaky ReLU activation: max(alpha*x, x) element-wise.

Source

pub fn silu(&self) -> Tensor

SiLU (Swish) activation: x * sigmoid(x) element-wise.

Source

pub fn mish(&self) -> Tensor

Mish activation: x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))).

Source

pub fn argmax(&self) -> usize

Argmax: index of the maximum element (first occurrence, deterministic).

Source

pub fn argmin(&self) -> usize

Argmin: index of the minimum element (first occurrence, deterministic).

Source

pub fn clamp(&self, min: f64, max: f64) -> Tensor

Clamp all elements to [min, max].

Source

pub fn one_hot(indices: &[usize], depth: usize) -> Result<Tensor, RuntimeError>

One-hot encoding: given a slice of integer indices and a depth, returns a 2D tensor of shape [len, depth].

Source

pub fn cat(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>

Concatenate tensors along an existing axis.

Source

pub fn stack(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>

Stack tensors along a new axis.

Source

pub fn topk(&self, k: usize) -> Result<(Tensor, Vec<usize>), RuntimeError>

Top-k values and indices (largest k values from flat data).

Source

pub fn gelu(&self) -> Tensor

GELU activation (approximate): x * 0.5 * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))
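The tanh approximation quoted above, as a standalone scalar function (the crate applies it element-wise):

```rust
// Approximate GELU: x * 0.5 * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))).
fn gelu_approx(x: f64) -> f64 {
    let c = (2.0 / std::f64::consts::PI).sqrt();
    0.5 * x * (1.0 + (c * (x + 0.044715 * x * x * x)).tanh())
}

fn main() {
    assert_eq!(gelu_approx(0.0), 0.0); // passes through the origin
    assert!((gelu_approx(10.0) - 10.0).abs() < 1e-9); // ~identity for large x
    assert!(gelu_approx(-10.0).abs() < 1e-9); // ~zero for very negative x
}
```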

Source

pub fn linear( &self, weight: &Tensor, bias: &Tensor, ) -> Result<Tensor, RuntimeError>

Linear layer: output = input @ weight^T + bias

self is [..., in_features], weight is [out_features, in_features], bias is [out_features]. Result is [..., out_features].

Source

pub fn conv1d( &self, filters: &Tensor, bias: &Tensor, ) -> Result<Tensor, RuntimeError>

1D convolution: signal [signal_len] * filters [out_ch, kernel_size] + bias

Returns [out_ch, signal_len - kernel_size + 1] (valid mode, stride=1).
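Valid-mode, stride-1 correlation for a single output channel, matching the output length quoted above (standalone sketch, bias omitted):

```rust
// Valid 1-D convolution (cross-correlation): the kernel slides over the
// signal and each output is a dot product with the window under it.
fn conv1d_valid(signal: &[f64], kernel: &[f64]) -> Vec<f64> {
    let out_len = signal.len() - kernel.len() + 1;
    (0..out_len)
        .map(|i| kernel.iter().enumerate().map(|(k, &w)| signal[i + k] * w).sum::<f64>())
        .collect()
}

fn main() {
    let out = conv1d_valid(&[1.0, 2.0, 3.0, 4.0], &[1.0, 1.0]);
    assert_eq!(out, vec![3.0, 5.0, 7.0]); // length 4 - 2 + 1 = 3
}
```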

Source

pub fn conv2d( &self, filters: &Tensor, bias: &Tensor, stride: usize, ) -> Result<Tensor, RuntimeError>

2D convolution — NCHW layout, valid mode, configurable stride.

§Arguments
  • self: [N, C_in, H, W] input tensor
  • filters: [C_out, C_in, kH, kW]
  • bias: [C_out]
  • stride: spatial stride (default 1)
§Returns

[N, C_out, H_out, W_out] where H_out = (H - kH) / stride + 1.

Uses BinnedAccumulatorF64 for every dot product — bit-identical results across all runs and hardware configurations.
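The spatial-size rule above, factored into a small helper (hypothetical name; the same formula applies to H and W independently):

```rust
// Valid-mode output dimension: out = (input - kernel) / stride + 1.
fn conv_out_dim(input: usize, kernel: usize, stride: usize) -> usize {
    (input - kernel) / stride + 1
}

fn main() {
    assert_eq!(conv_out_dim(32, 3, 1), 30); // 32x32 input, 3x3 kernel, stride 1
    assert_eq!(conv_out_dim(32, 3, 2), 15); // stride 2: (32 - 3) / 2 + 1
}
```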

Source

pub fn maxpool2d(&self, ph: usize, pw: usize) -> Result<Tensor, RuntimeError>

2D max-pooling — NCHW layout, non-overlapping windows.

  • self: [N, C, H, W]
  • ph, pw: pool height/width (stride = window size)

Returns [N, C, H/ph, W/pw].

Source

pub fn avgpool2d( &self, kernel_h: usize, kernel_w: usize, stride_h: usize, stride_w: usize, ) -> Result<Tensor, RuntimeError>

Applies 2-D average pooling over a [C, H, W] tensor.

§Arguments
  • kernel_h / kernel_w - Pooling window size
  • stride_h / stride_w - Stride for the pooling window
§Returns

Tensor of shape [C, out_h, out_w] where out_h = (H - kernel_h) / stride_h + 1.

§Errors

Returns an error if the tensor is not 3-D or if kernel/stride produce invalid output dimensions.

Source

pub fn scaled_dot_product_attention( queries: &Tensor, keys: &Tensor, values: &Tensor, ) -> Result<Tensor, RuntimeError>

Scaled dot-product attention (single head).

queries is [..., T, d_k], keys is [..., S, d_k], values is [..., S, d_v].

Computes softmax(Q × Kᵀ / √d_k) × V. Returns [..., T, d_v].
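The formula above can be sketched for plain row-major 2-D slices (standalone illustration of a single head; the crate's batched, binned-accumulator version is not shown):

```rust
// Scaled dot-product attention over flat row-major matrices:
// q is T x d_k, k is S x d_k, v is S x d_v; output is T x d_v.
fn attention(q: &[f64], k: &[f64], v: &[f64], t: usize, s: usize, dk: usize, dv: usize) -> Vec<f64> {
    let scale = 1.0 / (dk as f64).sqrt();
    let mut out = vec![0.0; t * dv];
    for i in 0..t {
        // scores[j] = (q_i . k_j) / sqrt(d_k)
        let scores: Vec<f64> = (0..s)
            .map(|j| (0..dk).map(|d| q[i * dk + d] * k[j * dk + d]).sum::<f64>() * scale)
            .collect();
        // numerically stable softmax over the scores
        let max = scores.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
        let exps: Vec<f64> = scores.iter().map(|&x| (x - max).exp()).collect();
        let sum: f64 = exps.iter().sum();
        // weighted sum of the value rows
        for j in 0..s {
            let w = exps[j] / sum;
            for d in 0..dv {
                out[i * dv + d] += w * v[j * dv + d];
            }
        }
    }
    out
}

fn main() {
    // T = 1 query, S = 2 keys/values, d_k = d_v = 1. A zero query weights the
    // two values equally, so the output is their mean: (1 + 3) / 2 = 2.
    let out = attention(&[0.0], &[0.0, 0.0], &[1.0, 3.0], 1, 2, 1, 1);
    assert!((out[0] - 2.0).abs() < 1e-12);
}
```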

Source

pub fn transpose_last_two(&self) -> Result<Tensor, RuntimeError>

Transpose the last two dimensions of a tensor.

[..., A, B] -> [..., B, A]

Source

pub fn from_bytes( bytes: &[u8], shape: &[usize], dtype: &str, ) -> Result<Tensor, RuntimeError>

Create a tensor from raw bytes in a single allocation.

Interprets bytes as a contiguous block of f64 (8 bytes each) or f32 (4 bytes each, promoted to f64) values and maps them into a Tensor with the given shape.

dtype must be "f64" or "f32".

For f64: bytes.len() must equal shape_numel * 8. For f32: bytes.len() must equal shape_numel * 4.

The returned tensor owns its buffer (copied from the raw bytes) but performs exactly one allocation for the data vector.

Source

pub fn split_heads(&self, num_heads: usize) -> Result<Tensor, RuntimeError>

Reshape a 3D tensor [batch, seq, model_dim] into 4D [batch, num_heads, seq, head_dim] by splitting the last dimension.

This is a zero-copy view — it only changes shape/strides metadata. model_dim must be divisible by num_heads.

Source

pub fn merge_heads(&self) -> Result<Tensor, RuntimeError>

Merge heads back: reshape 4D [batch, num_heads, seq, head_dim] into 3D [batch, seq, model_dim]. Materializes if non-contiguous.

Source

pub fn view_reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>

View-only reshape: reinterpret shape without copying. Only works on contiguous tensors. Falls back to copy if non-contiguous.

Source

pub fn argsort(&self) -> Tensor

Returns indices that would sort the flattened tensor in ascending order. Uses f64::total_cmp for deterministic ordering of NaN.

Source

pub fn gather( &self, dim: usize, indices: &Tensor, ) -> Result<Tensor, RuntimeError>

Gather elements from the tensor along a dimension using an index tensor.

  • 1D: result[i] = self[indices[i]]
  • 2D, dim=0: result[i][j] = self[indices[i][j]][j]
  • 2D, dim=1: result[i][j] = self[i][indices[i][j]]

Source

pub fn scatter( &self, dim: usize, indices: &Tensor, src: &Tensor, ) -> Result<Tensor, RuntimeError>

Scatter src values into a tensor of the given shape at indices along a dimension.

  • 1D: result[indices[i]] = src[i]
  • 2D, dim=0: result[indices[i][j]][j] = src[i][j]
  • 2D, dim=1: result[i][indices[i][j]] = src[i][j]

Source

pub fn index_select( &self, dim: usize, indices: &Tensor, ) -> Result<Tensor, RuntimeError>

Select slices along a dimension by index.

  • 2D, dim=0: selects rows
  • 2D, dim=1: selects columns

Source

pub fn tensor_where( &self, condition: &Tensor, other: &Tensor, ) -> Result<Tensor, RuntimeError>

Element-wise conditional select. For each element i, the result is self[i] if condition[i] != 0.0, else other[i].

Source

pub fn any(&self) -> bool

Return true if any element is non-zero.

Source

pub fn all(&self) -> bool

Return true if all elements are non-zero.

Source

pub fn nonzero(&self) -> Tensor

Return a 1-D tensor of flat indices where elements are non-zero.

If no elements are non-zero, returns an empty tensor of shape [0].

Source

pub fn masked_fill( &self, mask: &Tensor, value: f64, ) -> Result<Tensor, RuntimeError>

Fill elements where mask is non-zero with value.

Source

pub fn mean_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Mean along an axis with optional keepdim.

Uses BinnedAccumulatorF64 for deterministic summation before dividing by the axis length.

Source

pub fn max_axis( &self, axis: usize, keepdim: bool, ) -> Result<(Tensor, Tensor), RuntimeError>

Max along an axis with optional keepdim. Return (values, indices).

Ties are broken by choosing the first occurrence (smallest index).

Source

pub fn min_axis( &self, axis: usize, keepdim: bool, ) -> Result<(Tensor, Tensor), RuntimeError>

Min along an axis with optional keepdim. Return (values, indices).

Ties are broken by choosing the first occurrence (smallest index).

Source

pub fn var_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Variance along an axis with optional keepdim.

Computes population variance: Var = sum((x - mean)^2) / N. Uses BinnedAccumulatorF64 for the squared-differences summation.

Source

pub fn std_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Standard deviation along an axis with optional keepdim.

Computed as sqrt(var_axis(axis, keepdim)).

Source

pub fn prod_axis( &self, axis: usize, keepdim: bool, ) -> Result<Tensor, RuntimeError>

Product along an axis with optional keepdim.

Computes a simple sequential product (exact for integer-like values).

Source

pub fn sort_axis( &self, axis: usize, descending: bool, ) -> Result<Tensor, RuntimeError>

Sort along an axis (stable sort). Return the sorted tensor.

For N-D tensors, independently sorts each 1-D slice along the specified axis. Uses f64::partial_cmp with deterministic tie-breaking by original index position.

Source

pub fn argsort_axis( &self, axis: usize, descending: bool, ) -> Result<Tensor, RuntimeError>

N-D argsort along an axis. Return a tensor of indices that would sort each slice along the given axis.

Deterministic tie-breaking: ties are resolved by original index order.

Source

pub fn einsum( notation: &str, inputs: &[&Tensor], ) -> Result<Tensor, RuntimeError>

Einstein summation notation. Supports patterns like “ij,jk->ik” (matmul), “ii->i” (diagonal), “ij->ji” (transpose), “ijk,ikl->ijl” (batched matmul). Uses BinnedAccumulator for all reductions.

Source

pub fn unsqueeze(&self, dim: usize) -> Result<Tensor, RuntimeError>

Add a dimension of size 1 at position dim.

For a tensor of shape [A, B], unsqueeze(0) yields [1, A, B], unsqueeze(1) yields [A, 1, B], etc.

Source

pub fn squeeze(&self, dim: Option<usize>) -> Result<Tensor, RuntimeError>

Remove a dimension of size 1 at position dim. If dim is None, removes all dimensions of size 1.

Source

pub fn expand(&self, target_shape: &[usize]) -> Result<Tensor, RuntimeError>

Broadcast without copying. Return a view with stride=0 for broadcasted dims.

Alias for broadcast_to.

Source

pub fn flatten( &self, start_dim: usize, end_dim: usize, ) -> Result<Tensor, RuntimeError>

Flatten a range of dimensions [start_dim, end_dim] into a single dimension.

Source

pub fn chunk(&self, n: usize, dim: usize) -> Result<Vec<Tensor>, RuntimeError>

Split tensor into n roughly equal chunks along dimension dim.

Source

pub fn split( &self, sizes: &[usize], dim: usize, ) -> Result<Vec<Tensor>, RuntimeError>

Split tensor along dimension dim according to the given sizes.

Source

pub fn scale_add( &self, alpha: f64, other: &Tensor, beta: f64, ) -> Result<Tensor, RuntimeError>

Fused alpha * self + beta * other element-wise. Single pass, one allocation.

Critical for LSTM/GRU gates where f * c_prev + i * g would otherwise create 3 intermediate tensors.

Source§

impl Tensor

Source

pub fn lu_decompose(&self) -> Result<(Tensor, Tensor, Vec<usize>), RuntimeError>

Compute the LU decomposition with partial pivoting.

Returns (L, U, pivot_indices) where P * A = L * U and pivot_indices encodes the row permutation P.

§Arguments
  • self - A square 2-D Tensor (n x n).
§Returns
  • L - Lower-triangular matrix with unit diagonal.
  • U - Upper-triangular matrix.
  • pivot_indices - Permutation vector of length n.
§Errors

Returns RuntimeError::InvalidOperation if the matrix is not square 2-D or is singular.

§Determinism

Pivot selection uses strict > comparison on absolute values; when two candidates have identical absolute values, the first (lowest row index) is chosen. This is deterministic given identical input bits.
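The pivot rule described above (strict > on absolute values, first index wins on ties) can be sketched for a small row-major n x n matrix. An illustrative standalone implementation, not this crate's code:

```rust
// LU decomposition with partial pivoting. Returns (L, U, pivot_indices)
// such that permuting the rows of A by pivot_indices gives L * U.
fn lu_decompose(a: &[f64], n: usize) -> Option<(Vec<f64>, Vec<f64>, Vec<usize>)> {
    let mut u = a.to_vec();
    let mut l = vec![0.0; n * n];
    let mut piv: Vec<usize> = (0..n).collect();
    for k in 0..n {
        // Pick the pivot row: strict > keeps the lowest index on ties.
        let mut p = k;
        for r in (k + 1)..n {
            if u[r * n + k].abs() > u[p * n + k].abs() {
                p = r;
            }
        }
        if u[p * n + k] == 0.0 {
            return None; // singular
        }
        if p != k {
            for c in 0..n {
                u.swap(k * n + c, p * n + c);
                l.swap(k * n + c, p * n + c); // only columns < k are populated
            }
            piv.swap(k, p);
        }
        l[k * n + k] = 1.0; // unit diagonal
        for r in (k + 1)..n {
            let f = u[r * n + k] / u[k * n + k];
            l[r * n + k] = f;
            for c in k..n {
                u[r * n + c] -= f * u[k * n + c];
            }
        }
    }
    Some((l, u, piv))
}

fn main() {
    // [[0, 1], [2, 0]]: the zero in column 0 forces a swap of rows 0 and 1.
    let (l, u, piv) = lu_decompose(&[0.0, 1.0, 2.0, 0.0], 2).unwrap();
    assert_eq!(piv, vec![1, 0]);
    assert_eq!(l, vec![1.0, 0.0, 0.0, 1.0]);
    assert_eq!(u, vec![2.0, 0.0, 0.0, 1.0]);
}
```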

Source

pub fn qr_decompose(&self) -> Result<(Tensor, Tensor), RuntimeError>

Compute the QR decomposition via Householder reflections.

Returns (Q, R) where A = Q * R, Q is orthogonal (m x min(m,n)), and R is upper-triangular (min(m,n) x n).

§Errors

Returns RuntimeError::InvalidOperation if the tensor is not 2-D.

Source

pub fn cholesky(&self) -> Result<Tensor, RuntimeError>

Compute the Cholesky decomposition: A = L * L^T.

Returns the lower-triangular factor L.

§Errors

Returns RuntimeError::InvalidOperation if the matrix is not square 2-D or is not positive definite.

§Determinism

Inner-loop summation uses Kahan compensation with fixed iteration order.

Source

pub fn det(&self) -> Result<f64, RuntimeError>

Compute the determinant via LU decomposition.

Returns the product of the U diagonal elements multiplied by the permutation parity sign. Returns 0.0 for singular matrices.

Source

pub fn solve(&self, b: &Tensor) -> Result<Tensor, RuntimeError>

Solve the linear system A * x = b via LU decomposition with partial pivoting.

§Arguments
  • self - Coefficient matrix A (n x n).
  • b - Right-hand side vector (length n).
§Returns

Solution vector x as a 1-D Tensor.

Source

pub fn lstsq(&self, b: &Tensor) -> Result<Tensor, RuntimeError>

Compute the ordinary least-squares solution minimizing ||A*x - b||_2.

Uses QR decomposition for numerical stability. The dot products for Q^T * b use BinnedAccumulatorF64 for determinism.

§Arguments
  • self - Design matrix A (m x n, m >= n).
  • b - Observation vector (length m).
§Returns

Solution vector x as a 1-D Tensor of length n.

Source

pub fn trace(&self) -> Result<f64, RuntimeError>

Compute the matrix trace (sum of diagonal elements) using BinnedAccumulatorF64.

Source

pub fn norm_frobenius(&self) -> Result<f64, RuntimeError>

Compute the Frobenius norm: sqrt(sum(a_ij^2)) using BinnedAccumulatorF64.

Source

pub fn eigh(&self) -> Result<(Vec<f64>, Tensor), RuntimeError>

Compute the symmetric eigenvalue decomposition via Householder tridiagonalization followed by implicit QR iteration with Wilkinson shift.

Returns (eigenvalues, eigenvectors) where eigenvalues are sorted in ascending order and eigenvectors form the columns of an n x n Tensor.

§Algorithm
  1. Householder reduction to tridiagonal form – O(n^3).
  2. Implicit QR iteration on the tridiagonal matrix – O(n^2) total.
  3. Eigenvectors are sign-canonicalized (first nonzero element positive).
§Determinism

Fixed row-major sweep order with smallest (i, j) tie-breaking. All reductions use fixed iteration order.

Source

pub fn matrix_rank(&self) -> Result<usize, RuntimeError>

Estimate the matrix rank by counting nonzero diagonal elements of R from a QR decomposition (tolerance 1e-10).

Source

pub fn kron(&self, other: &Tensor) -> Result<Tensor, RuntimeError>

Compute the Kronecker product A (x) B.

For A of shape (m, n) and B of shape (p, q), the result has shape (mp, nq).
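The index rule behind the (mp, nq) shape above: output block (i, j) is a_ij * B. A standalone sketch over flat row-major slices (hypothetical helper):

```rust
// Kronecker product of A (m x n) and B (p x q), both row-major and flat.
fn kron(a: &[f64], (m, n): (usize, usize), b: &[f64], (p, q): (usize, usize)) -> Vec<f64> {
    let cols = n * q;
    let mut out = vec![0.0; m * p * cols];
    for i in 0..m {
        for j in 0..n {
            // Each element a_ij scales a full copy of B into block (i, j).
            for r in 0..p {
                for c in 0..q {
                    out[(i * p + r) * cols + (j * q + c)] = a[i * n + j] * b[r * q + c];
                }
            }
        }
    }
    out
}

fn main() {
    // [1 2] (x) [3 4] = [3 4 6 8], a 1 x 4 matrix.
    assert_eq!(kron(&[1.0, 2.0], (1, 2), &[3.0, 4.0], (1, 2)), vec![3.0, 4.0, 6.0, 8.0]);
}
```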

Source

pub fn inverse(&self) -> Result<Tensor, RuntimeError>

Compute the matrix inverse via LU decomposition and column-wise forward/back substitution.

Source

pub fn norm_1(&self) -> Result<f64, RuntimeError>

Compute the matrix 1-norm (maximum absolute column sum) using BinnedAccumulatorF64.

Source

pub fn norm_inf(&self) -> Result<f64, RuntimeError>

Compute the matrix infinity-norm (maximum absolute row sum) using BinnedAccumulatorF64.

Source

pub fn cond(&self) -> Result<f64, RuntimeError>

Estimate the 2-norm condition number of the matrix.

For symmetric matrices, computes |lambda_max| / |lambda_min| via eigh. For general matrices, computes sqrt(lambda_max / lambda_min) from the eigenvalues of A^T * A (equivalent to sigma_max / sigma_min).

Source

pub fn schur(&self) -> Result<(Tensor, Tensor), RuntimeError>

Compute the real Schur decomposition: A = Q * T * Q^T.

T is quasi-upper-triangular (upper triangular with possible 2x2 blocks on the diagonal for complex eigenvalue pairs) and Q is orthogonal.

§Algorithm
  1. Householder reduction to upper Hessenberg form.
  2. Implicit single-shift QR iteration with Wilkinson shift and Givens rotations.
Source

pub fn svd(&self) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>

Compute the Singular Value Decomposition: A = U * diag(S) * Vt.

Returns (U, S, Vt) where S is a Vec<f64> of singular values in descending order, U is m x k, and Vt is k x n (k = min(m, n)).

§Algorithm

Eigendecomposition of A^T * A yields V and sigma^2. Then U = A * V * diag(1 / sigma_i). Sign-canonical: largest-magnitude element of each U column is positive.

§Determinism

All intermediate floating-point reductions use BinnedAccumulatorF64. Iteration order is fixed row-major.

Source

pub fn svd_truncated( &self, k: usize, ) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>

Compute a truncated SVD retaining only the top k singular triplets.

Returns (U_k, S_k, Vt_k) where U_k is m x k and Vt_k is k x n.

Source

pub fn pinv(&self) -> Result<Tensor, RuntimeError>

Compute the Moore-Penrose pseudoinverse via SVD.

A+ = V * diag(1/s_i) * U^T, with default tolerance max(m, n) * eps * max(S) for near-zero singular values.

Source

pub fn pinv_with_tol(&self, tol: f64) -> Result<Tensor, RuntimeError>

Compute the Moore-Penrose pseudoinverse via SVD with an explicit singular-value cutoff tolerance.

Source

pub fn matrix_exp(&self) -> Result<Tensor, RuntimeError>

Compute the matrix exponential exp(A) via scaling-and-squaring with a Pade(13,13) rational approximation.

§Errors

Returns RuntimeError::InvalidOperation if the matrix is not square 2-D.

Trait Implementations§

Source§

impl Clone for Tensor

Source§

fn clone(&self) -> Tensor

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Display for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more

Auto Trait Implementations§

§

impl Freeze for Tensor

§

impl !RefUnwindSafe for Tensor

§

impl !Send for Tensor

§

impl !Sync for Tensor

§

impl Unpin for Tensor

§

impl UnsafeUnpin for Tensor

§

impl !UnwindSafe for Tensor

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a pointer with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T> ToString for T
where T: Display + ?Sized,

Source§

fn to_string(&self) -> String

Converts the given value to a String. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.