pub struct Tensor {
    pub buffer: Buffer<f64>,
    /* private fields */
}

An N-dimensional tensor backed by a Buffer<f64>.
Supports element-wise arithmetic, matrix multiplication (2-D), and numerically stable reductions via BinnedAccumulator summation.

Fields
buffer: Buffer<f64>

Implementations

impl Tensor
pub fn randn(shape: &[usize], rng: &mut Rng) -> Self
Create a tensor filled with samples from the standard normal
distribution, drawn deterministically from rng.
pub fn from_vec(data: Vec<f64>, shape: &[usize]) -> Result<Self, RuntimeError>
Create a tensor from raw data and a shape. Returns an error if the number of elements does not match the shape.
pub fn is_contiguous(&self) -> bool
Whether this tensor is contiguous in memory (row-major, no offset).
pub fn slice(&self, ranges: &[(usize, usize)]) -> Result<Tensor, RuntimeError>
Create a zero-copy slice (view) of this tensor.
ranges contains (start, end) for each dimension.
pub fn to_contiguous(&self) -> Tensor
Materialize a contiguous copy if this tensor is non-contiguous.
pub fn broadcast_to(&self, target_shape: &[usize]) -> Result<Tensor, RuntimeError>
Create a broadcast view of this tensor to target_shape.
Uses stride=0 for dimensions that need broadcasting (size 1 -> target size).
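The stride rule can be sketched in plain Rust. This is illustrative only, not this crate's internals; `broadcast_strides` and its `String` error are hypothetical names:

```rust
// Illustrative sketch: compute the strides a broadcast view would use,
// assigning stride 0 to every dimension expanded from size 1.
fn broadcast_strides(
    shape: &[usize],
    strides: &[usize],
    target: &[usize],
) -> Result<Vec<usize>, String> {
    if shape.len() != target.len() {
        return Err("rank mismatch".into());
    }
    shape
        .iter()
        .zip(strides)
        .zip(target)
        .map(|((&dim, &st), &tgt)| match (dim, tgt) {
            (d, t) if d == t => Ok(st), // dimension already matches: keep stride
            (1, _) => Ok(0),            // size-1 dim: stride 0 repeats the element
            _ => Err(format!("cannot broadcast {dim} to {tgt}")),
        })
        .collect()
}

fn main() {
    // A [1, 3] row-major tensor (strides [3, 1]) broadcast to [4, 3]:
    let s = broadcast_strides(&[1, 3], &[3, 1], &[4, 3]).unwrap();
    assert_eq!(s, vec![0, 1]);
    // Incompatible sizes are rejected.
    assert!(broadcast_strides(&[2, 3], &[3, 1], &[4, 3]).is_err());
}
```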
pub fn get(&self, indices: &[usize]) -> Result<f64, RuntimeError>
Read the element at the given multi-dimensional index.
pub fn set(&mut self, indices: &[usize], val: f64) -> Result<(), RuntimeError>
Write the element at the given multi-dimensional index.
pub fn to_vec(&self) -> Vec<f64>
Extract the raw data as a Vec<f64>, respecting strides and offset.
pub fn reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>
Reshape to new_shape. The new shape must have the same total number
of elements. The returned tensor shares the underlying buffer.
pub fn add(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise addition (SIMD-accelerated for contiguous same-shape tensors).
pub fn sub(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise subtraction (SIMD-accelerated for contiguous same-shape tensors).
pub fn mul_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise (Hadamard) multiplication (SIMD-accelerated for contiguous same-shape tensors).
pub fn div_elem(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise division (SIMD-accelerated for contiguous same-shape tensors).
pub fn fused_mul_add(&self, b: &Tensor, c: &Tensor) -> Result<Tensor, RuntimeError>
Fused multiply-add: self * b + c element-wise in a single pass.
Eliminates the intermediate tensor that separate mul + add would create.
Uses software FMA (a * b + c with two roundings, not hardware FMA)
to preserve bit-identity with the non-fused path.
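The two-rounding distinction can be observed directly with std's `f64::mul_add`, which is the single-rounding fused form this method deliberately avoids:

```rust
fn main() {
    // (1 + 1e-15)(1 - 1e-15) - 1 is tiny but nonzero in exact arithmetic.
    let (a, b, c) = (1.0_f64 + 1e-15, 1.0_f64 - 1e-15, -1.0_f64);
    let software = a * b + c;    // two roundings: a*b rounds to 1.0 first
    let fused = a.mul_add(b, c); // single rounding: the tiny residual survives
    assert_eq!(software, 0.0);
    assert!(fused != 0.0);
}
```

Hardware FMA is more accurate but not bit-identical to the two-step path, which is why a determinism-focused fused op would round twice on purpose.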
pub fn elem_pow(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise power: a^b.
pub fn elem_atan2(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise atan2(self, other).
pub fn elem_hypot(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise hypot(self, other).
pub fn map(&self, f: impl Fn(f64) -> f64) -> Tensor
Apply a unary function to every element, returning a new contiguous tensor.
pub fn map_simd(&self, op: UnaryOp) -> Tensor
SIMD-accelerated unary map for known operations (sqrt, abs, neg, relu).
Uses AVX2 (4-wide f64) when available, scalar fallback otherwise.
Bit-identical to map(f) for the supported operations.
pub fn sum(&self) -> f64
Sum of all elements (binned accumulation — order-invariant, deterministic).
pub fn binned_sum(&self) -> f64
Sum of all elements using BinnedAccumulator (order-invariant, deterministic).
Bit-identical results regardless of element ordering or reduction schedule.
pub fn dispatched_sum(&self, ctx: &ReductionContext) -> f64
Sum with dispatched strategy based on execution context.
Uses Kahan in serial mode, Binned in parallel/@nogc/strict/linalg modes.
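As a sketch of what compensated (Kahan) summation buys over naive accumulation; this is illustrative plain Rust, not the crate's accumulators:

```rust
// Minimal Kahan (compensated) summation: a second variable carries the
// low-order bits that plain addition would discard.
fn kahan_sum(xs: &[f64]) -> f64 {
    let (mut sum, mut comp) = (0.0_f64, 0.0_f64);
    for &x in xs {
        let y = x - comp;     // apply the running correction
        let t = sum + y;      // big + small: low-order bits of y are lost...
        comp = (t - sum) - y; // ...and recovered here
        sum = t;
    }
    sum
}

fn main() {
    // Naive left-to-right summation loses the small terms entirely.
    let xs = [1e16, 1.0, 1.0, 1.0, 1.0, -1e16];
    let naive: f64 = xs.iter().sum();
    assert_eq!(naive, 0.0);
    assert_eq!(kahan_sum(&xs), 4.0);
}
```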
pub fn dispatched_mean(&self, ctx: &ReductionContext) -> f64
Mean with dispatched strategy based on execution context.
pub fn sum_axis(&self, axis: usize) -> Result<Tensor, RuntimeError>
Sum along a specific axis, returning a tensor with that dimension reduced.
Supports N-D tensors. The reduced axis becomes size 1 in the output. Uses BinnedAccumulator for order-invariant, deterministic summation.
Examples:
- 2D [M, N] with axis=0: result [1, N] (sum columns)
- 2D [M, N] with axis=1: result [M, 1] (sum rows)
- 3D [A, B, C] with axis=1: result [A, 1, C]
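The 2-D cases above can be sketched over a flat row-major buffer. Plain sequential adds stand in for the crate's BinnedAccumulator; `sum_axis_2d` is an illustrative helper, not this API:

```rust
// Illustrative 2-D axis reduction over a flat row-major buffer.
fn sum_axis_2d(data: &[f64], rows: usize, cols: usize, axis: usize) -> Vec<f64> {
    assert_eq!(data.len(), rows * cols);
    match axis {
        // axis=0: sum down each column, result shape [1, cols]
        0 => (0..cols)
            .map(|j| (0..rows).map(|i| data[i * cols + j]).sum())
            .collect(),
        // axis=1: sum across each row, result shape [rows, 1]
        1 => (0..rows)
            .map(|i| data[i * cols..(i + 1) * cols].iter().sum())
            .collect(),
        _ => panic!("axis out of range for 2-D"),
    }
}

fn main() {
    let m = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]; // [[1,2,3],[4,5,6]]
    assert_eq!(sum_axis_2d(&m, 2, 3, 0), vec![5.0, 7.0, 9.0]);
    assert_eq!(sum_axis_2d(&m, 2, 3, 1), vec![6.0, 15.0]);
}
```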
pub fn transpose(&self) -> Tensor
Transpose a tensor. For 2-D: swaps rows and columns (zero-copy view). For N-D: reverses all axes (zero-copy view).
pub fn transpose_axes(&self, axes: &[usize]) -> Result<Tensor, RuntimeError>
Transpose with explicit axis permutation (N-D). Zero-copy view.
axes must be a permutation of [0, 1, ..., ndim-1].
pub fn scalar_mul(&self, s: f64) -> Tensor
Multiply every element by a scalar, returning a new tensor.
pub fn from_vec_unchecked(data: Vec<f64>, shape: &[usize]) -> Tensor
Create a tensor from raw data and shape.
Panics if data.len() does not match the shape.
pub fn add_unchecked(&self, other: &Tensor) -> Tensor
Element-wise addition. Panics on shape mismatch.
pub fn sub_unchecked(&self, other: &Tensor) -> Tensor
Element-wise subtraction. Panics on shape mismatch.
pub fn mul_elem_unchecked(&self, other: &Tensor) -> Tensor
Element-wise multiplication. Panics on shape mismatch.
pub fn div_elem_unchecked(&self, other: &Tensor) -> Tensor
Element-wise division. Panics on shape mismatch.
pub fn matmul_unchecked(&self, other: &Tensor) -> Tensor
Matrix multiplication. Panics on dimension mismatch.
pub fn matmul(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Matrix multiplication for 2-D tensors.
self is (M, K), other is (K, N) => result is (M, N).
pub fn bmm(&self, other: &Tensor) -> Result<Tensor, RuntimeError>
Batched matrix multiplication.
self is [..., M, K], other is [..., K, N] => result is [..., M, N].
The batch dimensions must be identical (no broadcast).
For 2-D inputs, delegates to matmul.
pub fn softmax(&self) -> Result<Tensor, RuntimeError>
Softmax along the last dimension (two-pass stable algorithm).
Pass 1: find the max per row (prevents overflow in exp).
Pass 2: compute exp(x - max), accumulate the sum, normalize.
For a tensor of shape [..., N], softmax is applied independently
to each length-N slice along the last axis.
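The two-pass scheme for a single row can be sketched as (illustrative helper, not this crate's code):

```rust
// Two-pass numerically stable softmax for one row.
fn softmax_row(xs: &[f64]) -> Vec<f64> {
    // Pass 1: the row maximum, so exp never sees a large positive argument.
    let max = xs.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    // Pass 2: shifted exponentials, their sum, then normalization.
    let exps: Vec<f64> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

fn main() {
    // Naively, exp(1000.0) overflows to infinity; the shifted form does not.
    let s = softmax_row(&[1000.0, 1000.0]);
    assert_eq!(s, vec![0.5, 0.5]);
    let t = softmax_row(&[1.0, 2.0, 3.0]);
    assert!((t.iter().sum::<f64>() - 1.0).abs() < 1e-12);
}
```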
pub fn layer_norm(&self, gamma: &Tensor, beta: &Tensor, eps: f64) -> Result<Tensor, RuntimeError>
Layer normalization over the last dimension.
For each length-D slice along the last axis:
- mean = Σx / D (BinnedAccumulator)
- var = Σ(x - mean)² / D (BinnedAccumulator)
- normalized = (x - mean) / √(var + eps)
- output = gamma * normalized + beta
gamma and beta are 1-D tensors of shape [D].
eps is a small constant (typically 1e-5).
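The per-slice recipe above, sketched for one row. Plain sums stand in for BinnedAccumulator; `layer_norm_row` is an illustrative name:

```rust
// Layer normalization over one length-D slice.
fn layer_norm_row(x: &[f64], gamma: &[f64], beta: &[f64], eps: f64) -> Vec<f64> {
    let d = x.len() as f64;
    let mean = x.iter().sum::<f64>() / d;
    let var = x.iter().map(|&v| (v - mean) * (v - mean)).sum::<f64>() / d;
    let inv_std = 1.0 / (var + eps).sqrt();
    x.iter()
        .zip(gamma.iter().zip(beta))
        .map(|(&v, (&g, &b))| g * (v - mean) * inv_std + b)
        .collect()
}

fn main() {
    // mean = 2, var = 1 => normalized values are -1 and +1.
    let y = layer_norm_row(&[1.0, 3.0], &[1.0, 1.0], &[0.0, 0.0], 0.0);
    assert_eq!(y, vec![-1.0, 1.0]);
}
```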
pub fn relu(&self) -> Tensor
ReLU activation: max(0, x) element-wise.
pub fn tanh_activation(&self) -> Tensor
Tanh activation element-wise.
pub fn leaky_relu(&self, alpha: f64) -> Tensor
Leaky ReLU activation: max(alpha*x, x) element-wise.
pub fn mish(&self) -> Tensor
Mish activation: x * tanh(softplus(x)) = x * tanh(ln(1 + exp(x))).
pub fn argmax(&self) -> usize
Argmax: index of the maximum element (first occurrence, deterministic).
pub fn argmin(&self) -> usize
Argmin: index of the minimum element (first occurrence, deterministic).
pub fn one_hot(indices: &[usize], depth: usize) -> Result<Tensor, RuntimeError>
One-hot encoding: given a 1D tensor of integer indices and a depth, returns a 2D tensor of shape [len, depth].
pub fn cat(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>
Concatenate tensors along existing axis.
pub fn stack(tensors: &[&Tensor], axis: usize) -> Result<Tensor, RuntimeError>
Stack tensors along a new axis.
pub fn topk(&self, k: usize) -> Result<(Tensor, Vec<usize>), RuntimeError>
Top-k values and indices (largest k values from flat data).
pub fn gelu(&self) -> Tensor
GELU activation (approximate): x * 0.5 * (1 + tanh(√(2/π) * (x + 0.044715 * x³)))
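The quoted approximation, written out for a single scalar (illustrative helper, not this crate's code):

```rust
use std::f64::consts::PI;

// Tanh-based GELU approximation:
// x * 0.5 * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
fn gelu(x: f64) -> f64 {
    0.5 * x * (1.0 + ((2.0 / PI).sqrt() * (x + 0.044715 * x * x * x)).tanh())
}

fn main() {
    assert_eq!(gelu(0.0), 0.0);
    // Large positive inputs pass through almost unchanged;
    // large negative inputs are squashed to ~0.
    assert!((gelu(10.0) - 10.0).abs() < 1e-9);
    assert!(gelu(-10.0).abs() < 1e-9);
}
```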
pub fn linear(&self, weight: &Tensor, bias: &Tensor) -> Result<Tensor, RuntimeError>
Linear layer: output = input @ weight^T + bias
self is [..., in_features], weight is [out_features, in_features],
bias is [out_features].
Result is [..., out_features].
pub fn conv1d(&self, filters: &Tensor, bias: &Tensor) -> Result<Tensor, RuntimeError>
1D convolution: signal [signal_len] * filters [out_ch, kernel_size] + bias
Returns [out_ch, signal_len - kernel_size + 1] (valid mode, stride=1).
pub fn conv2d(&self, filters: &Tensor, bias: &Tensor, stride: usize) -> Result<Tensor, RuntimeError>
2D convolution — NCHW layout, valid mode, configurable stride.
Arguments
- self: [N, C_in, H, W] input tensor
- filters: [C_out, C_in, kH, kW]
- bias: [C_out]
- stride: spatial stride (default 1)
Returns
[N, C_out, H_out, W_out] where H_out = (H - kH) / stride + 1.
Uses BinnedAccumulatorF64 for every dot product — bit-identical results
across all runs and hardware configurations.
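The valid-mode output geometry stated above can be checked with a small helper (`conv2d_out_dims` is an illustrative name, not this API):

```rust
// Valid-mode output dimensions: out = (in - kernel) / stride + 1.
fn conv2d_out_dims(h: usize, w: usize, kh: usize, kw: usize, stride: usize) -> (usize, usize) {
    assert!(kh <= h && kw <= w && stride >= 1);
    ((h - kh) / stride + 1, (w - kw) / stride + 1)
}

fn main() {
    // A 3x3 kernel over a 28x28 image:
    assert_eq!(conv2d_out_dims(28, 28, 3, 3, 1), (26, 26));
    assert_eq!(conv2d_out_dims(28, 28, 3, 3, 2), (13, 13));
}
```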
pub fn maxpool2d(&self, ph: usize, pw: usize) -> Result<Tensor, RuntimeError>
2D max-pooling — NCHW layout, non-overlapping windows.
- self: [N, C, H, W]
- ph, pw: pool height/width (stride = window size)
Returns [N, C, H/ph, W/pw].
pub fn scaled_dot_product_attention(queries: &Tensor, keys: &Tensor, values: &Tensor) -> Result<Tensor, RuntimeError>
Scaled dot-product attention (single head).
queries is [..., T, d_k]
keys is [..., S, d_k]
values is [..., S, d_v]
Computes: softmax(Q × Kᵀ / √d_k) × V
Returns [..., T, d_v].
pub fn transpose_last_two(&self) -> Result<Tensor, RuntimeError>
Transpose the last two dimensions of a tensor.
[..., A, B] → [..., B, A]
pub fn from_bytes(bytes: &[u8], shape: &[usize], dtype: &str) -> Result<Tensor, RuntimeError>
Create a tensor from raw bytes.
Interprets bytes as a contiguous block of f64 (8 bytes each) or
f32 (4 bytes each, promoted to f64) values and maps them into a
Tensor with the given shape.
dtype must be "f64" or "f32".
For f64: bytes.len() must equal shape_numel * 8. For f32: bytes.len() must equal shape_numel * 4.
The returned tensor owns its buffer (copied from the raw bytes) but performs exactly one allocation for the data vector.
pub fn split_heads(&self, num_heads: usize) -> Result<Tensor, RuntimeError>
Reshape a 3D tensor [batch, seq, model_dim] into 4D
[batch, num_heads, seq, head_dim] by splitting the last dimension.
This is a zero-copy view — it only changes shape/strides metadata.
model_dim must be divisible by num_heads.
pub fn merge_heads(&self) -> Result<Tensor, RuntimeError>
Merge heads back: reshape 4D [batch, num_heads, seq, head_dim] into
3D [batch, seq, model_dim]. Materializes if non-contiguous.
pub fn view_reshape(&self, new_shape: &[usize]) -> Result<Tensor, RuntimeError>
View-only reshape: reinterpret shape without copying. Only works on contiguous tensors. Falls back to copy if non-contiguous.
pub fn argsort(&self) -> Tensor
Returns indices that would sort the flattened tensor in ascending order. Uses f64::total_cmp for deterministic ordering of NaN.
pub fn gather(&self, dim: usize, indices: &Tensor) -> Result<Tensor, RuntimeError>
Gather elements from the tensor along a dimension using an index tensor.
- 1D: result[i] = self[indices[i]]
- 2D, dim=0: result[i][j] = self[indices[i][j]][j]
- 2D, dim=1: result[i][j] = self[i][indices[i][j]]
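The 2-D index rules can be sketched over a flat row-major buffer (`gather_2d` is an illustrative helper, not this API):

```rust
// 2-D gather semantics over row-major data.
// idx has the same [rows, cols] shape as the output.
fn gather_2d(data: &[f64], rows: usize, cols: usize, dim: usize, idx: &[usize]) -> Vec<f64> {
    (0..rows * cols)
        .map(|k| {
            let (i, j) = (k / cols, k % cols);
            match dim {
                0 => data[idx[k] * cols + j], // result[i][j] = self[idx[i][j]][j]
                1 => data[i * cols + idx[k]], // result[i][j] = self[i][idx[i][j]]
                _ => panic!("dim out of range"),
            }
        })
        .collect()
}

fn main() {
    let m = [1.0, 2.0, 3.0, 4.0]; // [[1,2],[3,4]]
    assert_eq!(gather_2d(&m, 2, 2, 0, &[1, 0, 0, 1]), vec![3.0, 2.0, 1.0, 4.0]);
    assert_eq!(gather_2d(&m, 2, 2, 1, &[1, 0, 0, 1]), vec![2.0, 1.0, 3.0, 4.0]);
}
```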
pub fn scatter(&self, dim: usize, indices: &Tensor, src: &Tensor) -> Result<Tensor, RuntimeError>
Scatter src values into a tensor of the given shape at indices along a dimension.
- 1D: result[indices[i]] = src[i]
- 2D, dim=0: result[indices[i][j]][j] = src[i][j]
- 2D, dim=1: result[i][indices[i][j]] = src[i][j]
pub fn index_select(&self, dim: usize, indices: &Tensor) -> Result<Tensor, RuntimeError>
Select slices along a dimension by index.
- 2D, dim=0: selects rows
- 2D, dim=1: selects columns
pub fn tensor_where(&self, condition: &Tensor, other: &Tensor) -> Result<Tensor, RuntimeError>
Element-wise conditional select.
For each element i, returns self[i] if condition[i] != 0.0, else other[i].
pub fn nonzero(&self) -> Tensor
Returns a 1-D tensor of flat indices where elements are non-zero.
pub fn masked_fill(&self, mask: &Tensor, value: f64) -> Result<Tensor, RuntimeError>
Fill elements where mask is non-zero with value.
pub fn mean_axis(&self, axis: usize, keepdim: bool) -> Result<Tensor, RuntimeError>
Mean along an axis with optional keepdim.
pub fn max_axis(&self, axis: usize, keepdim: bool) -> Result<(Tensor, Tensor), RuntimeError>
Max along an axis with optional keepdim. Returns (values, indices).
pub fn min_axis(&self, axis: usize, keepdim: bool) -> Result<(Tensor, Tensor), RuntimeError>
Min along an axis with optional keepdim. Returns (values, indices).
pub fn var_axis(&self, axis: usize, keepdim: bool) -> Result<Tensor, RuntimeError>
Variance along an axis with optional keepdim.
pub fn std_axis(&self, axis: usize, keepdim: bool) -> Result<Tensor, RuntimeError>
Standard deviation along an axis with optional keepdim.
pub fn prod_axis(&self, axis: usize, keepdim: bool) -> Result<Tensor, RuntimeError>
Product along an axis with optional keepdim.
pub fn sort_axis(&self, axis: usize, descending: bool) -> Result<Tensor, RuntimeError>
Sort along an axis (stable sort). Returns the sorted tensor. For N-D tensors, sorts slices along the specified axis.
pub fn argsort_axis(&self, axis: usize, descending: bool) -> Result<Tensor, RuntimeError>
N-D argsort along an axis. Returns indices tensor.
pub fn einsum(notation: &str, inputs: &[&Tensor]) -> Result<Tensor, RuntimeError>
Einstein summation notation. Supports patterns like “ij,jk->ik” (matmul), “ii->i” (diagonal), “ij->ji” (transpose), “ijk,ikl->ijl” (batched matmul). Uses BinnedAccumulator for all reductions.
pub fn unsqueeze(&self, dim: usize) -> Result<Tensor, RuntimeError>
Add a dimension of size 1 at position dim.
pub fn squeeze(&self, dim: Option<usize>) -> Result<Tensor, RuntimeError>
Remove a dimension of size 1 at position dim.
If dim is None, removes all dimensions of size 1.
pub fn expand(&self, target_shape: &[usize]) -> Result<Tensor, RuntimeError>
Broadcast without copying. Returns a view with stride=0 for broadcasted dims.
Same as broadcast_to but named for consistency with the gap-fix plan.
pub fn flatten(&self, start_dim: usize, end_dim: usize) -> Result<Tensor, RuntimeError>
Flatten a range of dimensions [start_dim, end_dim] into a single dimension.
pub fn chunk(&self, n: usize, dim: usize) -> Result<Vec<Tensor>, RuntimeError>
Split tensor into n roughly equal chunks along dimension dim.
impl Tensor
pub fn lu_decompose(&self) -> Result<(Tensor, Tensor, Vec<usize>), RuntimeError>
LU decomposition with partial pivoting. Returns (L, U, pivot_indices). Input must be square 2D.
Determinism contract: Pivot selection uses strict > comparison on
absolute values. When two candidates have identical absolute values, the
first (lowest row index) is chosen. This is deterministic given identical
input bits.
pub fn qr_decompose(&self) -> Result<(Tensor, Tensor), RuntimeError>
QR decomposition via Modified Gram-Schmidt. Returns (Q, R). Input must be 2D with rows >= cols.
pub fn cholesky(&self) -> Result<Tensor, RuntimeError>
Cholesky decomposition: A = L * L^T. Input must be symmetric positive definite 2D.
pub fn det(&self) -> Result<f64, RuntimeError>
Determinant via LU decomposition: product of U diagonal * parity.
pub fn solve(&self, b: &Tensor) -> Result<Tensor, RuntimeError>
Solve Ax = b via LU decomposition. self = A (n x n), b = vector (n).
pub fn lstsq(&self, b: &Tensor) -> Result<Tensor, RuntimeError>
Least squares solution: min ||Ax - b||_2 via QR decomposition. self = A (m x n, m >= n), b = vector (m).
pub fn trace(&self) -> Result<f64, RuntimeError>
Matrix trace: sum of diagonal elements.
pub fn norm_frobenius(&self) -> Result<f64, RuntimeError>
Frobenius norm: sqrt(sum(aij^2)).
pub fn eigh(&self) -> Result<(Vec<f64>, Tensor), RuntimeError>
Eigenvalue decomposition for symmetric matrices (Jacobi method). Returns (eigenvalues sorted ascending, eigenvectors n x n). DETERMINISM: fixed row-major sweep order, smallest (i,j) tie-breaking.
pub fn matrix_rank(&self) -> Result<usize, RuntimeError>
Matrix rank via SVD: count singular values > tolerance.
pub fn inverse(&self) -> Result<Tensor, RuntimeError>
Matrix inverse via LU decomposition + back-substitution.
pub fn norm_1(&self) -> Result<f64, RuntimeError>
1-norm: maximum absolute column sum.
pub fn norm_inf(&self) -> Result<f64, RuntimeError>
Infinity norm: maximum absolute row sum.
pub fn cond(&self) -> Result<f64, RuntimeError>
Condition number via eigenvalue ratio. For symmetric matrices: |lambda_max| / |lambda_min|. For general matrices: sigma_max / sigma_min, computed as the square root of the extreme eigenvalue ratio of A^T A.
pub fn schur(&self) -> Result<(Tensor, Tensor), RuntimeError>
Real Schur decomposition: A = Q * T * Q^T. Uses Hessenberg reduction + implicit double-shift QR.
pub fn svd(&self) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>
Compute the Singular Value Decomposition: A = U @ diag(S) @ Vt.
Returns (U, S, Vt) as (Tensor, Vec<f64>, Tensor).
Implementation: via eigendecomposition of A^TA (for V and S^2), then U = AV*diag(1/s_i).
Determinism contract: All intermediate float reductions use
BinnedAccumulatorF64. Iteration order is fixed row-major.
pub fn svd_truncated(&self, k: usize) -> Result<(Tensor, Vec<f64>, Tensor), RuntimeError>
Truncated SVD — only the top k singular values/vectors.
Returns (U_k, S_k, Vt_k) where U_k is m x k, Vt_k is k x n.
pub fn pinv(&self) -> Result<Tensor, RuntimeError>
Compute the Moore-Penrose pseudoinverse via SVD. A+ = V @ diag(1/s_i) @ Ut (with default tolerance for near-zero singular values).
pub fn pinv_with_tol(&self, tol: f64) -> Result<Tensor, RuntimeError>
Compute the Moore-Penrose pseudoinverse via SVD with explicit tolerance.
pub fn matrix_exp(&self) -> Result<Tensor, RuntimeError>
Matrix exponential via scaling and squaring with Pade(13,13) approximation.
Trait Implementations

Auto Trait Implementations
impl Freeze for Tensor
impl !RefUnwindSafe for Tensor
impl !Send for Tensor
impl !Sync for Tensor
impl Unpin for Tensor
impl UnsafeUnpin for Tensor
impl !UnwindSafe for Tensor
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise.