pub enum Tensor {
F32(ArrayD<f32>),
F64(ArrayD<f64>),
F16(ArrayD<f16>),
BF16(ArrayD<bf16>),
I64(ArrayD<i64>),
C32(ArrayD<Complex32>),
C64(ArrayD<Complex64>),
CF16(ArrayD<Complex<f16>>),
CBF16(ArrayD<Complex<bf16>>),
Sparse(SparseTensor),
}
§Implementations
impl Tensor
pub fn leaky_relu(&self, negative_slope: f32) -> Result<Tensor>
Leaky ReLU activation: f(x) = x for x > 0, and negative_slope * x otherwise.
pub fn silu(&self) -> Result<Tensor>
SiLU (Sigmoid-Linear Unit) activation function.
Also known as the Swish activation: f(x) = x * sigmoid(x).
§Performance
Uses scirs2-core’s SIMD-accelerated Swish for tensors with ≥256 elements. SiLU/Swish is used in EfficientNet, GPT-NeoX, and many modern architectures.
§Returns
A tensor with SiLU applied element-wise.
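The crate's SIMD-accelerated path is not shown here, but the underlying math is a one-liner per element. A minimal standalone sketch in plain Rust (the `silu` and `silu_slice` helpers below are illustrative, not part of this crate's API):

```rust
/// Scalar SiLU/Swish: f(x) = x * sigmoid(x).
fn silu(x: f32) -> f32 {
    x * (1.0 / (1.0 + (-x).exp()))
}

/// Element-wise SiLU over a slice, mirroring what the tensor
/// implementation applies to each element.
fn silu_slice(xs: &[f32]) -> Vec<f32> {
    xs.iter().map(|&x| silu(x)).collect()
}

fn main() {
    let out = silu_slice(&[-1.0, 0.0, 1.0]);
    // sigmoid(0) = 0.5, so silu(0) = 0
    assert_eq!(out[1], 0.0);
    // silu(1) = sigmoid(1) ≈ 0.7311
    assert!((out[2] - 0.731_058_6).abs() < 1e-5);
    println!("{:?}", out);
}
```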
impl Tensor
pub fn magnitude(&self) -> Result<Tensor>
Get the magnitude of a complex tensor with numerical stability enhancements.
Uses numerically stable algorithms to avoid overflow/underflow in intermediate calculations.
§Returns
A tensor containing the magnitudes.
pub fn to_complex(&self) -> Result<Tensor>
pub fn complex_hadamard(&self, other: &Tensor) -> Result<Tensor>
Complex element-wise multiplication (Hadamard product) for two complex tensors.
This operation is crucial for transformer architectures using complex-valued layers. Optimized for modern hardware architectures.
§Arguments
other - The other complex tensor to multiply with
§Returns
A tensor containing the element-wise complex multiplication result.
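The arithmetic behind the Hadamard product is plain complex multiplication applied per element. A standalone sketch using `(re, im)` tuples in place of the crate's complex tensor types (the `cmul` and `complex_hadamard` helpers are illustrative only):

```rust
/// (a + bi) * (c + di) = (ac - bd) + (ad + bc)i
fn cmul(a: (f32, f32), b: (f32, f32)) -> (f32, f32) {
    (a.0 * b.0 - a.1 * b.1, a.0 * b.1 + a.1 * b.0)
}

/// Element-wise (Hadamard) product of two complex vectors of equal length.
fn complex_hadamard(x: &[(f32, f32)], y: &[(f32, f32)]) -> Vec<(f32, f32)> {
    assert_eq!(x.len(), y.len(), "shapes must match");
    x.iter().zip(y).map(|(&a, &b)| cmul(a, b)).collect()
}

fn main() {
    // (1 + 2i) * (3 + 4i) = 3 + 4i + 6i + 8i^2 = -5 + 10i
    let out = complex_hadamard(&[(1.0, 2.0)], &[(3.0, 4.0)]);
    assert_eq!(out[0], (-5.0, 10.0));
}
```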
pub fn fft(&self) -> Result<Tensor>
Fast Fourier Transform (FFT) for complex tensors with numerical stability enhancements.
Essential for advanced transformer architectures using frequency domain operations. Optimized for modern SIMD architectures with overflow/underflow protection.
§Returns
A tensor containing the FFT result.
pub fn complex_matmul(&self, other: &Tensor) -> Result<Tensor>
Complex matrix multiplication optimized for modern architectures with numerical stability.
Uses SIMD instructions and parallel processing for maximum performance. Essential for complex-valued transformer layers with overflow/underflow protection.
§Arguments
other - The other complex tensor to multiply with
§Returns
A tensor containing the complex matrix multiplication result.
pub fn complex_relu(&self) -> Result<Tensor>
Optimized complex activation function for advanced architectures.
Applies complex ReLU activation: ReLU(Re(z)) + i*ReLU(Im(z)). Optimized for modern SIMD architectures.
§Returns
A tensor with complex ReLU activation applied.
impl Tensor
pub fn with_shape(data: Vec<f32>, shape: Vec<usize>) -> Result<Self>
Creates a tensor from data with a specific shape.
This is an alias for from_vec for backward compatibility with tests.
§Arguments
data - A vector of f32 values
shape - The desired shape of the tensor
§Returns
A tensor with the specified shape containing the provided data.
§Example
use trustformers_core::tensor::Tensor;
let tensor = Tensor::with_shape(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2])?;
assert_eq!(tensor.shape(), vec![2, 2]);
pub fn from_vec_i64(data: Vec<i64>, shape: &[usize]) -> Result<Self>
Creates a tensor from i64 data with a specific shape.
§Arguments
data - A vector of i64 values
shape - The desired shape of the tensor
§Returns
A tensor with the specified shape containing the provided i64 data.
§Example
use trustformers_core::tensor::Tensor;
let tensor = Tensor::from_vec_i64(vec![1, 2, 3, 4], &[2, 2])?;
assert_eq!(tensor.shape(), vec![2, 2]);
pub fn randn(shape: &[usize]) -> Result<Self>
Creates a tensor filled with random values from a normal distribution.
§Arguments
shape - The desired shape of the tensor
§Returns
A tensor filled with random values from N(0, 1).
§Example
use trustformers_core::tensor::Tensor;
let tensor = Tensor::randn(&[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
pub fn zeros_like(tensor: &Tensor) -> Result<Self>
Creates a tensor filled with zeros with the same shape as the input tensor.
§Arguments
tensor - The tensor to match shape from
§Returns
A tensor of the same shape filled with zeros.
§Example
use trustformers_core::tensor::Tensor;
let input = Tensor::randn(&[2, 3])?;
let zeros = Tensor::zeros_like(&input)?;
assert_eq!(zeros.shape(), vec![2, 3]);
pub fn ones_like(tensor: &Tensor) -> Result<Self>
Creates a tensor filled with ones with the same shape as the input tensor.
§Arguments
tensor - The tensor to match shape from
§Returns
A tensor of the same shape filled with ones.
§Example
use trustformers_core::tensor::Tensor;
let input = Tensor::randn(&[2, 3])?;
let ones = Tensor::ones_like(&input)?;
assert_eq!(ones.shape(), vec![2, 3]);
pub fn from_data(data: Vec<f32>, shape: &[usize]) -> Result<Self>
Creates a tensor from data with specified shape.
§Arguments
data - A vector of f32 values
shape - The desired shape of the tensor
§Returns
A tensor containing the provided data reshaped to the specified shape.
§Example
use trustformers_core::tensor::Tensor;
let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_data(data, &[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
pub fn from_slice(data: &[f32], shape: &[usize]) -> Result<Self>
Creates a tensor from a slice with specified shape.
§Arguments
data - A slice of f32 values
shape - The desired shape of the tensor
§Returns
A tensor containing the provided data reshaped to the specified shape.
§Example
use trustformers_core::tensor::Tensor;
let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_slice(&data, &[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
pub fn randn_like(tensor: &Tensor) -> Result<Self>
Creates a tensor filled with random values with the same shape as the input tensor.
§Arguments
tensor - The tensor to match shape from
§Returns
A tensor of the same shape filled with random values from N(0, 1).
§Example
use trustformers_core::tensor::Tensor;
let input = Tensor::zeros(&[2, 3])?;
let random = Tensor::randn_like(&input)?;
assert_eq!(random.shape(), vec![2, 3]);
pub fn zeros_f64(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (f64 precision).
pub fn zeros_i64(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (i64 integers).
pub fn zeros_c32(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (complex f32).
pub fn zeros_c64(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (complex f64).
pub fn zeros_f16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (f16 precision).
pub fn zeros_bf16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (bf16 precision).
pub fn zeros_cf16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (complex f16).
pub fn zeros_cbf16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with zeros (complex bf16).
pub fn complex_f64(real: Vec<f64>, imag: Vec<f64>, shape: &[usize]) -> Result<Self>
Creates a complex tensor from real and imaginary parts (f64 precision).
pub fn from_vec_with_dtype(data: Vec<f64>, shape: &[usize], dtype: DType) -> Result<Self>
Creates a tensor from a Vec with explicit dtype. TEMPORARY: Uses ndarray. Will be replaced with SciRS2-Core.
§Arguments
data - The data as a Vec<f64>
shape - The desired shape
dtype - The desired data type
pub fn full_with_dtype(shape: &[usize], value: f64, dtype: DType) -> Result<Self>
Creates a tensor filled with a constant value with specified dtype.
§Arguments
shape - Shape of the tensor as a slice
value - The fill value (will be cast to target dtype)
dtype - Target data type
§Returns
A tensor filled with the constant value.
§Note
TEMPORARY: Uses ndarray - will be replaced with SciRS2-Core in future migration
pub fn ones_f16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with ones (f16 precision).
pub fn ones_bf16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with ones (bf16 precision).
pub fn randn_f16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with random values from a normal distribution (f16 precision).
pub fn randn_bf16(shape: &[usize]) -> Result<Self>
Creates a tensor filled with random values from a normal distribution (bf16 precision).
pub fn complex_f16(real: Vec<f32>, imag: Vec<f32>, shape: &[usize]) -> Result<Self>
Creates a complex tensor from real and imaginary parts (f16 precision).
pub fn complex_bf16(real: Vec<f32>, imag: Vec<f32>, shape: &[usize]) -> Result<Self>
Creates a complex tensor from real and imaginary parts (bf16 precision).
pub fn zeros_dtype(dtype: DType, shape: &[usize]) -> Result<Self>
pub fn ones_dtype(dtype: DType, shape: &[usize]) -> Result<Self>
pub fn full_with_shape(shape: &[usize], value: f32) -> Result<Self>
pub fn from_slice_f64(data: &[f64], shape: &[usize]) -> Result<Self>
pub fn from_slice_i64(data: &[i64], shape: &[usize]) -> Result<Self>
pub fn from_slice_i32(data: &[i32], shape: &[usize]) -> Result<Self>
pub fn from_scalar(value: f32, dtype: DType) -> Result<Self>
impl Tensor
pub fn cross_entropy(&self, targets: &Tensor, reduction: &str) -> Result<Tensor>
Cross entropy loss.
pub fn cosine_similarity(&self, other: &Tensor, dim: i32, eps: f32) -> Result<Tensor>
Cosine similarity.
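The role of the `eps` parameter is easiest to see in a flat sketch: it guards the division when one of the vectors has (near-)zero norm. A standalone illustration in plain Rust (the `cosine_similarity` helper below is an assumption about the formula, not the crate's code; libraries differ on exactly where `eps` is applied):

```rust
/// Cosine similarity between two vectors with an epsilon guard against
/// division by zero: dot(a, b) / max(||a|| * ||b||, eps).
fn cosine_similarity(a: &[f32], b: &[f32], eps: f32) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb).max(eps)
}

fn main() {
    // Identical direction -> 1, orthogonal -> 0.
    assert!((cosine_similarity(&[1.0, 0.0], &[1.0, 0.0], 1e-8) - 1.0).abs() < 1e-6);
    assert!(cosine_similarity(&[1.0, 0.0], &[0.0, 1.0], 1e-8).abs() < 1e-6);
    // Zero vector: the eps guard keeps the result finite (0 here).
    assert_eq!(cosine_similarity(&[0.0, 0.0], &[1.0, 0.0], 1e-8), 0.0);
}
```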
pub fn log_softmax(&self, dim: i32) -> Result<Tensor>
Log softmax.
impl Tensor
pub fn add(&self, other: &Tensor) -> Result<Tensor>
Element-wise addition with numerical stability enhancements.
Includes overflow/underflow protection and NaN/infinity detection.
pub fn broadcast_add(&self, other: &Tensor) -> Result<Tensor>
Broadcasting addition.
pub fn scalar_mul(&self, scalar: f32) -> Result<Tensor>
Scalar multiplication.
pub fn scalar_div(&self, scalar: f32) -> Result<Tensor>
Scalar division.
pub fn add_scalar(&self, scalar: f32) -> Result<Tensor>
Scalar addition.
pub fn sub_scalar(&self, scalar: f32) -> Result<Tensor>
Scalar subtraction.
pub fn div_scalar(&self, scalar: f32) -> Result<Tensor>
Division by scalar.
pub fn mul_scalar(&self, scalar: f32) -> Result<Tensor>
Multiplication by scalar (alias for scalar_mul).
impl Tensor
pub fn shapes_are_broadcastable(shape1: &[usize], shape2: &[usize]) -> bool
Check if two shapes are broadcastable according to numpy-style broadcasting rules.
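NumPy-style broadcasting compares shapes from the trailing dimension backwards; each dimension pair must be equal or contain a 1, and missing leading dimensions are treated as compatible. A standalone sketch of that rule (illustrative only; the crate's implementation is not shown here):

```rust
/// NumPy-style broadcast check: align shapes at the trailing dimension;
/// each aligned pair of dims must be equal or one of them must be 1.
/// Unmatched leading dims are implicitly compatible.
fn shapes_are_broadcastable(shape1: &[usize], shape2: &[usize]) -> bool {
    shape1
        .iter()
        .rev()
        .zip(shape2.iter().rev())
        .all(|(&a, &b)| a == b || a == 1 || b == 1)
}

fn main() {
    assert!(shapes_are_broadcastable(&[8, 1, 6, 1], &[7, 1, 5]));
    assert!(shapes_are_broadcastable(&[256, 256, 3], &[3]));
    // Trailing dims 3 vs 4 conflict, so this pair is rejected.
    assert!(!shapes_are_broadcastable(&[4, 3], &[4]));
}
```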
impl Tensor
pub fn matmul(&self, other: &Tensor) -> Result<Tensor>
Matrix multiplication with numerical stability enhancements.
Performs matrix multiplication between two tensors with support for:
- 2D matrix multiplication
- Batched 3D matrix multiplication
- Multi-headed 4D matrix multiplication (for attention mechanisms)
§Numerical Stability Features
- Automatic detection of unstable values (NaN, infinity, extreme values)
- Kahan summation algorithm for unstable inputs
- Memory layout optimization for performance
- Overflow/underflow protection
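Kahan (compensated) summation, mentioned above as the fallback for unstable inputs, carries the rounding error lost at each addition so long f32 dot products stay accurate. A minimal standalone sketch (the `kahan_sum` helper is illustrative, not the crate's code):

```rust
/// Kahan (compensated) summation: tracks the rounding error lost at each
/// step so long accumulations stay accurate even in f32.
fn kahan_sum(xs: &[f32]) -> f32 {
    let (mut sum, mut c) = (0.0f32, 0.0f32);
    for &x in xs {
        let y = x - c;     // subtract the carried-over error
        let t = sum + y;   // low-order bits of y may be lost here...
        c = (t - sum) - y; // ...and are recovered into c
        sum = t;
    }
    sum
}

fn main() {
    // Adding many small values to a large one loses them in a naive f32 sum:
    // 1e8 + 1.0 rounds back to 1e8, so 1000 such additions vanish entirely.
    let mut xs = vec![1.0e8f32];
    xs.extend(std::iter::repeat(1.0f32).take(1000));
    let naive: f32 = xs.iter().sum();
    let compensated = kahan_sum(&xs);
    let exact = 1.000_010_0e8f32;
    assert!((compensated - exact).abs() <= (naive - exact).abs());
}
```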
§Arguments
other - The tensor to multiply with (right operand)
§Returns
A new tensor containing the matrix multiplication result.
§Errors
ShapeError: If tensors have incompatible dimensions for matrix multiplication
TensorOpError: If the operation is not supported for the tensor types
§Examples
use trustformers_core::tensor::Tensor;
// 2D matrix multiplication
let a = Tensor::randn(&[128, 64])?;
let b = Tensor::randn(&[64, 256])?;
let result = a.matmul(&b)?; // Shape: [128, 256]
// Batched matrix multiplication
let a = Tensor::randn(&[32, 128, 64])?; // 32 batches
let b = Tensor::randn(&[32, 64, 256])?;
let result = a.matmul(&b)?; // Shape: [32, 128, 256]
// Multi-headed attention matrices
let q = Tensor::randn(&[8, 12, 512, 64])?; // 8 batches, 12 heads
let k = Tensor::randn(&[8, 12, 64, 512])?;
let attention = q.matmul(&k)?; // Shape: [8, 12, 512, 512]
pub fn norm(&self) -> Result<f32>
Calculate the L2 norm (Euclidean norm) of the tensor.
Computes the square root of the sum of squares of all elements in the tensor. This is equivalent to the Euclidean distance from the origin in the tensor’s vector space.
§Mathematical Definition
For a tensor x, the L2 norm is: ||x||_2 = sqrt(Σ x_i²)
§Performance
Uses SIMD-accelerated computation via scirs2-core when the tensor can be viewed as a contiguous 1D array.
§Returns
The L2 norm as a scalar f32 value.
§Errors
TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;
let tensor = Tensor::from_vec(vec![3.0, 4.0], &[2])?;
let norm = tensor.norm()?; // Should be 5.0 (sqrt(3² + 4²))
pub fn norm_squared(&self) -> Result<Tensor>
Calculate the squared L2 norm of the tensor.
Computes the sum of squares of all elements in the tensor without taking
the square root. This is computationally more efficient than norm() when
only the squared norm is needed.
§Mathematical Definition
For a tensor x, the squared L2 norm is: ||x||_2² = Σ x_i²
§Returns
A scalar tensor containing the squared norm value.
§Errors
TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;
let tensor = Tensor::from_vec(vec![3.0, 4.0], &[2])?;
let norm_squared = tensor.norm_squared()?; // Should be 25.0 (3² + 4²)
pub fn clip_grad_norm(&self, max_norm: f32) -> Result<Tensor>
Clip gradients based on their norm to prevent gradient explosion.
This function implements gradient clipping by scaling the tensor values to ensure the L2 norm does not exceed the specified maximum value. This is a common technique used in training deep neural networks to prevent gradient explosion.
§Algorithm
- Calculate the current L2 norm of the tensor
- If norm ≤ max_norm, return the tensor unchanged
- If norm > max_norm, scale the tensor by (max_norm / norm)
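The three steps above can be sketched on a flat gradient slice in a few lines of plain Rust (the `clip_grad_norm` helper below is illustrative, operating in place rather than returning a new tensor):

```rust
/// Global-norm gradient clipping on a flat slice: if ||g|| > max_norm,
/// scale every element by max_norm / ||g||; otherwise leave it untouched.
fn clip_grad_norm(grad: &mut [f32], max_norm: f32) {
    let norm = grad.iter().map(|g| g * g).sum::<f32>().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm;
        for g in grad.iter_mut() {
            *g *= scale;
        }
    }
}

fn main() {
    let mut g = [3.0f32, 4.0]; // ||g|| = 5
    clip_grad_norm(&mut g, 1.0);
    let new_norm = g.iter().map(|x| x * x).sum::<f32>().sqrt();
    assert!((new_norm - 1.0).abs() < 1e-6);
    // Direction is preserved: (3, 4)/5 = (0.6, 0.8).
    assert!((g[0] - 0.6).abs() < 1e-6);
}
```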
§Arguments
max_norm - The maximum allowed norm value
§Returns
A new tensor with clipped gradient values.
§Errors
TensorOpError: If norm calculation or scalar multiplication fails
§Examples
use trustformers_core::tensor::Tensor;
// Create a tensor with large gradient values
let gradients = Tensor::from_vec(vec![10.0, 20.0, 30.0], &[3])?;
// Clip to maximum norm of 1.0
let clipped = gradients.clip_grad_norm(1.0)?;
// The resulting tensor will have norm ≤ 1.0
assert!(clipped.norm()? <= 1.0);
§Use in Training
use trustformers_core::tensor::Tensor;
// Typical usage in gradient clipping during training
let max_gradient_norm = 1.0;
let clipped_gradients = gradients.clip_grad_norm(max_gradient_norm)?;
pub fn norm_dim(&self, p: i32, dims: Option<Vec<i32>>, keepdim: bool) -> Result<Tensor>
Calculate L2 norm along specified dimension(s).
This function computes the L2 norm along one or more dimensions, which is useful for normalization operations (e.g., in contrastive learning, CLIP models).
§Arguments
p - The order of the norm (typically 2 for L2 norm)
dims - Optional dimensions along which to compute the norm. If None, computes the norm across all dimensions (equivalent to norm()).
keepdim - If true, keeps the reduced dimensions with size 1
§Returns
A tensor containing the L2 norm values along the specified dimensions.
§Errors
TensorOpError: If the operation is not supported for the tensor type
ShapeError: If the specified dimensions are out of bounds
§Examples
use trustformers_core::tensor::Tensor;
// Create a 2D tensor
let tensor = Tensor::from_vec(vec![3.0, 4.0, 1.0, 2.0], &[2, 2])?;
// Compute L2 norm along last dimension
let norm = tensor.norm_dim(2, Some(vec![-1]), true)?;
// Result: [[5.0], [sqrt(5)]]
impl Tensor
pub fn pow_scalar(&self, exponent: f64) -> Result<Tensor>
Raises each element to the given scalar power.
pub fn exp(&self) -> Result<Tensor>
Element-wise exponential function.
§Returns
A new tensor with exponential function applied to each element.
pub fn reciprocal(&self) -> Result<Tensor>
Reciprocal operation - 1/x.
pub fn sign(&self) -> Result<Tensor>
Element-wise sign function.
Returns 1 for positive values, -1 for negative values, and 0 for zero.
pub fn round(&self) -> Result<Tensor>
Round values to nearest integer.
Rounds halfway cases away from zero.
pub fn floor(&self) -> Result<Tensor>
Floor operation - round down to nearest integer.
Returns the largest integer less than or equal to the input.
impl Tensor
pub fn std(&self) -> Result<Tensor>
Standard deviation across all elements.
Computes the standard deviation of all elements in the tensor, returning a scalar tensor containing the result.
§Returns
A scalar tensor containing the standard deviation.
pub fn mean(&self) -> Result<Tensor>
Mean value across all elements.
Computes the arithmetic mean of all elements in the tensor, returning a scalar tensor containing the result.
§Returns
A scalar tensor containing the mean value.
pub fn sum_axes(&self, axes: &[usize]) -> Result<Tensor>
Sum across specified axes with robust error handling.
Computes the sum along the specified axes. The axes are processed in reverse order to maintain proper axis indexing during reduction.
§Arguments
axes - The axes along which to compute the sum
§Returns
A tensor with sums computed along the specified axes.
pub fn sum_dim(&self, dim: i64, _keepdims: bool) -> Result<Tensor>
Python-style sum along a dimension with negative axis support.
This is a convenience method that supports negative axis indexing (e.g., -1 for last axis, -2 for second-to-last, etc.)
§Arguments
dim - The dimension to sum along (supports negative indexing)
keepdims - Whether to keep dimensions (currently ignored for compatibility)
§Returns
A tensor with the sum along the specified dimension.
§Examples
let tensor = Tensor::randn(&[2, 3, 4])?;
let sum_last = tensor.sum_dim(-1, false)?; // Sum along last axis
let sum_first = tensor.sum_dim(0, false)?; // Sum along first axis
pub fn variance(&self, axes: Option<&[usize]>, _keepdims: bool) -> Result<Tensor>
Variance computation along specified axes.
Computes the sample variance using the formula: Var(X) = E[(X - μ)²] where μ is the mean. Supports computation along specific axes or across the entire tensor.
§Arguments
axes - Optional axes along which to compute variance. If None, compute across all elements.
keepdims - Whether to keep dimensions (currently ignored for compatibility).
§Returns
A tensor containing the variance values.
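The formula Var(X) = E[(X - μ)²] reduces, for a flat slice, to the mean of squared deviations from the mean. A standalone sketch (the `variance` helper is illustrative and divides by n; whether the crate uses n or n - 1 is not specified here):

```rust
/// Variance over a flat slice: mean of squared deviations from the mean,
/// matching Var(X) = E[(X - μ)²] (this sketch divides by n).
fn variance(xs: &[f32]) -> f32 {
    let n = xs.len() as f32;
    let mean = xs.iter().sum::<f32>() / n;
    xs.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n
}

fn main() {
    // mean = 2.5, deviations ±1.5 and ±0.5
    // -> (2.25 + 0.25 + 0.25 + 2.25) / 4 = 1.25
    assert!((variance(&[1.0, 2.0, 3.0, 4.0]) - 1.25).abs() < 1e-6);
    // A constant slice has zero variance.
    assert_eq!(variance(&[7.0, 7.0, 7.0]), 0.0);
}
```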
pub fn std_dev(&self, axes: Option<&[usize]>, keepdims: bool) -> Result<Tensor>
Standard deviation computation along specified axes.
Computes the standard deviation as the square root of variance. This provides a measure of spread in the same units as the original data.
§Arguments
axes - Optional axes along which to compute standard deviation.
keepdims - Whether to keep dimensions (currently ignored for compatibility).
§Returns
A tensor containing the standard deviation values.
pub fn max_axes(&self, axes: &[usize]) -> Result<Tensor>
Find maximum value across specified axes.
pub fn min_axes(&self, axes: &[usize]) -> Result<Tensor>
Find minimum value across specified axes.
pub fn max_scalar(&self) -> Result<Tensor>
Find maximum value in tensor (scalar reduction).
pub fn min_scalar(&self) -> Result<Tensor>
Find minimum value in tensor (scalar reduction).
pub fn multinomial(&self, num_samples: usize, replacement: bool) -> Result<Tensor>
Sample from multinomial distribution.
Samples from a multinomial distribution defined by the probabilities in the input tensor. This is useful for sampling tokens during text generation.
§Arguments
num_samples - Number of samples to draw
replacement - Whether to sample with replacement (must be true currently)
§Returns
A tensor containing sampled indices.
§Errors
TensorOpError: If the tensor is not a probability distribution (doesn’t sum to ~1.0)
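Multinomial sampling with replacement is typically implemented by inverse-CDF: draw u in [0, 1), walk the cumulative probabilities, and return the first index whose running sum exceeds u. A standalone sketch (the `Lcg` random generator and `multinomial` helper are illustrative stand-ins, not the crate's RNG or API):

```rust
/// Tiny linear congruential generator standing in for a real RNG.
struct Lcg(u64);
impl Lcg {
    fn next_f32(&mut self) -> f32 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Take the top 24 bits -> uniform value in [0, 1).
        ((self.0 >> 40) as f32) / ((1u64 << 24) as f32)
    }
}

/// Sample `num_samples` indices with replacement from `probs` via inverse-CDF.
fn multinomial(probs: &[f32], num_samples: usize, rng: &mut Lcg) -> Vec<usize> {
    (0..num_samples)
        .map(|_| {
            let u = rng.next_f32();
            let mut cum = 0.0f32;
            for (i, &p) in probs.iter().enumerate() {
                cum += p;
                if u < cum {
                    return i;
                }
            }
            probs.len() - 1 // guard against rounding in the final bucket
        })
        .collect()
}

fn main() {
    let mut rng = Lcg(42);
    let samples = multinomial(&[0.1, 0.2, 0.3, 0.4], 1000, &mut rng);
    // Every sampled index must be a valid category.
    assert!(samples.iter().all(|&i| i < 4));
}
```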
§Examples
use trustformers_core::tensor::Tensor;
// Create a probability distribution
let probs = Tensor::from_vec(vec![0.1, 0.2, 0.3, 0.4], &[4])?;
let probs = probs.softmax(0)?; // Ensure it sums to 1.0
// Sample from the distribution
let samples = probs.multinomial(1, true)?;
pub fn all(&self) -> Result<Tensor>
Check if all elements are true (for boolean tensors) or non-zero.
Returns a scalar boolean tensor indicating whether all elements satisfy the condition.
§Returns
A scalar F32 tensor with value 1.0 if all elements are non-zero, 0.0 otherwise.
§Errors
TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;
let tensor = Tensor::from_vec(vec![1.0, 1.0, 1.0], &[3])?;
let result = tensor.all()?; // Should be 1.0 (true)
let tensor2 = Tensor::from_vec(vec![1.0, 0.0, 1.0], &[3])?;
let result2 = tensor2.all()?; // Should be 0.0 (false)
impl Tensor
pub fn broadcast_to(&self, shape: &[usize]) -> Result<Tensor>
pub fn get_scalar(&self, indices: &[usize]) -> Result<f32>
impl Tensor
pub fn sparse_coo(indices: Vec<Vec<usize>>, values: Vec<f32>, shape: Vec<usize>) -> Result<Tensor>
impl Tensor
pub fn flatten(&self, start_dim: i64, end_dim: i64) -> Result<Tensor>
Flatten tensor dimensions from start_dim to end_dim (inclusive).
§Arguments
start_dim - Starting dimension to flatten (supports negative indexing)
end_dim - Ending dimension to flatten (supports negative indexing)
§Returns
A tensor with flattened dimensions.
§Example
let t = Tensor::randn(&[2, 3, 4, 5])?;
let flattened = t.flatten(1, 2)?; // Shape becomes [2, 12, 5]
pub fn zero_padding_embedding(&self, padding_idx: usize) -> Result<Tensor>
pub fn select_first_token(&self) -> Result<Tensor>
pub fn contiguous(&self) -> Result<Tensor>
pub fn unsqueeze_i64(&self, axis: i64) -> Result<Tensor>
pub fn squeeze_i64(&self, axis: i64) -> Result<Tensor>
pub fn gather(&self, dim: i64, index: &Tensor) -> Result<Tensor>
Gathers values along an axis specified by an index tensor.
This is a PyTorch-style gather operation that selects values from the input tensor along the specified dimension according to the indices in the index tensor.
§Arguments
dim - The dimension along which to gather (supports negative indexing)
index - Tensor containing indices to gather
§Returns
A tensor with gathered values.
§Examples
let tensor = Tensor::randn(&[3, 4, 5])?;
let indices = Tensor::from_vec(vec![0, 2, 1], &[3, 1, 1])?;
let gathered = tensor.gather(-2, &indices)?;
pub fn repeat(&self, repeats: &[usize]) -> Result<Tensor>
Repeat tensor elements along specified dimensions.
Repeats the tensor along each dimension according to the specified repetition counts.
§Arguments
repeats - Number of times to repeat along each dimension. If the length is less than the number of dimensions, repeats are prepended with 1s.
§Returns
A new tensor with repeated elements.
§Errors
TensorOpError: If the operation fails
§Examples
use trustformers_core::tensor::Tensor;
let tensor = Tensor::from_vec(vec![1.0, 2.0], &[2])?;
// Repeat 3 times along dimension 0
let repeated = tensor.repeat(&[3])?;
// Result: [1.0, 2.0, 1.0, 2.0, 1.0, 2.0] with shape [6]
pub fn upsample_nearest(&self, scale_factor: usize) -> Result<Tensor>
Upsample a 4D tensor using nearest neighbor interpolation.
This function performs upsampling on a 4D tensor (typically for image data in NCHW format). Currently supports nearest neighbor interpolation which is simple and efficient.
§Arguments
scale_factor - Scaling factor for spatial dimensions (height and width)
§Returns
An upsampled tensor with spatial dimensions multiplied by scale_factor.
§Errors
ShapeError: If the tensor is not 4D
TensorOpError: If the operation fails
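Nearest-neighbor upsampling is just index arithmetic: output pixel (y, x) copies input pixel (y / scale, x / scale). A standalone sketch for a single 2D channel (the `upsample_nearest` helper is illustrative; the crate's version operates on full NCHW tensors):

```rust
/// Nearest-neighbor upsampling for one 2D channel stored row-major:
/// output pixel (y, x) copies input pixel (y / scale, x / scale).
fn upsample_nearest(input: &[f32], h: usize, w: usize, scale: usize) -> Vec<f32> {
    let (oh, ow) = (h * scale, w * scale);
    let mut out = vec![0.0f32; oh * ow];
    for y in 0..oh {
        for x in 0..ow {
            out[y * ow + x] = input[(y / scale) * w + (x / scale)];
        }
    }
    out
}

fn main() {
    // 1x2 input upsampled 2x -> 2x4 output; each value becomes a 2x2 block.
    let out = upsample_nearest(&[1.0, 2.0], 1, 2, 2);
    assert_eq!(out, vec![1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 2.0, 2.0]);
}
```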
§Examples
use trustformers_core::tensor::Tensor;
// Create a 4D tensor [batch, channels, height, width]
let tensor = Tensor::zeros(&[1, 3, 8, 8])?;
// Upsample by factor of 2
let upsampled = tensor.upsample_nearest(2)?;
// Result shape: [1, 3, 16, 16]
pub fn interpolate(&self, size: (usize, usize)) -> Result<Tensor>
Interpolate (upsample or downsample) a tensor using bilinear interpolation.
This function performs bilinear interpolation on a 4D tensor (NCHW format). For upsampling in VAE decoders and other generative models.
§Arguments
size - Target size as (height, width)
§Returns
An interpolated tensor with the specified spatial dimensions.
§Errors
ShapeError: If the tensor is not 4D
TensorOpError: If the operation fails
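Bilinear interpolation maps each output pixel back to a fractional source coordinate and blends the four surrounding input pixels. A standalone sketch for one 2D channel, assuming the common "align_corners = false" center-sampling convention (this is an assumption; the crate's convention is not documented here):

```rust
/// Bilinear resize of one row-major 2D channel: sample source coordinates
/// at pixel centers and blend the four nearest input pixels.
fn interpolate_bilinear(input: &[f32], h: usize, w: usize, oh: usize, ow: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; oh * ow];
    let sy = h as f32 / oh as f32;
    let sx = w as f32 / ow as f32;
    for y in 0..oh {
        for x in 0..ow {
            // Map the output pixel center into input coordinates, clamped.
            let fy = ((y as f32 + 0.5) * sy - 0.5).clamp(0.0, (h - 1) as f32);
            let fx = ((x as f32 + 0.5) * sx - 0.5).clamp(0.0, (w - 1) as f32);
            let (y0, x0) = (fy.floor() as usize, fx.floor() as usize);
            let (y1, x1) = ((y0 + 1).min(h - 1), (x0 + 1).min(w - 1));
            let (dy, dx) = (fy - y0 as f32, fx - x0 as f32);
            // Blend horizontally on both rows, then vertically.
            let top = input[y0 * w + x0] * (1.0 - dx) + input[y0 * w + x1] * dx;
            let bot = input[y1 * w + x0] * (1.0 - dx) + input[y1 * w + x1] * dx;
            out[y * ow + x] = top * (1.0 - dy) + bot * dy;
        }
    }
    out
}

fn main() {
    // Upsampling a constant image must stay constant.
    let out = interpolate_bilinear(&[5.0; 4], 2, 2, 4, 4);
    assert!(out.iter().all(|&v| (v - 5.0).abs() < 1e-6));
}
```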
§Examples
use trustformers_core::tensor::Tensor;
let tensor = Tensor::zeros(&[1, 3, 8, 8])?;
// Interpolate to 16x16
let interpolated = tensor.interpolate((16, 16))?;
impl Tensor
pub fn size_bytes(&self) -> usize
pub fn to_device_enum(&self, device: &Device) -> Result<Tensor>
Transfer tensor to specified device using Device enum.
This is the preferred method for device transfers in modern code. It supports Metal GPU acceleration and provides better type safety.
§Arguments
device - Device enum (Device::CPU, Device::Metal(0), etc.)
§Returns
A tensor on the specified device.
§Example
use trustformers_core::tensor::Tensor;
use trustformers_core::device::Device;
let cpu_tensor = Tensor::randn(&[2, 3])?;
// Transfer to Metal GPU
let gpu_tensor = cpu_tensor.to_device_enum(&Device::Metal(0))?;
// Transfer back to CPU
let result = gpu_tensor.to_device_enum(&Device::CPU)?;
pub fn grad(&self) -> Result<Tensor>
Get gradient tensor.
§Returns
Returns the gradient tensor associated with this tensor, or an error if no gradient exists.
Gradients are only tracked when gradient mode is enabled via enable_grad().
§Example
use trustformers_core::tensor::{Tensor, enable_grad, disable_grad};
enable_grad();
let x = Tensor::randn(&[2, 3])?;
// After some computation that requires gradients...
if let Ok(grad_tensor) = x.grad() {
println!("Gradient: {:?}", grad_tensor.shape());
}
disable_grad();
pub fn set_grad(&mut self, grad: Tensor) -> Result<()>
Set gradient tensor.
§Arguments
grad - The gradient tensor to set for this tensor
§Returns
Returns Ok(()) if the gradient was successfully set, or an error if gradient tracking is not enabled or if the gradient shape doesn’t match the tensor shape.
§Example
use trustformers_core::tensor::{Tensor, enable_grad};
enable_grad();
let mut x = Tensor::randn(&[2, 3])?;
let grad = Tensor::ones(&[2, 3])?;
x.set_grad(grad)?;
pub fn data(&self) -> Result<Vec<f32>>
Get tensor data as a vector (for F32 tensors).
§Returns
A Result containing a vector with the tensor data.
Sourcepub fn data_f32(&self) -> Result<Vec<f32>>
pub fn data_f32(&self) -> Result<Vec<f32>>
Get tensor data as F32 vector (alias for data() method).
§Returns
A Result containing a vector with the tensor data as f32.
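The "for F32 tensors" caveat on data() and data_f32() can be illustrated with a self-contained sketch; the Tiny enum and function below are hypothetical stand-ins for Tensor's variants, not part of the crate.

```rust
// Hypothetical miniature of Tensor's enum dispatch: the accessor succeeds
// only for the F32 variant and errors for every other dtype.
enum Tiny {
    F32(Vec<f32>),
    I64(Vec<i64>),
}

fn data_f32(t: &Tiny) -> Result<Vec<f32>, String> {
    match t {
        Tiny::F32(v) => Ok(v.clone()),
        _ => Err("data_f32: not an F32 tensor".to_string()),
    }
}
```

Calling the accessor on any non-F32 variant yields the error arm rather than silently converting.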
Sourcepub fn set_data_f32(&mut self, data: &[f32]) -> Result<()>
pub fn set_data_f32(&mut self, data: &[f32]) -> Result<()>
Set the tensor data from an f32 slice (for F32 tensors).
Sourcepub fn data_mut(&mut self) -> Result<&mut [f32]>
pub fn data_mut(&mut self) -> Result<&mut [f32]>
Get mutable reference to tensor data as a slice (for F32 tensors).
§Returns
A Result containing a mutable slice of the tensor data.
Sourcepub fn modify_data<F>(&mut self, f: F) -> Result<()>
pub fn modify_data<F>(&mut self, f: F) -> Result<()>
Modify the tensor data in place by applying the given closure (for F32 tensors).
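modify_data's closure-based contract is not spelled out above; a plausible shape for such an in-place editor (an assumption about the API's intent, shown on a plain Vec<f32> rather than a Tensor) is:

```rust
// Sketch: hand the caller a mutable slice so they can edit values in place,
// mirroring what a modify_data-style helper presumably does for F32 tensors.
fn modify_data<F: FnOnce(&mut [f32])>(data: &mut Vec<f32>, f: F) {
    f(data.as_mut_slice());
}
```

The closure receives the whole buffer, so element-wise updates, scaling, or clamping can all be expressed without copying the data out and back.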
Sourcepub fn memory_usage(&self) -> usize
pub fn memory_usage(&self) -> usize
Get the memory usage of the tensor data in bytes.
Sourcepub fn get_scalar_i64(&self) -> Result<i64>
pub fn get_scalar_i64(&self) -> Result<i64>
Get the value of a scalar (single-element) tensor as i64.
Sourcepub fn batch_split(&self, batch_size: usize) -> Result<Vec<Tensor>>
pub fn batch_split(&self, batch_size: usize) -> Result<Vec<Tensor>>
Split tensor into batches along the first dimension
§Arguments
batch_size- Size of each batch
§Returns
Vector of tensors, each representing a batch. The last batch may be smaller if the tensor size is not evenly divisible by batch_size.
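The ragged-last-batch behavior described above reduces to simple remainder arithmetic. A self-contained sketch of the per-batch sizes (the helper name batch_sizes is illustrative, not part of the crate):

```rust
// Sketch: split `len` rows into chunks of `batch_size`; the final chunk
// keeps whatever remainder is left, matching batch_split's documented behavior.
fn batch_sizes(len: usize, batch_size: usize) -> Vec<usize> {
    let mut sizes = Vec::new();
    if batch_size == 0 {
        return sizes; // guard: a zero batch size yields no batches
    }
    let mut remaining = len;
    while remaining > 0 {
        let take = remaining.min(batch_size);
        sizes.push(take);
        remaining -= take;
    }
    sizes
}
```

For 10 rows split into batches of 3 this yields [3, 3, 3, 1], matching the example below.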
§Example
use trustformers_core::tensor::Tensor;
let tensor = Tensor::ones(&[10, 4]).expect("Failed to create ones tensor");
let batches = tensor.batch_split(3).unwrap();
assert_eq!(batches.len(), 4); // [3, 3, 3, 1]
assert_eq!(batches[0].shape(), &[3, 4]);
assert_eq!(batches[3].shape(), &[1, 4]);
Sourcepub fn batch_stack(tensors: &[&Tensor]) -> Result<Tensor>
pub fn batch_stack(tensors: &[&Tensor]) -> Result<Tensor>
Batch tensors together along a new first dimension
§Arguments
tensors- Slice of tensors to batch together. All tensors must have the same shape.
§Returns
A new tensor with shape [batch_size, original_shape…]
§Example
use trustformers_core::tensor::Tensor;
let t1 = Tensor::ones(&[3, 4]).expect("Failed to create ones tensor");
let t2 = Tensor::zeros(&[3, 4]).expect("Failed to create zero tensor");
let t3 = Tensor::ones(&[3, 4]).expect("Failed to create ones tensor");
let batched = Tensor::batch_stack(&[&t1, &t2, &t3]).unwrap();
assert_eq!(batched.shape(), &[3, 3, 4]);
Sourcepub fn unbatch(&self) -> Result<Vec<Tensor>>
pub fn unbatch(&self) -> Result<Vec<Tensor>>
Unbatch a tensor by removing the first dimension
§Returns
Vector of tensors, each representing an item from the batch
§Example
use trustformers_core::tensor::Tensor;
let batched = Tensor::ones(&[3, 4, 5]).expect("Failed to create ones tensor");
let unbatched = batched.unbatch().unwrap();
assert_eq!(unbatched.len(), 3);
assert_eq!(unbatched[0].shape(), &[4, 5]);
Trait Implementations§
Auto Trait Implementations§
impl Freeze for Tensor
impl RefUnwindSafe for Tensor
impl Send for Tensor
impl Sync for Tensor
impl Unpin for Tensor
impl UnsafeUnpin for Tensor
impl UnwindSafe for Tensor
Blanket Implementations§
Source§impl<T> BorrowMut<T> for T where
T: ?Sized,
impl<T> BorrowMut<T> for T where
T: ?Sized,
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Source§impl<T> CloneToUninit for T where
T: Clone,
impl<T> CloneToUninit for T where
T: Clone,
Source§impl<T> Instrument for T
impl<T> Instrument for T
Source§fn instrument(self, span: Span) -> Instrumented<Self>
fn instrument(self, span: Span) -> Instrumented<Self>
Source§fn in_current_span(self) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self>
if into_left is true.
Converts self into a Right variant of Either<Self, Self>
otherwise. Read more
Source§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self>
if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self>
otherwise. Read more