
Enum Tensor 

Source
pub enum Tensor {
    F32(ArrayD<f32>),
    F64(ArrayD<f64>),
    F16(ArrayD<f16>),
    BF16(ArrayD<bf16>),
    I64(ArrayD<i64>),
    C32(ArrayD<Complex32>),
    C64(ArrayD<Complex64>),
    CF16(ArrayD<Complex<f16>>),
    CBF16(ArrayD<Complex<bf16>>),
    Sparse(SparseTensor),
}

Variants§

Implementations§

Source§

impl Tensor

Source

pub fn relu(&self) -> Result<Tensor>

ReLU activation function.

§Returns

A tensor with ReLU applied element-wise.

Source

pub fn sigmoid(&self) -> Result<Tensor>

Sigmoid activation function.

§Performance

Uses scirs2-core’s SIMD-accelerated sigmoid for tensors with ≥256 elements.

§Returns

A tensor with sigmoid applied element-wise.

Source

pub fn tanh(&self) -> Result<Tensor>

Tanh activation function.

§Performance

Uses scirs2-core’s SIMD-accelerated tanh for tensors with ≥256 elements.

§Returns

A tensor with tanh applied element-wise.

Source

pub fn softmax(&self, axis: i32) -> Result<Tensor>

Softmax activation function.

§Arguments
  • axis - The axis along which to apply softmax
§Returns

A tensor with softmax applied along the specified axis.
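Softmax is conventionally computed in a numerically stable form by subtracting the maximum before exponentiating. The sketch below shows that formula on a flat slice; it is a plain-Rust illustration of the math, not the crate's kernel or axis handling, which are assumed to differ.

```rust
// Numerically stable softmax over a flat slice (illustration only).
fn softmax(xs: &[f32]) -> Vec<f32> {
    // Subtract the maximum before exponentiating to avoid overflow.
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}
```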

Source

pub fn dropout(&self, dropout_prob: f32) -> Result<Tensor>

Dropout operation.

§Arguments
  • dropout_prob - Probability of dropping each element
§Returns

A tensor with dropout applied.

Source

pub fn gelu(&self) -> Result<Tensor>

GELU (Gaussian Error Linear Unit) activation function.

§Performance

Uses scirs2-core’s SIMD-accelerated GELU for tensors with ≥256 elements. GELU is widely used in Transformer models (BERT, GPT, etc.).

§Returns

A tensor with GELU applied element-wise.
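For reference, the tanh approximation of GELU popularized by BERT/GPT implementations looks like the sketch below. Whether this crate uses the approximation or the exact erf form is an assumption; this only illustrates the element-wise formula.

```rust
// Tanh approximation of GELU: 0.5*x*(1 + tanh(sqrt(2/pi)*(x + 0.044715*x^3))).
fn gelu(x: f32) -> f32 {
    const SQRT_2_OVER_PI: f32 = 0.797_884_56; // sqrt(2/pi)
    0.5 * x * (1.0 + (SQRT_2_OVER_PI * (x + 0.044_715 * x * x * x)).tanh())
}
```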

Source

pub fn leaky_relu(&self, negative_slope: f32) -> Result<Tensor>

Leaky ReLU activation function.

§Arguments
  • negative_slope - The slope for negative values (default: 0.01)
§Returns

A tensor with Leaky ReLU applied element-wise.

Source

pub fn silu(&self) -> Result<Tensor>

SiLU (Sigmoid Linear Unit) activation function.

Also known as Swish activation: f(x) = x * sigmoid(x)

§Performance

Uses scirs2-core’s SIMD-accelerated Swish for tensors with ≥256 elements. SiLU/Swish is used in EfficientNet, GPT-NeoX, and many modern architectures.

§Returns

A tensor with SiLU applied element-wise.
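The formula f(x) = x * sigmoid(x) quoted above reduces to a one-liner per element; a plain-Rust sketch of the math (not the crate's SIMD kernel):

```rust
// SiLU/Swish: x * sigmoid(x) = x / (1 + e^(-x)).
fn silu(x: f32) -> f32 {
    x / (1.0 + (-x).exp())
}
```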

Source

pub fn swish(&self) -> Result<Tensor>

Swish activation function (alias for SiLU).

Swish(x) = x * sigmoid(x) = SiLU(x)

Source§

impl Tensor

Source

pub fn real(&self) -> Result<Tensor>

Get the real part of a complex tensor.

§Returns

A tensor containing the real parts.

Source

pub fn imag(&self) -> Result<Tensor>

Get the imaginary part of a complex tensor.

§Returns

A tensor containing the imaginary parts.

Source

pub fn magnitude(&self) -> Result<Tensor>

Get the magnitude of a complex tensor with numerical stability enhancements.

Uses numerically stable algorithms to avoid overflow/underflow in intermediate calculations.

§Returns

A tensor containing the magnitudes.
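One common way to get the overflow/underflow protection described above is the hypot formulation, which avoids squaring the components directly. This sketch illustrates that technique; whether the crate uses hypot or another stabilization is an assumption.

```rust
// Stable |a + bi|: f32::hypot rescales internally so the intermediate
// squares cannot overflow even when re*re or im*im would.
fn stable_magnitude(re: f32, im: f32) -> f32 {
    re.hypot(im)
}
```

For example, with components around 3e30 the naive `(re*re + im*im).sqrt()` overflows f32 to infinity, while the hypot form stays finite.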

Source

pub fn phase(&self) -> Result<Tensor>

Get the phase of a complex tensor.

§Returns

A tensor containing the phases.

Source

pub fn conj(&self) -> Result<Tensor>

Get the complex conjugate of a complex tensor.

§Returns

A tensor containing the complex conjugates.

Source

pub fn to_complex(&self) -> Result<Tensor>

Convert real tensor to complex tensor.

§Returns

A complex tensor with zero imaginary part.

Source

pub fn complex_hadamard(&self, other: &Tensor) -> Result<Tensor>

Complex element-wise multiplication (Hadamard product) for two complex tensors.

This operation is crucial for transformer architectures using complex-valued layers. Optimized for modern hardware architectures.

§Arguments
  • other - The other complex tensor to multiply with
§Returns

A tensor containing the element-wise complex multiplication result.

Source

pub fn fft(&self) -> Result<Tensor>

Fast Fourier Transform (FFT) for complex tensors with numerical stability enhancements.

Essential for advanced transformer architectures using frequency domain operations. Optimized for modern SIMD architectures with overflow/underflow protection.

§Returns

A tensor containing the FFT result.

Source

pub fn complex_matmul(&self, other: &Tensor) -> Result<Tensor>

Complex matrix multiplication optimized for modern architectures with numerical stability.

Uses SIMD instructions and parallel processing for maximum performance. Essential for complex-valued transformer layers with overflow/underflow protection.

§Arguments
  • other - The other complex tensor to multiply with
§Returns

A tensor containing the complex matrix multiplication result.

Source

pub fn complex_relu(&self) -> Result<Tensor>

Optimized complex activation function for advanced architectures.

Applies complex ReLU activation: ReLU(Re(z)) + i*ReLU(Im(z)). Optimized for modern SIMD architectures.

§Returns

A tensor with complex ReLU activation applied.
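The split-ReLU rule above applies ReLU independently to the real and imaginary parts. A minimal sketch on a complex value represented as a (re, im) pair:

```rust
// Complex ReLU: clamp real and imaginary parts to be non-negative,
// independently of each other.
fn complex_relu(z: (f32, f32)) -> (f32, f32) {
    (z.0.max(0.0), z.1.max(0.0))
}
```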

Source§

impl Tensor

Source

pub fn new(data: Vec<f32>) -> Result<Self>

Creates a new 1D tensor from a vector of data.

§Arguments
  • data - A vector of f32 values
§Returns

A 1D tensor containing the provided data.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::new(vec![1.0, 2.0, 3.0, 4.0])?;
assert_eq!(tensor.shape(), vec![4]);
Source

pub fn with_shape(data: Vec<f32>, shape: Vec<usize>) -> Result<Self>

Creates a tensor from data with a specific shape.

This is an alias for from_vec for backward compatibility with tests.

§Arguments
  • data - A vector of f32 values
  • shape - The desired shape of the tensor
§Returns

A tensor with the specified shape containing the provided data.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::with_shape(vec![1.0, 2.0, 3.0, 4.0], vec![2, 2])?;
assert_eq!(tensor.shape(), vec![2, 2]);
Source

pub fn from_vec_i64(data: Vec<i64>, shape: &[usize]) -> Result<Self>

Creates a tensor from i64 data with a specific shape.

§Arguments
  • data - A vector of i64 values
  • shape - The desired shape of the tensor
§Returns

A tensor with the specified shape containing the provided i64 data.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::from_vec_i64(vec![1, 2, 3, 4], &[2, 2])?;
assert_eq!(tensor.shape(), vec![2, 2]);
Source

pub fn zeros(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros.

§Arguments
  • shape - The desired shape of the tensor
§Returns

A tensor of the specified shape filled with zeros.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::zeros(&[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
Source

pub fn ones(shape: &[usize]) -> Result<Self>

Creates a tensor filled with ones.

§Arguments
  • shape - The desired shape of the tensor
§Returns

A tensor of the specified shape filled with ones.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::ones(&[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
Source

pub fn randn(shape: &[usize]) -> Result<Self>

Creates a tensor filled with random values from a normal distribution.

§Arguments
  • shape - The desired shape of the tensor
§Returns

A tensor filled with random values from N(0, 1).

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::randn(&[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
Source

pub fn zeros_like(tensor: &Tensor) -> Result<Self>

Creates a tensor filled with zeros with the same shape as the input tensor.

§Arguments
  • tensor - The tensor to match shape from
§Returns

A tensor of the same shape filled with zeros.

§Example
use trustformers_core::tensor::Tensor;

let input = Tensor::randn(&[2, 3])?;
let zeros = Tensor::zeros_like(&input)?;
assert_eq!(zeros.shape(), vec![2, 3]);
Source

pub fn ones_like(tensor: &Tensor) -> Result<Self>

Creates a tensor filled with ones with the same shape as the input tensor.

§Arguments
  • tensor - The tensor to match shape from
§Returns

A tensor of the same shape filled with ones.

§Example
use trustformers_core::tensor::Tensor;

let input = Tensor::randn(&[2, 3])?;
let ones = Tensor::ones_like(&input)?;
assert_eq!(ones.shape(), vec![2, 3]);
Source

pub fn from_data(data: Vec<f32>, shape: &[usize]) -> Result<Self>

Creates a tensor from data with specified shape.

§Arguments
  • data - A vector of f32 values
  • shape - The desired shape of the tensor
§Returns

A tensor containing the provided data reshaped to the specified shape.

§Example
use trustformers_core::tensor::Tensor;

let data = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_data(data, &[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
Source

pub fn from_slice(data: &[f32], shape: &[usize]) -> Result<Self>

Creates a tensor from a slice with specified shape.

§Arguments
  • data - A slice of f32 values
  • shape - The desired shape of the tensor
§Returns

A tensor containing the provided data reshaped to the specified shape.

§Example
use trustformers_core::tensor::Tensor;

let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_slice(&data, &[2, 3])?;
assert_eq!(tensor.shape(), vec![2, 3]);
Source

pub fn randn_like(tensor: &Tensor) -> Result<Self>

Creates a tensor filled with random values with the same shape as the input tensor.

§Arguments
  • tensor - The tensor to match shape from
§Returns

A tensor of the same shape filled with random values from N(0, 1).

§Example
use trustformers_core::tensor::Tensor;

let input = Tensor::zeros(&[2, 3])?;
let random = Tensor::randn_like(&input)?;
assert_eq!(random.shape(), vec![2, 3]);
Source

pub fn zeros_f64(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (f64 precision).

Source

pub fn zeros_i64(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (i64 integers).

Source

pub fn zeros_c32(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (complex f32).

Source

pub fn zeros_c64(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (complex f64).

Source

pub fn zeros_f16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (f16 precision).

Source

pub fn zeros_bf16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (bf16 precision).

Source

pub fn zeros_cf16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (complex f16).

Source

pub fn zeros_cbf16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with zeros (complex bf16).

Source

pub fn complex(real: Vec<f32>, imag: Vec<f32>, shape: &[usize]) -> Result<Self>

Creates a complex tensor from real and imaginary parts.

§Arguments
  • real - Real part values
  • imag - Imaginary part values
  • shape - The desired shape
§Returns

A complex tensor with the specified real and imaginary parts.

Source

pub fn complex_f64(real: Vec<f64>, imag: Vec<f64>, shape: &[usize]) -> Result<Self>

Creates a complex tensor from real and imaginary parts (f64 precision).

Source

pub fn from_vec(data: Vec<f32>, shape: &[usize]) -> Result<Self>

Creates a tensor from a vector with explicit shape.

§Arguments
  • data - The data vector
  • shape - The desired shape
§Returns

A tensor with the specified data and shape.

Source

pub fn from_vec_with_dtype(data: Vec<f64>, shape: &[usize], dtype: DType) -> Result<Self>

Creates a tensor from a Vec with explicit dtype. TEMPORARY: Uses ndarray. Will be replaced with SciRS2-Core.

§Arguments
  • data - The data as a Vec<f64>
  • shape - The desired shape
  • dtype - The desired data type
Source

pub fn full(value: f32, shape: Vec<usize>) -> Result<Self>

Creates a tensor filled with a constant value.

§Arguments
  • value - The constant value to fill with
  • shape - The desired shape
§Returns

A tensor filled with the constant value.

Source

pub fn full_with_dtype(shape: &[usize], value: f64, dtype: DType) -> Result<Self>

Creates a tensor filled with a constant value with specified dtype.

§Arguments
  • shape - Shape of the tensor as a slice
  • value - The fill value (will be cast to target dtype)
  • dtype - Target data type
§Returns

A tensor filled with the constant value.

§Note

TEMPORARY: Uses ndarray; will be replaced with SciRS2-Core in a future migration.

Source

pub fn scalar(value: f32) -> Result<Self>

Creates a scalar tensor.

§Arguments
  • value - The scalar value
§Returns

A 0-dimensional tensor containing the scalar value.

Source

pub fn eye_f32(n: usize) -> Result<Self>

Creates an identity matrix tensor.

§Arguments
  • n - The size of the identity matrix (n x n)
§Returns

An n x n identity matrix tensor.

Source

pub fn ones_f16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with ones (f16 precision).

Source

pub fn ones_bf16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with ones (bf16 precision).

Source

pub fn randn_f16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with random values from a normal distribution (f16 precision).

Source

pub fn randn_bf16(shape: &[usize]) -> Result<Self>

Creates a tensor filled with random values from a normal distribution (bf16 precision).

Source

pub fn complex_f16(real: Vec<f32>, imag: Vec<f32>, shape: &[usize]) -> Result<Self>

Creates a complex tensor from real and imaginary parts (f16 precision).

Source

pub fn complex_bf16(real: Vec<f32>, imag: Vec<f32>, shape: &[usize]) -> Result<Self>

Creates a complex tensor from real and imaginary parts (bf16 precision).

Source

pub fn zeros_dtype(dtype: DType, shape: &[usize]) -> Result<Self>

Creates a tensor with specified data type.

§Arguments
  • dtype - The desired data type
  • shape - The desired shape
§Returns

A zero tensor with the specified data type and shape.

Source

pub fn ones_dtype(dtype: DType, shape: &[usize]) -> Result<Self>

Creates a tensor filled with ones with explicit dtype.

§Arguments
  • dtype - The data type for the tensor
  • shape - The shape of the tensor
§Returns

A tensor filled with ones of the specified data type.

Source

pub fn full_with_shape(shape: &[usize], value: f32) -> Result<Self>

Creates a tensor filled with a scalar value (alternative signature).

§Arguments
  • shape - The shape of the tensor
  • value - The value to fill the tensor with
§Returns

A tensor filled with the specified value.

Source

pub fn from_slice_f64(data: &[f64], shape: &[usize]) -> Result<Self>

Creates a tensor from a slice of f64 values with specified shape.

§Arguments
  • data - A slice of f64 values
  • shape - The desired shape of the tensor
§Returns

A tensor containing the provided data reshaped to the specified shape.

Source

pub fn from_slice_i64(data: &[i64], shape: &[usize]) -> Result<Self>

Creates a tensor from a slice of i64 values with specified shape.

§Arguments
  • data - A slice of i64 values
  • shape - The desired shape of the tensor
§Returns

A tensor containing the provided data reshaped to the specified shape.

Source

pub fn from_slice_i32(data: &[i32], shape: &[usize]) -> Result<Self>

Creates a tensor from a slice of i32 values with specified shape.

§Arguments
  • data - A slice of i32 values
  • shape - The desired shape of the tensor
§Returns

A tensor containing the provided data reshaped to the specified shape.

Source

pub fn from_scalar(value: f32, dtype: DType) -> Result<Self>

Creates a tensor from a scalar value.

§Arguments
  • value - The scalar value
  • dtype - The data type for the tensor
§Returns

A 0-dimensional (scalar) tensor.

Source

pub fn range(start: i64, end: i64, dtype: DType) -> Result<Self>

Creates a tensor with a range of values.

§Arguments
  • start - Start value (inclusive)
  • end - End value (exclusive)
  • dtype - The data type for the tensor
§Returns

A 1D tensor with values from start to end-1.

Source

pub fn randint(low: i64, high: i64, shape: &[usize], dtype: DType) -> Result<Self>

Creates a tensor filled with random integers in the range [low, high).

§Arguments
  • low - Lower bound (inclusive)
  • high - Upper bound (exclusive)
  • shape - Shape of the tensor
  • dtype - Data type of the tensor
§Returns

A tensor filled with random integers.

Source§

impl Tensor

Source

pub fn to_dtype(&self, dtype: DType) -> Result<Tensor>

Convert tensor to a different data type.

§Arguments
  • dtype - Target data type
§Returns

A tensor with the new data type.

Source

pub fn to_vec_f32(&self) -> Result<Vec<f32>>

Convert tensor to vector of f32 values.

§Returns

A vector of f32 values.

Source

pub fn to_vec_u8(&self) -> Result<Vec<u8>>

Convert tensor to vector of u8 values.

§Returns

A vector of u8 values.

Source

pub fn to_f32(&self) -> Result<Tensor>

Convert tensor to F32 dtype (convenience method).

§Returns

A tensor with F32 dtype.

Source

pub fn to_i64(&self) -> Result<Tensor>

Convert tensor to I64 dtype (convenience method).

§Returns

A tensor with I64 dtype.

Source§

impl Tensor

Source

pub fn less(&self, other: &Tensor) -> Result<Tensor>

Element-wise less-than comparison.

§Arguments
  • other - The tensor to compare with
§Returns

A boolean tensor with the comparison results (1.0 for true, 0.0 for false).

Source

pub fn equal(&self, other: &Tensor) -> Result<Tensor>

Element-wise equality comparison.

§Arguments
  • other - The tensor to compare with
§Returns

A boolean tensor with the comparison results (1.0 for true, 0.0 for false).

Source

pub fn where_cond(&self, condition: &Tensor, other: &Tensor) -> Result<Tensor>

Element-wise conditional selection (where).

§Arguments
  • condition - The boolean tensor condition
  • other - The tensor to select from when condition is false
§Returns

A tensor with elements selected from self where condition is true, other where false.

Source

pub fn layer_norm(&self, axis: i32, epsilon: f32) -> Result<Tensor>

Layer normalization.

Source

pub fn cross_entropy(&self, targets: &Tensor, reduction: &str) -> Result<Tensor>

Cross entropy loss.

Source

pub fn cosine_similarity(&self, other: &Tensor, dim: i32, eps: f32) -> Result<Tensor>

Cosine similarity.

Source

pub fn log_softmax(&self, dim: i32) -> Result<Tensor>

Log softmax.

Source§

impl Tensor

Source

pub fn add(&self, other: &Tensor) -> Result<Tensor>

Element-wise addition with numerical stability enhancements.

Includes overflow/underflow protection and NaN/infinity detection.

Source

pub fn sub(&self, other: &Tensor) -> Result<Tensor>

Element-wise subtraction.

Source

pub fn mul(&self, other: &Tensor) -> Result<Tensor>

Element-wise multiplication.

Source

pub fn div(&self, other: &Tensor) -> Result<Tensor>

Element-wise division.

Source

pub fn broadcast_add(&self, other: &Tensor) -> Result<Tensor>

Broadcasting addition.

Source

pub fn scalar_mul(&self, scalar: f32) -> Result<Tensor>

Scalar multiplication.

Source

pub fn scalar_div(&self, scalar: f32) -> Result<Tensor>

Scalar division.

Source

pub fn add_scalar(&self, scalar: f32) -> Result<Tensor>

Scalar addition.

Source

pub fn sub_scalar(&self, scalar: f32) -> Result<Tensor>

Scalar subtraction.

Source

pub fn div_scalar(&self, scalar: f32) -> Result<Tensor>

Division by scalar.

Source

pub fn mul_scalar(&self, scalar: f32) -> Result<Tensor>

Multiplication by scalar (alias for scalar_mul).

Source

pub fn sub_scaled(&self, other: &Tensor, factor: f32) -> Result<Tensor>

Scaled subtraction: self - other * factor.

Source

pub fn add_scaled(&self, other: &Tensor, factor: f32) -> Result<Tensor>

Scaled addition: self + other * factor.

Source§

impl Tensor

Source

pub fn shapes_are_broadcastable(shape1: &[usize], shape2: &[usize]) -> bool

Check if two shapes are broadcastable according to numpy-style broadcasting rules.
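Numpy-style broadcasting aligns shapes from the trailing dimension: each aligned pair must be equal or one side must be 1, and missing leading dimensions are treated as 1. A sketch of that rule (not the crate's implementation):

```rust
// Align from the trailing dimension; zip stops at the shorter shape,
// which matches numpy's rule that missing leading dims behave like 1.
fn broadcastable(a: &[usize], b: &[usize]) -> bool {
    a.iter()
        .rev()
        .zip(b.iter().rev())
        .all(|(&x, &y)| x == y || x == 1 || y == 1)
}
```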

Source§

impl Tensor

Source

pub fn matmul(&self, other: &Tensor) -> Result<Tensor>

Matrix multiplication with numerical stability enhancements.

Performs matrix multiplication between two tensors with support for:

  • 2D matrix multiplication
  • Batched 3D matrix multiplication
  • Multi-headed 4D matrix multiplication (for attention mechanisms)
§Numerical Stability Features
  • Automatic detection of unstable values (NaN, infinity, extreme values)
  • Kahan summation algorithm for unstable inputs
  • Memory layout optimization for performance
  • Overflow/underflow protection
§Arguments
  • other - The tensor to multiply with (right operand)
§Returns

A new tensor containing the matrix multiplication result.

§Errors
  • ShapeError: If tensors have incompatible dimensions for matrix multiplication
  • TensorOpError: If the operation is not supported for the tensor types
§Examples
use trustformers_core::tensor::Tensor;

// 2D matrix multiplication
let a = Tensor::randn(&[128, 64])?;
let b = Tensor::randn(&[64, 256])?;
let result = a.matmul(&b)?; // Shape: [128, 256]

// Batched matrix multiplication
let a = Tensor::randn(&[32, 128, 64])?;  // 32 batches
let b = Tensor::randn(&[32, 64, 256])?;
let result = a.matmul(&b)?; // Shape: [32, 128, 256]

// Multi-headed attention matrices
let q = Tensor::randn(&[8, 12, 512, 64])?;  // 8 batches, 12 heads
let k = Tensor::randn(&[8, 12, 64, 512])?;
let attention = q.matmul(&k)?; // Shape: [8, 12, 512, 512]
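The Kahan summation mentioned under the stability features keeps a running compensation term that recovers the low-order bits lost when a large partial sum absorbs a small product. The sketch below shows the technique on a plain dot product; the crate's actual matmul kernel is assumed to be considerably more elaborate.

```rust
// Kahan (compensated) summation applied to a dot product.
fn kahan_dot(a: &[f32], b: &[f32]) -> f32 {
    let (mut sum, mut c) = (0.0f32, 0.0f32);
    for (&x, &y) in a.iter().zip(b) {
        let term = x * y - c; // subtract the previously lost error
        let t = sum + term;   // large + small loses low-order bits...
        c = (t - sum) - term; // ...which are captured here
        sum = t;
    }
    sum
}
```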
Source

pub fn norm(&self) -> Result<f32>

Calculate the L2 norm (Euclidean norm) of the tensor.

Computes the square root of the sum of squares of all elements in the tensor. This is equivalent to the Euclidean distance from the origin in the tensor’s vector space.

§Mathematical Definition

For a tensor x, the L2 norm is: ||x||_2 = sqrt(Σ x_i²)

§Performance

Uses SIMD-accelerated computation via scirs2-core when the tensor can be viewed as a contiguous 1D array.

§Returns

The L2 norm as a scalar f32 value.

§Errors
  • TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;

let tensor = Tensor::from_vec(vec![3.0, 4.0], &[2])?;
let norm = tensor.norm()?; // Should be 5.0 (sqrt(3² + 4²))
Source

pub fn norm_squared(&self) -> Result<Tensor>

Calculate the squared L2 norm of the tensor.

Computes the sum of squares of all elements in the tensor without taking the square root. This is computationally more efficient than norm() when only the squared norm is needed.

§Mathematical Definition

For a tensor x, the squared L2 norm is: ||x||_2² = Σ x_i²

§Returns

A scalar tensor containing the squared norm value.

§Errors
  • TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;

let tensor = Tensor::from_vec(vec![3.0, 4.0], &[2])?;
let norm_squared = tensor.norm_squared()?; // Should be 25.0 (3² + 4²)
Source

pub fn clip_grad_norm(&self, max_norm: f32) -> Result<Tensor>

Clip gradients based on their norm to prevent gradient explosion.

This function implements gradient clipping by scaling the tensor values to ensure the L2 norm does not exceed the specified maximum value. This is a common technique used in training deep neural networks to prevent gradient explosion.

§Algorithm
  1. Calculate the current L2 norm of the tensor
  2. If norm ≤ max_norm, return the tensor unchanged
  3. If norm > max_norm, scale the tensor by (max_norm / norm)
§Arguments
  • max_norm - The maximum allowed norm value
§Returns

A new tensor with clipped gradient values.

§Errors
  • TensorOpError: If norm calculation or scalar multiplication fails
§Examples
use trustformers_core::tensor::Tensor;

// Create a tensor with large gradient values
let gradients = Tensor::from_vec(vec![10.0, 20.0, 30.0], &[3])?;

// Clip to maximum norm of 1.0
let clipped = gradients.clip_grad_norm(1.0)?;

// The resulting tensor will have norm ≤ 1.0
assert!(clipped.norm()? <= 1.0);
§Use in Training
use trustformers_core::tensor::Tensor;

// Typical usage in gradient clipping during training
let max_gradient_norm = 1.0;
let clipped_gradients = gradients.clip_grad_norm(max_gradient_norm)?;
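The three-step algorithm above, applied to a raw gradient vector, amounts to a conditional rescale. A minimal sketch (the Tensor method additionally handles multiple dtypes):

```rust
// Clip a gradient vector so its L2 norm does not exceed max_norm.
fn clip_by_norm(grad: &mut [f32], max_norm: f32) {
    let norm = grad.iter().map(|g| g * g).sum::<f32>().sqrt();
    if norm > max_norm {
        let scale = max_norm / norm; // shrink uniformly; direction unchanged
        for g in grad.iter_mut() {
            *g *= scale;
        }
    }
}
```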
Source

pub fn norm_dim(&self, p: i32, dims: Option<Vec<i32>>, keepdim: bool) -> Result<Tensor>

Calculate L2 norm along specified dimension(s).

This function computes the L2 norm along one or more dimensions, which is useful for normalization operations (e.g., in contrastive learning, CLIP models).

§Arguments
  • p - The order of the norm (typically 2 for L2 norm)
  • dims - Optional dimensions along which to compute the norm. If None, computes the norm across all dimensions (equivalent to norm()).
  • keepdim - If true, keeps the reduced dimensions with size 1
§Returns

A tensor containing the L2 norm values along the specified dimensions.

§Errors
  • TensorOpError: If the operation is not supported for the tensor type
  • ShapeError: If the specified dimensions are out of bounds
§Examples
use trustformers_core::tensor::Tensor;

// Create a 2D tensor
let tensor = Tensor::from_vec(vec![3.0, 4.0, 1.0, 2.0], &[2, 2])?;

// Compute L2 norm along last dimension
let norm = tensor.norm_dim(2, Some(vec![-1]), true)?;
// Result: [[5.0], [sqrt(5)]]
Source§

impl Tensor

Source

pub fn pow(&self, exponent: f32) -> Result<Tensor>

Element-wise power operation.

Source

pub fn pow_scalar(&self, exponent: f64) -> Result<Tensor>

Raise tensor to a scalar power (alias for pow).

§Arguments
  • exponent - The exponent to raise each element to
§Returns

A new tensor with each element raised to the given power.

Source

pub fn abs(&self) -> Result<Tensor>

Absolute value.

Source

pub fn neg(&self) -> Result<Tensor>

Negation.

Source

pub fn sqrt(&self) -> Result<Tensor>

Element-wise square root.

§Returns

A new tensor with square root applied to each element.

Source

pub fn log(&self) -> Result<Tensor>

Natural logarithm.

Source

pub fn exp(&self) -> Result<Tensor>

Element-wise exponential function.

§Returns

A new tensor with exponential function applied to each element.

Source

pub fn sin(&self) -> Result<Tensor>

Sine function.

Source

pub fn cos(&self) -> Result<Tensor>

Cosine function.

Source

pub fn tan(&self) -> Result<Tensor>

Tangent function.

Source

pub fn asin(&self) -> Result<Tensor>

Arc sine function.

Source

pub fn acos(&self) -> Result<Tensor>

Arc cosine function.

Source

pub fn atan(&self) -> Result<Tensor>

Arc tangent function.

Source

pub fn square(&self) -> Result<Tensor>

Square operation - x².

Source

pub fn reciprocal(&self) -> Result<Tensor>

Reciprocal operation - 1/x.

Source

pub fn rsqrt(&self) -> Result<Tensor>

Reciprocal square root - 1/√x.

Source

pub fn isnan(&self) -> Result<Tensor>

Check for NaN values.

Source

pub fn isinf(&self) -> Result<Tensor>

Check for infinite values.

Source

pub fn isfinite(&self) -> Result<Tensor>

Check for finite values.

Source

pub fn sign(&self) -> Result<Tensor>

Element-wise sign function.

Returns 1 for positive values, -1 for negative values, and 0 for zero.

Source

pub fn round(&self) -> Result<Tensor>

Round values to nearest integer.

Rounds halfway cases away from zero.

Source

pub fn floor(&self) -> Result<Tensor>

Floor operation - round down to nearest integer.

Returns the largest integer less than or equal to the input.

Source

pub fn ceil(&self) -> Result<Tensor>

Ceiling operation - round up to nearest integer.

Returns the smallest integer greater than or equal to the input.

Source

pub fn trunc(&self) -> Result<Tensor>

Truncate operation - round towards zero.

Removes the fractional part, effectively rounding towards zero.

Source§

impl Tensor

Source

pub fn std(&self) -> Result<Tensor>

Standard deviation across all elements.

Computes the standard deviation of all elements in the tensor, returning a scalar tensor containing the result.

§Returns

A scalar tensor containing the standard deviation.

Source

pub fn max_value(&self) -> Result<Tensor>

Maximum value.

Source

pub fn max(&self, other: &Tensor) -> Result<Tensor>

Element-wise maximum between two tensors.

Source

pub fn argmax(&self, axis: i32) -> Result<Tensor>

Find the indices of maximum values along the specified axis.

§Arguments
  • axis - The axis along which to find the maximum indices. Negative values count from the last axis.
§Returns

A tensor containing the indices of maximum values along the specified axis.

Source

pub fn mean(&self) -> Result<Tensor>

Mean value across all elements.

Computes the arithmetic mean of all elements in the tensor, returning a scalar tensor containing the result.

§Returns

A scalar tensor containing the mean value.

Source

pub fn min_max(&self) -> Result<(f32, f32)>

Find minimum and maximum values.

Source

pub fn sum_axes(&self, axes: &[usize]) -> Result<Tensor>

Sum across specified axes with robust error handling.

Computes the sum along the specified axes. The axes are processed in reverse order to maintain proper axis indexing during reduction.

§Arguments
  • axes - The axes along which to compute the sum
§Returns

A tensor with sums computed along the specified axes.

Source

pub fn sum(&self, axes: Option<Vec<usize>>, _keepdims: bool) -> Result<Tensor>

Sum all elements or along specified axes.

§Arguments
  • axes - Optional axes to sum along. If None, sum all elements.
  • keepdims - Whether to keep dimensions (currently ignored for compatibility).
§Returns

A tensor with the sum result.

Source

pub fn mean_axes(&self, axes: &[usize]) -> Result<Tensor>

Mean along specified axes with robust error handling.

§Arguments
  • axes - The axes along which to compute the mean
§Returns

A tensor with means computed along the specified axes.

Source

pub fn sum_axis(&self, axis: usize) -> Result<Tensor>

Sum along a single axis (convenience method).

§Arguments
  • axis - The axis to sum along
§Returns

A tensor with the sum along the specified axis.

Source

pub fn sum_dim(&self, dim: i64, _keepdims: bool) -> Result<Tensor>

Python-style sum along a dimension with negative axis support.

This is a convenience method that supports negative axis indexing (e.g., -1 for last axis, -2 for second-to-last, etc.)

§Arguments
  • dim - The dimension to sum along (supports negative indexing)
  • keepdims - Whether to keep dimensions (currently ignored for compatibility)
§Returns

A tensor with the sum along the specified dimension.

§Examples
let tensor = Tensor::randn(&[2, 3, 4])?;
let sum_last = tensor.sum_dim(-1, false)?;  // Sum along last axis
let sum_first = tensor.sum_dim(0, false)?;  // Sum along first axis
Source

pub fn mean_axis(&self, axis: usize) -> Result<Tensor>

Mean along a single axis (convenience method).

§Arguments
  • axis - The axis to compute mean along
§Returns

A tensor with the mean along the specified axis.

Source

pub fn variance(&self, axes: Option<&[usize]>, _keepdims: bool) -> Result<Tensor>

Variance computation along specified axes.

Computes the variance using the formula Var(X) = E[(X - μ)²], where μ is the mean. Supports computation along specific axes or across the entire tensor.

§Arguments
  • axes - Optional axes along which to compute variance. If None, compute across all elements.
  • keepdims - Whether to keep dimensions (currently ignored for compatibility).
§Returns

A tensor containing the variance values.
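Taking the quoted formula literally (a mean of squared deviations, dividing by n), the per-slice computation looks like the sketch below; whether the crate divides by n or applies Bessel's correction (n - 1) is not stated here, so the divisor is an assumption.

```rust
// Variance as E[(X - mu)^2] over a flat slice, dividing by n.
fn variance(xs: &[f32]) -> f32 {
    let n = xs.len() as f32;
    let mean = xs.iter().sum::<f32>() / n;
    xs.iter().map(|x| (x - mean).powi(2)).sum::<f32>() / n
}
```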

Source

pub fn std_dev(&self, axes: Option<&[usize]>, keepdims: bool) -> Result<Tensor>

Standard deviation computation along specified axes.

Computes the standard deviation as the square root of variance. This provides a measure of spread in the same units as the original data.

§Arguments
  • axes - Optional axes along which to compute standard deviation.
  • keepdims - Whether to keep dimensions (currently ignored for compatibility).
§Returns

A tensor containing the standard deviation values.

Source

pub fn max_axes(&self, axes: &[usize]) -> Result<Tensor>

Find maximum value across specified axes.

Source

pub fn min_axes(&self, axes: &[usize]) -> Result<Tensor>

Find minimum value across specified axes.

Source

pub fn max_scalar(&self) -> Result<Tensor>

Find maximum value in tensor (scalar reduction).

Source

pub fn min_scalar(&self) -> Result<Tensor>

Find minimum value in tensor (scalar reduction).

Source

pub fn multinomial(&self, num_samples: usize, replacement: bool) -> Result<Tensor>

Sample from multinomial distribution.

Samples from a multinomial distribution defined by the probabilities in the input tensor. This is useful for sampling tokens during text generation.

§Arguments
  • num_samples - Number of samples to draw
  • replacement - Whether to sample with replacement (must be true currently)
§Returns

A tensor containing sampled indices.

§Errors
  • TensorOpError: If the tensor is not a probability distribution (doesn’t sum to ~1.0)
§Examples
use trustformers_core::tensor::Tensor;

// Create a probability distribution
let probs = Tensor::from_vec(vec![0.1, 0.2, 0.3, 0.4], &[4])?;
let probs = probs.softmax(0)?; // Ensure it sums to 1.0

// Sample from the distribution
let samples = probs.multinomial(1, true)?;
Source

pub fn all(&self) -> Result<Tensor>

Check if all elements are true (for boolean tensors) or non-zero.

Returns a scalar boolean tensor indicating whether all elements satisfy the condition.

§Returns

A scalar F32 tensor with value 1.0 if all elements are non-zero, 0.0 otherwise.

§Errors
  • TensorOpError: If the operation is not supported for the tensor type
§Examples
use trustformers_core::tensor::Tensor;

let tensor = Tensor::from_vec(vec![1.0, 1.0, 1.0], &[3])?;
let result = tensor.all()?; // Should be 1.0 (true)

let tensor2 = Tensor::from_vec(vec![1.0, 0.0, 1.0], &[3])?;
let result2 = tensor2.all()?; // Should be 0.0 (false)
Source§

impl Tensor

Source

pub fn scale(&self, factor: f32) -> Result<Tensor>

Scale the tensor by multiplying every element by factor.

Source

pub fn clamp(&self, min_val: f32, max_val: f32) -> Result<Tensor>

Clamp every element to the range [min_val, max_val].

Source

pub fn broadcast_to(&self, shape: &[usize]) -> Result<Tensor>

Broadcast tensor to a target shape.

§Arguments
  • shape - The target shape to broadcast to
§Returns

A new tensor with the broadcasted shape.
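Broadcasting presumably follows the usual right-aligned rule (an assumption, sketched here with a std-only helper): shapes are compared from the trailing dimension, and a source dimension is compatible when it equals the target dimension or is 1.

```rust
// Right-aligned compatibility check: each source dim must equal
// the corresponding target dim or be 1 (NumPy-style assumption).
fn can_broadcast(src: &[usize], target: &[usize]) -> bool {
    if src.len() > target.len() {
        return false;
    }
    src.iter()
        .rev()
        .zip(target.iter().rev())
        .all(|(&s, &t)| s == t || s == 1)
}
```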

Source

pub fn get_scalar(&self, indices: &[usize]) -> Result<f32>

Get a scalar value at the specified index.

§Arguments
  • indices - The indices to get the scalar value from
§Returns

The scalar value at the specified index.

Source

pub fn set_scalar(&self, indices: &[usize], value: f32) -> Result<Tensor>

Set a scalar value at the specified index.

§Arguments
  • indices - The indices to set the scalar value
  • value - The value to set
§Returns

A new tensor with the value set at the specified index.

Source

pub fn greater(&self, other: &Tensor) -> Result<Tensor>

Element-wise greater than comparison.

§Arguments
  • other - The tensor to compare against
§Returns

A tensor with 1.0 where self > other, 0.0 otherwise.

Source

pub fn lerp(&self, other: &Tensor, weight: f32) -> Result<Tensor>

Linear interpolation between two tensors.

Computes: self * (1 - weight) + other * weight

§Arguments
  • other - The tensor to interpolate towards
  • weight - Interpolation weight (must be between 0.0 and 1.0)
§Returns

A tensor interpolated between self and other.
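The formula above can be sketched element-wise in plain Rust (an illustrative helper, not part of this API):

```rust
// Element-wise linear interpolation: a * (1 - w) + b * w.
fn lerp(a: &[f32], b: &[f32], w: f32) -> Vec<f32> {
    a.iter()
        .zip(b)
        .map(|(&x, &y)| x * (1.0 - w) + y * w)
        .collect()
}
```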

Source§

impl Tensor

Source

pub fn to_sparse(&self, threshold: f32) -> Result<Tensor>

Convert a dense tensor to sparse format.

§Arguments
  • threshold - Values below this threshold will be considered zero
§Returns

A sparse tensor representation.

Source

pub fn to_dense(&self) -> Result<Tensor>

Convert a sparse tensor to dense format.

§Returns

A dense tensor representation.

Source

pub fn is_sparse(&self) -> bool

Check if the tensor is sparse.

§Returns

True if the tensor is sparse, false otherwise.

Source

pub fn sparsity(&self) -> Result<f32>

Get the sparsity ratio of the tensor.

§Returns

The ratio of zero elements to total elements.
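The ratio can be sketched in plain Rust (an illustrative helper, not part of this API):

```rust
// Sparsity = fraction of elements that are exactly zero.
fn sparsity(data: &[f32]) -> f32 {
    data.iter().filter(|&&x| x == 0.0).count() as f32 / data.len() as f32
}
```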

Source

pub fn nnz(&self) -> Result<usize>

Get the number of non-zero elements.

§Returns

The number of non-zero elements.

Source

pub fn sparse_coo(indices: Vec<Vec<usize>>, values: Vec<f32>, shape: Vec<usize>) -> Result<Tensor>

Create a sparse tensor in COO format.

§Arguments
  • indices - Coordinate indices
  • values - Non-zero values
  • shape - Tensor shape
§Returns

A sparse tensor in COO format.
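In COO format each non-zero value carries its full coordinate. A std-only sketch reconstructing a dense 2-D matrix (the one-coordinate-vector-per-nonzero reading of indices is an assumption here):

```rust
// Rebuild a row-major dense matrix from COO (coordinate, value) pairs.
fn coo_to_dense(indices: &[Vec<usize>], values: &[f32], shape: (usize, usize)) -> Vec<f32> {
    let mut dense = vec![0.0; shape.0 * shape.1];
    for (idx, &v) in indices.iter().zip(values) {
        // idx[0] is the row coordinate, idx[1] the column coordinate.
        dense[idx[0] * shape.1 + idx[1]] = v;
    }
    dense
}
```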

Source

pub fn sparse_csr(row_ptr: Vec<usize>, col_indices: Vec<usize>, values: Vec<f32>, shape: Vec<usize>) -> Result<Tensor>

Create a sparse tensor in CSR format.

§Arguments
  • row_ptr - Row pointers
  • col_indices - Column indices
  • values - Non-zero values
  • shape - Tensor shape
§Returns

A sparse tensor in CSR format.
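In CSR format, row_ptr has one entry per row plus one, and row i's non-zeros occupy values[row_ptr[i]..row_ptr[i + 1]]. A std-only sketch deriving per-row non-zero counts from the row pointers:

```rust
// Each adjacent pair of row pointers bounds one row's non-zeros,
// so the difference is that row's non-zero count.
fn row_nnz(row_ptr: &[usize]) -> Vec<usize> {
    row_ptr.windows(2).map(|w| w[1] - w[0]).collect()
}
```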

Source§

impl Tensor

Source

pub fn transpose_i64(&self, dim0: i64, dim1: i64) -> Result<Tensor>

Transpose two dimensions of the tensor (accepts negative indices).

§Arguments
  • dim0 - First dimension to transpose (negative indices count from the end)
  • dim1 - Second dimension to transpose (negative indices count from the end)
§Returns

A tensor with the specified dimensions transposed.
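Negative indices presumably normalize with the usual convention (an assumption sketched below): -1 refers to the last dimension.

```rust
// Map a possibly-negative dimension index into [0, ndim).
fn normalize_dim(dim: i64, ndim: usize) -> usize {
    if dim < 0 {
        (dim + ndim as i64) as usize
    } else {
        dim as usize
    }
}
```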

Source

pub fn transpose(&self, dim0: usize, dim1: usize) -> Result<Tensor>

Transpose two dimensions of the tensor.

§Arguments
  • dim0 - First dimension to transpose
  • dim1 - Second dimension to transpose
§Returns

A tensor with the specified dimensions transposed.

Source

pub fn t(&self) -> Result<Tensor>

Transpose (convenience method for 2D).

Source

pub fn slice(&self, axis: usize, start: usize, end: usize) -> Result<Tensor>

Slice the tensor along a specific axis.

§Arguments
  • axis - The axis to slice along
  • start - Start index
  • end - End index (exclusive)
§Returns

A tensor slice.

Source

pub fn slice_multi(&self, ranges: &[(usize, usize)]) -> Result<Tensor>

Multi-dimensional slice of the tensor.

§Arguments
  • ranges - Slice of tuples (start, end) for each dimension
§Returns

A tensor slice.

Source

pub fn split(&self, axis: usize, split_size: usize) -> Result<Vec<Tensor>>

Split the tensor into chunks along an axis.

§Arguments
  • axis - The axis to split along
  • split_size - Size of each chunk
§Returns

A vector of tensor chunks.
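The chunk boundaries can be sketched in plain Rust (an illustrative helper; the last chunk is smaller when the axis length is not evenly divisible by split_size):

```rust
// Half-open (start, end) bounds for chunks of `split_size` along
// an axis of length `len`; the tail chunk may be shorter.
fn chunk_bounds(len: usize, split_size: usize) -> Vec<(usize, usize)> {
    (0..len)
        .step_by(split_size)
        .map(|s| (s, (s + split_size).min(len)))
        .collect()
}
```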

Source

pub fn reshape(&self, shape: &[usize]) -> Result<Tensor>

Reshape the tensor to a new shape.

§Arguments
  • shape - The new shape
§Returns

A tensor with the new shape.

Source

pub fn flatten(&self, start_dim: i64, end_dim: i64) -> Result<Tensor>

Flatten tensor dimensions from start_dim to end_dim (inclusive).

§Arguments
  • start_dim - Starting dimension to flatten (supports negative indexing)
  • end_dim - Ending dimension to flatten (supports negative indexing)
§Returns

A tensor with flattened dimensions.

§Example
let t = Tensor::randn(&[2, 3, 4, 5])?;
let flattened = t.flatten(1, 2)?; // Shape becomes [2, 12, 5]
Source

pub fn slice_ranges(&self, ranges: &[(usize, usize)]) -> Result<Tensor>

Slice with multiple ranges.

§Arguments
  • ranges - Slice of (start, end) pairs for each dimension
§Returns

A tensor slice.

Source

pub fn concat(tensors: &[Tensor], axis: usize) -> Result<Tensor>

Concatenate multiple tensors along an axis.

§Arguments
  • tensors - Slice of tensors to concatenate
  • axis - The axis to concatenate along
§Returns

A concatenated tensor.

Source

pub fn sort(&self) -> Result<Tensor>

Sort the tensor.

§Returns

A sorted tensor.

Source

pub fn zero_padding_embedding(&self, padding_idx: usize) -> Result<Tensor>

Zero-padding for embeddings.

§Arguments
  • padding_idx - Index to zero out
§Returns

A tensor with the specified index zeroed.

Source

pub fn select(&self, dim: usize, index: i64) -> Result<Tensor>

Select along a specific dimension with an index.

§Arguments
  • dim - The dimension to select along
  • index - The index to select (can be negative for indexing from the end)
§Returns

A tensor with the specified index selected along the given dimension.

Source

pub fn select_first_token(&self) -> Result<Tensor>

Select the first token from a sequence.

§Returns

A tensor with the first token selected.

Source

pub fn contiguous(&self) -> Result<Tensor>

Ensure tensor has contiguous memory layout.

§Returns

A tensor with contiguous memory layout.

Source

pub fn permute(&self, permutation: &[usize]) -> Result<Tensor>

Permute tensor dimensions.

§Arguments
  • permutation - Slice specifying the new order of dimensions
§Returns

A tensor with permuted dimensions.
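The output shape is the input shape reordered by the permutation, sketched here with a std-only helper (illustrative, not part of this API):

```rust
// Output dimension i takes the size of input dimension permutation[i].
fn permuted_shape(shape: &[usize], permutation: &[usize]) -> Vec<usize> {
    permutation.iter().map(|&p| shape[p]).collect()
}
```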

Source

pub fn unsqueeze_i64(&self, axis: i64) -> Result<Tensor>

Add a new dimension at the specified axis (accepts negative indices).

§Arguments
  • axis - The axis at which to insert the new dimension (negative indices count from the end)
§Returns

A tensor with an added dimension.

Source

pub fn unsqueeze(&self, axis: usize) -> Result<Tensor>

Add a new dimension at the specified axis.

§Arguments
  • axis - The axis at which to insert the new dimension
§Returns

A tensor with an added dimension.

Source

pub fn squeeze_i64(&self, axis: i64) -> Result<Tensor>

Removes a single-dimensional entry from the shape of the tensor (accepts negative indices).

§Arguments
  • axis - The axis to remove (must have size 1, negative indices count from the end)
§Returns

A tensor with the specified dimension removed.

Source

pub fn squeeze(&self, axis: usize) -> Result<Tensor>

Removes a single-dimensional entry from the shape of the tensor.

§Arguments
  • axis - The axis to remove (must have size 1)
§Returns

A tensor with the specified dimension removed.

Source

pub fn to_scalar(&self) -> Result<f32>

Extract a scalar value from a 0-dimensional tensor.

§Returns

The scalar value as f32.

Source

pub fn gather(&self, dim: i64, index: &Tensor) -> Result<Tensor>

Gathers values along an axis specified by an index tensor.

This is a PyTorch-style gather operation that selects values from the input tensor along the specified dimension according to the indices in the index tensor.

§Arguments
  • dim - The dimension along which to gather (supports negative indexing)
  • index - Tensor containing indices to gather
§Returns

A tensor with gathered values.

§Examples
let tensor = Tensor::randn(&[3, 4, 5])?;
let indices = Tensor::from_vec(vec![0, 2, 1], &[3, 1, 1])?;
let gathered = tensor.gather(-2, &indices)?;
Source

pub fn repeat(&self, repeats: &[usize]) -> Result<Tensor>

Repeat tensor elements along specified dimensions.

Repeats the tensor along each dimension according to the specified repetition counts.

§Arguments
  • repeats - Number of times to repeat along each dimension. If the length is less than the number of dimensions, repeats are prepended with 1s.
§Returns

A new tensor with repeated elements.

§Errors
  • TensorOpError: If the operation fails
§Examples
use trustformers_core::tensor::Tensor;

let tensor = Tensor::from_vec(vec![1.0, 2.0], &[2])?;

// Repeat 3 times along dimension 0
let repeated = tensor.repeat(&[3])?;
// Result: [1.0, 2.0, 1.0, 2.0, 1.0, 2.0] with shape [6]
Source

pub fn upsample_nearest(&self, scale_factor: usize) -> Result<Tensor>

Upsample a 4D tensor using nearest neighbor interpolation.

This function performs upsampling on a 4D tensor (typically for image data in NCHW format). Currently supports nearest-neighbor interpolation, which is simple and efficient.

§Arguments
  • scale_factor - Scaling factor for spatial dimensions (height and width)
§Returns

An upsampled tensor with spatial dimensions multiplied by scale_factor.

§Errors
  • ShapeError: If the tensor is not 4D
  • TensorOpError: If the operation fails
§Examples
use trustformers_core::tensor::Tensor;

// Create a 4D tensor [batch, channels, height, width]
let tensor = Tensor::zeros(&[1, 3, 8, 8])?;

// Upsample by factor of 2
let upsampled = tensor.upsample_nearest(2)?;
// Result shape: [1, 3, 16, 16]
Source

pub fn interpolate(&self, size: (usize, usize)) -> Result<Tensor>

Interpolate (upsample or downsample) a tensor using bilinear interpolation.

This function performs bilinear interpolation on a 4D tensor (NCHW format). It is commonly used for upsampling in VAE decoders and other generative models.

§Arguments
  • size - Target size as (height, width)
§Returns

An interpolated tensor with the specified spatial dimensions.

§Errors
  • ShapeError: If the tensor is not 4D
  • TensorOpError: If the operation fails
§Examples
use trustformers_core::tensor::Tensor;

let tensor = Tensor::zeros(&[1, 3, 8, 8])?;

// Interpolate to 16x16
let interpolated = tensor.interpolate((16, 16))?;
Source§

impl Tensor

Source

pub fn shape(&self) -> Vec<usize>

Get the shape of the tensor.

§Returns

A vector containing the dimensions of the tensor.

Source

pub fn len(&self) -> usize

Get the number of elements in the tensor.

§Returns

The total number of elements in the tensor.

Source

pub fn is_empty(&self) -> bool

Check if the tensor is empty.

§Returns

True if the tensor has no elements.

Source

pub fn ndim(&self) -> usize

Get the number of dimensions in the tensor.

§Returns

The number of dimensions.

Source

pub fn size_bytes(&self) -> usize

Get the size in bytes of the tensor.

§Returns

The size in bytes of the tensor data.

Source

pub fn to_device(&self, device: &str) -> Result<Tensor>

Transfer tensor to specified device.

§Arguments
  • device - Device identifier (e.g., “cpu”, “cuda:0”, “mps”, “tpu:0”)
§Returns

A tensor on the specified device (currently CPU-only with validation).

Source

pub fn to_device_enum(&self, device: &Device) -> Result<Tensor>

Transfer tensor to specified device using Device enum.

This is the preferred method for device transfers in modern code. It supports Metal GPU acceleration and provides better type safety.

§Arguments
  • device - Device enum (Device::CPU, Device::Metal(0), etc.)
§Returns

A tensor on the specified device.

§Example
use trustformers_core::tensor::Tensor;
use trustformers_core::device::Device;

let cpu_tensor = Tensor::randn(&[2, 3])?;

// Transfer to Metal GPU
let gpu_tensor = cpu_tensor.to_device_enum(&Device::Metal(0))?;

// Transfer back to CPU
let result = gpu_tensor.to_device_enum(&Device::CPU)?;
Source

pub fn grad(&self) -> Result<Tensor>

Get gradient tensor.

§Returns

Returns the gradient tensor associated with this tensor, or an error if no gradient exists. Gradients are only tracked when gradient mode is enabled via enable_grad().

§Example
use trustformers_core::tensor::{Tensor, enable_grad, disable_grad};

enable_grad();
let x = Tensor::randn(&[2, 3])?;
// After some computation that requires gradients...
if let Ok(grad_tensor) = x.grad() {
    println!("Gradient: {:?}", grad_tensor.shape());
}
disable_grad();
Source

pub fn set_grad(&mut self, grad: Tensor) -> Result<()>

Set gradient tensor.

§Arguments
  • grad - The gradient tensor to set for this tensor
§Returns

Returns Ok(()) if the gradient was successfully set, or an error if gradient tracking is not enabled or if the gradient shape doesn’t match the tensor shape.

§Example
use trustformers_core::tensor::{Tensor, enable_grad};

enable_grad();
let mut x = Tensor::randn(&[2, 3])?;
let grad = Tensor::ones(&[2, 3])?;
x.set_grad(grad)?;
Source

pub fn data(&self) -> Result<Vec<f32>>

Get tensor data as a vector (for F32 tensors).

§Returns

A Result containing a vector with the tensor data.

Source

pub fn data_f32(&self) -> Result<Vec<f32>>

Get tensor data as F32 vector (alias for data() method).

§Returns

A Result containing a vector with the tensor data as f32.

Source

pub fn set_data_f32(&mut self, data: &[f32]) -> Result<()>

Set tensor data from F32 vector.

§Arguments
  • data - Vector of f32 values to set as tensor data
§Returns

A Result indicating success or failure.

Source

pub fn data_mut(&mut self) -> Result<&mut [f32]>

Get mutable reference to tensor data as a slice (for F32 tensors).

§Returns

A Result containing a mutable slice of the tensor data.

Source

pub fn modify_data<F>(&mut self, f: F) -> Result<()>
where F: FnOnce(&mut [f32]),

Modify tensor data in-place with a closure.

§Arguments
  • f - A closure that takes a mutable slice of the tensor data
§Returns

A Result indicating success or failure.

Source

pub fn device(&self) -> String

Get the device where the tensor is stored.

§Returns

A string representing the device.

Source

pub fn size(&self) -> usize

Get the number of elements in the tensor.

§Returns

The total number of elements.

Source

pub fn memory_usage(&self) -> usize

Get memory usage in bytes.

§Returns

Memory usage in bytes.

Source

pub fn dtype(&self) -> DType

Get the data type of the tensor.

§Returns

The data type.

Source

pub fn get_dtype(&self) -> DType

Get the data type (alias for dtype).

Source

pub fn get_float(&self, index: usize) -> Result<f32>

Get a float value at a specific index.

§Arguments
  • index - The linear index
§Returns

The float value at the index.

Source

pub fn item<T>(&self) -> Result<T>
where T: NumCast,

Get a scalar value from a 0-dimensional or 1-element tensor.

§Type Parameters
  • T - The type to convert to (i32, i64, f32, f64)
§Returns

The scalar value.

Source

pub fn get_scalar_i64(&self) -> Result<i64>

Get an i64 scalar value from a tensor.

§Returns

The i64 scalar value.

Source

pub fn eq_scalar(&self, scalar: f64) -> Result<Tensor>

Compare tensor elements with a scalar value. TEMPORARY: Uses ndarray. Will be replaced with SciRS2-Core.

§Arguments
  • scalar - The scalar value to compare against
§Returns

A boolean tensor where True indicates elements equal to the scalar.

Source

pub fn batch_split(&self, batch_size: usize) -> Result<Vec<Tensor>>

Split tensor into batches along the first dimension.

§Arguments
  • batch_size - Size of each batch
§Returns

Vector of tensors, each representing a batch. The last batch may be smaller if the tensor size is not evenly divisible by batch_size.

§Example
use trustformers_core::tensor::Tensor;

let tensor = Tensor::ones(&[10, 4]).expect("Failed to create ones tensor");
let batches = tensor.batch_split(3).unwrap();
assert_eq!(batches.len(), 4); // [3, 3, 3, 1]
assert_eq!(batches[0].shape(), &[3, 4]);
assert_eq!(batches[3].shape(), &[1, 4]);
Source

pub fn batch_stack(tensors: &[&Tensor]) -> Result<Tensor>

Batch tensors together along a new first dimension.

§Arguments
  • tensors - Slice of tensors to batch together. All tensors must have the same shape.
§Returns

A new tensor with shape [batch_size, original_shape…]

§Example
use trustformers_core::tensor::Tensor;

let t1 = Tensor::ones(&[3, 4]).expect("Failed to create ones tensor");
let t2 = Tensor::zeros(&[3, 4]).expect("Failed to create zero tensor");
let t3 = Tensor::ones(&[3, 4]).expect("Failed to create ones tensor");

let batched = Tensor::batch_stack(&[&t1, &t2, &t3]).unwrap();
assert_eq!(batched.shape(), &[3, 3, 4]);
Source

pub fn unbatch(&self) -> Result<Vec<Tensor>>

Unbatch a tensor by removing the first dimension.

§Returns

Vector of tensors, each representing an item from the batch.

§Example
use trustformers_core::tensor::Tensor;

let batched = Tensor::ones(&[3, 4, 5]).expect("Failed to create ones tensor");
let unbatched = batched.unbatch().unwrap();
assert_eq!(unbatched.len(), 3);
assert_eq!(unbatched[0].shape(), &[4, 5]);

Trait Implementations§

Source§

impl Add<&&Tensor> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the + operator.
Source§

fn add(self, other: &&Tensor) -> Self::Output

Performs the + operation. Read more
Source§

impl Add<&Tensor> for &&Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the + operator.
Source§

fn add(self, other: &Tensor) -> Self::Output

Performs the + operation. Read more
Source§

impl Add for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the + operator.
Source§

fn add(self, other: &Tensor) -> Self::Output

Performs the + operation. Read more
Source§

impl Add for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the + operator.
Source§

fn add(self, other: Tensor) -> Self::Output

Performs the + operation. Read more
Source§

impl Clone for Tensor

Source§

fn clone(&self) -> Self

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl Debug for Tensor

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl Div<f32> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the / operator.
Source§

fn div(self, scalar: f32) -> Self::Output

Performs the / operation. Read more
Source§

impl Div<f32> for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the / operator.
Source§

fn div(self, scalar: f32) -> Self::Output

Performs the / operation. Read more
Source§

impl Div<f64> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the / operator.
Source§

fn div(self, scalar: f64) -> Self::Output

Performs the / operation. Read more
Source§

impl Div<f64> for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the / operator.
Source§

fn div(self, scalar: f64) -> Self::Output

Performs the / operation. Read more
Source§

impl From<ArrayBase<OwnedRepr<f32>, Dim<IxDynImpl>>> for Tensor

Source§

fn from(arr: ArrayD<f32>) -> Self

Converts to this type from the input type.
Source§

impl From<ArrayBase<OwnedRepr<f64>, Dim<IxDynImpl>>> for Tensor

Source§

fn from(arr: ArrayD<f64>) -> Self

Converts to this type from the input type.
Source§

impl Mul<&Tensor> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, other: &Tensor) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<&Tensor> for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, other: &Tensor) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<Tensor> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, other: Tensor) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<f32> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, scalar: f32) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<f32> for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, scalar: f32) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<f64> for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, scalar: f64) -> Self::Output

Performs the * operation. Read more
Source§

impl Mul<f64> for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the * operator.
Source§

fn mul(self, scalar: f64) -> Self::Output

Performs the * operation. Read more
Source§

impl Sub for &Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the - operator.
Source§

fn sub(self, other: &Tensor) -> Self::Output

Performs the - operation. Read more
Source§

impl Sub for Tensor

Source§

type Output = Result<Tensor, TrustformersError>

The resulting type after applying the - operator.
Source§

fn sub(self, other: Tensor) -> Self::Output

Performs the - operation. Read more

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T> Instrument for T

Source§

fn instrument(self, span: Span) -> Instrumented<Self>

Instruments this type with the provided Span, returning an Instrumented wrapper. Read more
Source§

fn in_current_span(self) -> Instrumented<Self>

Instruments this type with the current Span, returning an Instrumented wrapper. Read more
Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> IntoEither for T

Source§

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§

impl<T> Pointable for T

Source§

const ALIGN: usize

The alignment of pointer.
Source§

type Init = T

The type for initializers.
Source§

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a with the given initializer. Read more
Source§

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer. Read more
Source§

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer. Read more
Source§

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer. Read more
Source§

impl<T> PolicyExt for T
where T: ?Sized,

Source§

fn and<P, B, E>(self, other: P) -> And<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow only if self and other return Action::Follow. Read more
Source§

fn or<P, B, E>(self, other: P) -> Or<T, P>
where T: Policy<B, E>, P: Policy<B, E>,

Create a new Policy that returns Action::Follow if either self or other returns Action::Follow. Read more
Source§

impl<T> Same for T

Source§

type Output = T

Should always be Self
Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.
Source§

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

Source§

fn vzip(self) -> V

Source§

impl<T> WithSubscriber for T

Source§

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>
where S: Into<Dispatch>,

Attaches the provided Subscriber to this type, returning a WithDispatch wrapper. Read more
Source§

fn with_current_subscriber(self) -> WithDispatch<Self>

Attaches the current default Subscriber to this type, returning a WithDispatch wrapper. Read more