Struct TensorBase 

Source
pub struct TensorBase<S: StorageTrait> { /* private fields */ }

Core tensor structure that can work with different storage backends.

TensorBase is a generic tensor implementation that abstracts over different storage types through the StorageTrait. This allows for both owned tensors and tensor views with zero-cost abstractions.

§Type Parameters

  • S - Storage type that implements StorageTrait

§Fields

  • storage - The underlying storage backend
  • ptr - Raw pointer to the tensor data for fast access
  • dtype - Data type of tensor elements
  • shape - Dimensions of the tensor
  • strides - Memory layout information for indexing
  • offset_bytes - Byte offset into the storage

Implementations§

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn add<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>

Add two tensors element-wise

Source

pub fn add_scalar<T: TensorElement + Add<Output = T>>(&self, scalar: T) -> Result<Tensor>

Add scalar to tensor

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn div<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>

Divide two tensors element-wise (shapes must match exactly)

Source

pub fn div_scalar<T: TensorElement + Div<Output = T>>(&self, scalar: T) -> Result<Tensor>

Divide all elements of the tensor by a scalar

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn mul<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>

Multiply two tensors element-wise

Source

pub fn mul_scalar<T: TensorElement + Mul<Output = T>>(&self, scalar: T) -> Result<Tensor>

Multiply all elements of the tensor by a scalar

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sub<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>

Subtract two tensors element-wise (shapes must match exactly)

Source

pub fn sub_scalar<T: TensorElement + Sub<Output = T>>(&self, scalar: T) -> Result<Tensor>

Subtract a scalar from all elements of the tensor

Source§

impl TensorBase<Storage>

Source

pub fn from_scalar<T: TensorElement>(value: T) -> Result<Self>

Creates a scalar (0-dimensional) tensor from a single value.

§Parameters
  • value - The scalar value to store in the tensor
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A scalar tensor with empty dimensions []
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let scalar = Tensor::from_scalar(42.0f32)?;
assert_eq!(scalar.dims(), []);
assert_eq!(scalar.to_scalar::<f32>()?, 42.0);

let int_scalar = Tensor::from_scalar(10i32)?;
assert_eq!(int_scalar.to_scalar::<i32>()?, 10);
Source

pub fn from_vec<T: TensorElement, S: Into<Shape>>(data: Vec<T>, shape: S) -> Result<Self>

Creates a tensor from a vector with the specified shape.

Takes ownership of the vector data and reshapes it according to the given dimensions. The total number of elements in the vector must match the product of all dimensions.

§Parameters
  • data - Vector containing the tensor elements
  • shape - The desired shape/dimensions for the tensor
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor with the specified shape
  • Err: If the data length doesn’t match the expected shape size
§Examples
use slsl::Tensor;

// Create 2D tensor
let data = vec![1, 2, 3, 4, 5, 6];
let tensor = Tensor::from_vec(data, [2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);

// Create 1D tensor
let data = vec![1.0, 2.0, 3.0];
let tensor = Tensor::from_vec(data, [3])?;
assert_eq!(tensor.dims(), [3]);
§Errors

Returns an error if data.len() doesn’t equal the product of shape dimensions.

Source

pub fn from_slice<T: TensorElement, S: Into<Shape>>(data: &[T], shape: S) -> Result<Self>

Creates a tensor from a slice with the specified shape.

Copies data from the slice and creates a tensor with the given dimensions. The slice length must match the product of all shape dimensions.

§Parameters
  • data - Slice containing the tensor elements
  • shape - The desired shape/dimensions for the tensor
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor with the specified shape containing copied data
  • Err: If the slice length doesn’t match the expected shape size
§Examples
use slsl::Tensor;

let data = [1, 2, 3, 4];
let tensor = Tensor::from_slice(&data, [2, 2])?;
assert_eq!(tensor.dims(), [2, 2]);
assert_eq!(tensor.at::<i32>([0, 1]), 2);

let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_slice(&data, [3, 2])?;
assert_eq!(tensor.dims(), [3, 2]);
Source

pub fn full<T: TensorElement>(shape: impl Into<Shape>, value: T) -> Result<Self>

Creates a tensor filled with a specific value.

All elements in the tensor will be set to the provided value.

§Parameters
  • shape - The shape/dimensions of the tensor to create
  • value - The value to fill all tensor elements with
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor filled with the specified value
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let tensor = Tensor::full([2, 3], 7.5f32)?;
assert_eq!(tensor.dims(), [2, 3]);
assert_eq!(tensor.at::<f32>([0, 0]), 7.5);
assert_eq!(tensor.at::<f32>([1, 2]), 7.5);

let tensor = Tensor::full([4], -1i32)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![-1, -1, -1, -1]);
Source

pub fn ones<T: TensorElement>(shape: impl Into<Shape>) -> Result<Self>

Creates a tensor filled with ones.

All elements in the tensor will be set to the numeric value 1 for the specified type.

§Parameters
  • shape - The shape/dimensions of the tensor to create
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor filled with ones
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let tensor = Tensor::ones::<f32>([2, 2])?;
assert_eq!(tensor.dims(), [2, 2]);
assert_eq!(tensor.to_flat_vec::<f32>()?, vec![1.0, 1.0, 1.0, 1.0]);

let tensor = Tensor::ones::<i32>([3])?;
assert_eq!(tensor.to_vec::<i32>()?, vec![1, 1, 1]);
Source

pub fn zeros<T: TensorElement>(shape: impl Into<Shape>) -> Result<Self>

Creates a tensor filled with zeros.

All elements in the tensor will be set to the numeric value 0 for the specified type. This operation is optimized for performance.

§Parameters
  • shape - The shape/dimensions of the tensor to create
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor filled with zeros
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f64>([3, 2])?;
assert_eq!(tensor.dims(), [3, 2]);
assert_eq!(tensor.to_flat_vec::<f64>()?, vec![0.0; 6]);

let tensor = Tensor::zeros::<i32>([4])?;
assert_eq!(tensor.to_vec::<i32>()?, vec![0, 0, 0, 0]);
Source

pub fn ones_like<T: TensorElement>(tensor: &Self) -> Result<Self>

Creates a tensor filled with ones, with the same shape as the input tensor.

This is a convenience function that creates a new tensor with the same dimensions as the reference tensor, but filled with ones of the specified type.

§Parameters
  • tensor - The reference tensor whose shape will be copied
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor with the same shape as input, filled with ones
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let original = Tensor::zeros::<f32>([2, 3])?;
let ones_tensor = Tensor::ones_like::<f32>(&original)?;
assert_eq!(ones_tensor.dims(), [2, 3]);
assert_eq!(ones_tensor.to_flat_vec::<f32>()?, vec![1.0; 6]);
Source

pub fn zeros_like<T: TensorElement>(tensor: &Self) -> Result<Self>

Creates a tensor filled with zeros, with the same shape as the input tensor.

This is a convenience function that creates a new tensor with the same dimensions as the reference tensor, but filled with zeros of the specified type.

§Parameters
  • tensor - The reference tensor whose shape will be copied
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor with the same shape as input, filled with zeros
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let original = Tensor::ones::<f32>([2, 2])?;
let zeros_tensor = Tensor::zeros_like::<f32>(&original)?;
assert_eq!(zeros_tensor.dims(), [2, 2]);
assert_eq!(zeros_tensor.to_flat_vec::<f32>()?, vec![0.0; 4]);
Source

pub fn eye<T: TensorElement>(n: usize) -> Result<Self>

Creates an identity matrix of size n × n.

An identity matrix has ones on the main diagonal and zeros elsewhere. This function creates a 2D square tensor with these properties.

§Parameters
  • n - The size of the square matrix (both width and height)
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A 2D tensor of shape [n, n] representing the identity matrix
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let eye = Tensor::eye::<f32>(3)?;
assert_eq!(eye.dims(), [3, 3]);
assert_eq!(eye.at::<f32>([0, 0]), 1.0);  // Diagonal elements
assert_eq!(eye.at::<f32>([1, 1]), 1.0);
assert_eq!(eye.at::<f32>([2, 2]), 1.0);
assert_eq!(eye.at::<f32>([0, 1]), 0.0);  // Off-diagonal elements
assert_eq!(eye.at::<f32>([1, 0]), 0.0);
Source

pub fn arange<T: TensorElement + PartialOrd + Add<Output = T> + AsPrimitive<f32>>(start: T, end: T, step: T) -> Result<Self>

Creates a 1D tensor with values from start to end (exclusive) with a given step.

Generates a sequence of values starting from start, incrementing by step, and stopping before end. Similar to Python’s range() function.

§Parameters
  • start - The starting value (inclusive)
  • end - The ending value (exclusive)
  • step - The increment between consecutive values
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A 1D tensor containing the generated sequence
  • Err: If step is zero, or if boolean type is used
§Examples
use slsl::Tensor;

// Basic usage
let tensor = Tensor::arange(0, 5, 1)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![0, 1, 2, 3, 4]);

// With step > 1
let tensor = Tensor::arange(0.0, 2.0, 0.5)?;
assert_eq!(tensor.to_vec::<f64>()?, vec![0.0, 0.5, 1.0, 1.5]);

// Negative step
let tensor = Tensor::arange(5, 0, -1)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![5, 4, 3, 2, 1]);
§Errors

Returns an error if:

  • step is zero
  • The tensor type is boolean
Source

pub fn linspace<T>(start: T, end: T, n: usize) -> Result<Self>
where T: TensorElement + Sub<Output = T> + Div<Output = T> + Add<Output = T> + Copy,

Creates a 1D tensor with n evenly spaced values from start to end (inclusive).

Generates n values linearly spaced between start and end, including both endpoints. Similar to NumPy’s linspace() function.

§Parameters
  • start - The starting value (inclusive)
  • end - The ending value (inclusive)
  • n - The number of values to generate
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A 1D tensor with n evenly spaced values
  • Err: If n is zero or if boolean type is used
§Examples
use slsl::Tensor;

// 5 values from 0 to 10
let tensor = Tensor::linspace(0.0f32, 10.0f32, 5)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![0.0, 2.5, 5.0, 7.5, 10.0]);

// Single value
let tensor = Tensor::linspace(5.0f32, 10.0f32, 1)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![5.0]);

// Negative range
let tensor = Tensor::linspace(-1.0f32, 1.0f32, 3)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![-1.0, 0.0, 1.0]);
§Errors

Returns an error if:

  • n is zero
  • The tensor type is boolean
Source

pub fn rand<T: TensorElement + SampleUniform>(low: T, high: T, shape: impl Into<Shape>) -> Result<Self>

Creates a tensor with random values from a uniform distribution in the range [low, high).

Generates random values uniformly distributed between low (inclusive) and high (exclusive). The random number generator is automatically seeded.

§Parameters
  • low - The lower bound (inclusive)
  • high - The upper bound (exclusive)
  • shape - The shape/dimensions of the tensor to create
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor filled with random values from the uniform distribution
  • Err: If the distribution parameters are invalid or tensor creation fails
§Examples
use slsl::Tensor;

// Random floats between 0.0 and 1.0
let tensor = Tensor::rand(0.0f32, 1.0f32, [2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);

// Random integers between 1 and 10
let tensor = Tensor::rand(1i32, 10i32, [5])?;
assert_eq!(tensor.dims(), [5]);
// All values should be in range [1, 10)
for &val in tensor.to_vec::<i32>()?.iter() {
    assert!(val >= 1 && val < 10);
}
§Notes
  • Uses a fast random number generator (SmallRng)
  • Values are uniformly distributed in the specified range
  • The upper bound is exclusive (values will be less than high)
Source

pub fn randn<T: TensorElement + From<f32>>(shape: impl Into<Shape>) -> Result<Self>

Creates a tensor with random values from a standard normal distribution N(0,1).

Generates random values from a normal (Gaussian) distribution with mean 0 and standard deviation 1. This is commonly used for neural network weight initialization.

§Parameters
  • shape - The shape/dimensions of the tensor to create
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A tensor filled with normally distributed random values
  • Err: If tensor creation fails
§Examples
use slsl::Tensor;

let tensor = Tensor::randn::<f32>([2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);

// Values should be roughly centered around 0
let values = tensor.to_flat_vec::<f32>()?;
let mean: f32 = values.iter().sum::<f32>() / values.len() as f32;
assert!(mean.abs() < 1.0);  // Should be close to 0 for large samples
§Notes
  • Uses a standard normal distribution (mean=0, std=1)
  • Commonly used for initializing neural network weights
  • Values follow the bell curve distribution
Source

pub fn triu<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>

Extracts the upper triangular part of a matrix (k-th diagonal and above).

Creates a new tensor containing only the upper triangular elements of the input matrix. Elements below the k-th diagonal are set to zero.

§Parameters
  • matrix - The input 2D tensor (must be 2-dimensional)
  • k - Diagonal offset:
    • k = 0: Main diagonal and above
    • k > 0: Above the main diagonal
    • k < 0: Below the main diagonal
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A new tensor with the upper triangular part
  • Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;

let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;

// Main diagonal and above (k=0)
let upper = Tensor::triu::<i32>(&matrix, 0)?;
// Result: [[1, 2, 3],
//          [0, 5, 6],
//          [0, 0, 9]]

// Above main diagonal (k=1)
let upper = Tensor::triu::<i32>(&matrix, 1)?;
// Result: [[0, 2, 3],
//          [0, 0, 6],
//          [0, 0, 0]]
§Errors

Returns an error if the input tensor is not 2-dimensional.

Source

pub fn tril<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>

Extracts the lower triangular part of a matrix (k-th diagonal and below).

Creates a new tensor containing only the lower triangular elements of the input matrix. Elements above the k-th diagonal are set to zero.

§Parameters
  • matrix - The input 2D tensor (must be 2-dimensional)
  • k - Diagonal offset:
    • k = 0: Main diagonal and below
    • k > 0: Above the main diagonal
    • k < 0: Below the main diagonal
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A new tensor with the lower triangular part
  • Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;

let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;

// Main diagonal and below (k=0)
let lower = Tensor::tril::<i32>(&matrix, 0)?;
// Result: [[1, 0, 0],
//          [4, 5, 0],
//          [7, 8, 9]]

// Below main diagonal (k=-1)
let lower = Tensor::tril::<i32>(&matrix, -1)?;
// Result: [[0, 0, 0],
//          [4, 0, 0],
//          [7, 8, 0]]
§Errors

Returns an error if the input tensor is not 2-dimensional.

Source

pub fn diag<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>

Extracts diagonal elements from a matrix.

Returns a 1D tensor containing the elements from the specified diagonal of the input matrix. The k-th diagonal can be the main diagonal (k=0), above it (k>0), or below it (k<0).

§Parameters
  • matrix - The input 2D tensor (must be 2-dimensional)
  • k - Diagonal offset:
    • k = 0: Main diagonal
    • k > 0: k-th diagonal above the main diagonal
    • k < 0: k-th diagonal below the main diagonal
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A 1D tensor with the diagonal elements
  • Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;

let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;

// Main diagonal (k=0)
let diag = Tensor::diag::<i32>(&matrix, 0)?;
assert_eq!(diag.to_vec::<i32>()?, vec![1, 5, 9]);

// First super-diagonal (k=1)
let diag = Tensor::diag::<i32>(&matrix, 1)?;
assert_eq!(diag.to_vec::<i32>()?, vec![2, 6]);

// First sub-diagonal (k=-1)
let diag = Tensor::diag::<i32>(&matrix, -1)?;
assert_eq!(diag.to_vec::<i32>()?, vec![4, 8]);
§Notes
  • For non-square matrices, the diagonal length is determined by the matrix dimensions
  • If the requested diagonal is out of bounds, an empty tensor is returned
§Errors

Returns an error if the input tensor is not 2-dimensional.

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn iter_dim(&self, dim: usize) -> DimIter<'_, S>

Creates an iterator over the specified dimension

Returns a DimIter that yields TensorViews representing slices along the specified dimension. The iterator is optimized for performance with lazy initialization and zero-cost abstractions.

§Arguments
  • dim - The dimension index to iterate over (0-based)
§Returns

A DimIter that can be used with standard Rust iterator methods

§Performance
  • Iterator construction: O(1) with lazy initialization
  • count() operations: O(1) using cached values
  • Actual iteration: Optimized for cache-friendly memory access
§Example
use slsl::Tensor;

let tensor = Tensor::from_vec(vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0], [2, 3]).unwrap();

// Iterate over rows (dimension 0)
for (i, row) in tensor.iter_dim(0).enumerate() {
    println!("Row {}: {:?}", i, row.as_slice::<f32>().unwrap());
}

// Ultra-fast count
assert_eq!(tensor.iter_dim(0).count(), 2);
assert_eq!(tensor.iter_dim(1).count(), 3);
Source

pub fn dim_len(&self, dim: usize) -> usize

Get the size of a specific dimension (ultra-fast alternative to iter_dim().count())

This method provides direct access to dimension sizes without any iterator construction overhead. While iter_dim(dim).count() is also very fast due to optimizations, this method is slightly faster for simple size queries.

§Arguments
  • dim - The dimension index (0-based)
§Returns

The size of the specified dimension

§Performance

Time complexity: O(1) - direct array access

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn iter(&self) -> TensorIter<'_, S>

Create an iterator over all elements in the tensor

Source§

impl TensorBase<Storage>

Source

pub fn view(&self) -> TensorView<'_>

Creates a view of this tensor without copying data.

A tensor view provides a lightweight way to access tensor data without taking ownership. The view borrows from the original tensor’s storage.

§Returns

A TensorView that references the same data as this tensor.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3])?;
let view = tensor.view();
assert_eq!(view.shape(), tensor.shape());
assert_eq!(view.dtype(), tensor.dtype());
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn strong_count(&self) -> usize

Returns the reference count for the underlying storage.

This shows how many tensor instances are sharing the same storage. Useful for memory management and debugging.

§Returns

The number of references to the underlying storage.

§Examples
use slsl::Tensor;

let tensor1 = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(tensor1.strong_count(), 1);

let tensor2 = tensor1.clone();
assert_eq!(tensor1.strong_count(), 2);
assert_eq!(tensor2.strong_count(), 2);
Source

pub fn strides(&self) -> &Stride

Returns a reference to the tensor’s strides.

Strides define how to traverse the tensor data in memory. Each stride represents the number of bytes to skip to move to the next element along that dimension.

§Returns

A reference to the Stride array.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3])?;
let strides = tensor.strides();
assert_eq!(strides.len(), 2);
Source

pub fn shape(&self) -> &Shape

Returns a reference to the tensor’s shape.

The shape defines the size of each dimension of the tensor.

§Returns

A reference to the Shape containing dimension sizes.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3, 4])?;
let shape = tensor.shape();
assert_eq!(shape.len(), 3);
assert_eq!(shape[0], 2);
assert_eq!(shape[1], 3);
assert_eq!(shape[2], 4);
Source

pub fn dims(&self) -> &[usize]

Returns the tensor dimensions as a slice.

This provides a convenient way to access the shape dimensions as a standard Rust slice.

§Returns

A slice containing the size of each dimension.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3, 4])?;
let dims = tensor.dims();
assert_eq!(dims, &[2, 3, 4]);
assert_eq!(dims.len(), 3);
Source

pub fn rank(&self) -> usize

Returns the number of dimensions (rank) of the tensor.

A scalar has rank 0, a vector has rank 1, a matrix has rank 2, etc.

§Returns

The number of dimensions as a usize.

§Examples
use slsl::Tensor;

// Scalar (rank 0)
let scalar = Tensor::from_scalar(3.14f32)?;
assert_eq!(scalar.rank(), 0);

// Vector (rank 1)
let vector = Tensor::zeros::<f32>([5])?;
assert_eq!(vector.rank(), 1);

// Matrix (rank 2)
let matrix = Tensor::zeros::<f32>([3, 4])?;
assert_eq!(matrix.rank(), 2);
Source

pub fn ndim(&self, n: usize) -> Result<usize>

Returns the size of a specific dimension.

§Parameters
  • n - The dimension index (0-based)
§Returns

The size of the specified dimension.

§Errors

Returns an error if the dimension index is out of bounds.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3, 4])?;

assert_eq!(tensor.ndim(0)?, 2); // First dimension
assert_eq!(tensor.ndim(1)?, 3); // Second dimension
assert_eq!(tensor.ndim(2)?, 4); // Third dimension

// This would return an error:
// tensor.ndim(3) // Index out of bounds
Source

pub fn numel(&self) -> usize

Returns the total number of elements in the tensor.

This is the product of all dimension sizes.

§Returns

The total number of elements as a usize.

§Examples
use slsl::Tensor;

// Scalar has 1 element
let scalar = Tensor::from_scalar(3.14f32)?;
assert_eq!(scalar.numel(), 1);

// 2x3 matrix has 6 elements
let matrix = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(matrix.numel(), 6);

// 2x3x4 tensor has 24 elements
let tensor3d = Tensor::zeros::<f32>([2, 3, 4])?;
assert_eq!(tensor3d.numel(), 24);
Source

pub fn is_empty(&self) -> bool

Returns true if the tensor contains no elements (i.e., numel() is 0).

Source

pub fn dtype(&self) -> DType

Returns the data type of the tensor elements.

§Returns

The DType enum value representing the element type.

§Examples
use slsl::{Tensor, DType};

let f32_tensor = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(f32_tensor.dtype(), DType::Fp32);

let i32_tensor = Tensor::zeros::<i32>([2, 3])?;
assert_eq!(i32_tensor.dtype(), DType::Int32);

let bool_tensor = Tensor::zeros::<bool>([2, 3])?;
assert_eq!(bool_tensor.dtype(), DType::Bool);
Source

pub fn offset_bytes(&self) -> usize

Returns the byte offset of this tensor in the underlying storage.

This is useful for tensor views and slices that point to a subset of the original tensor’s data.

§Returns

The byte offset as a usize.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([4, 4])?;
assert_eq!(tensor.offset_bytes(), 0); // Original tensor has no offset

// Sliced tensors may have non-zero offsets
// let slice = tensor.slice(...); // Slicing would create offsets
Source

pub fn as_slice<T: TensorElement + Copy>(&self) -> Result<&[T]>

Get tensor data as slice (only for contiguous tensors)

Source

pub fn as_mut_slice<T: TensorElement + Copy>(&mut self) -> Result<&mut [T]>

Get mutable tensor data as slice (only for contiguous tensors)

Source

pub fn as_ptr(&self) -> *const u8

Get data pointer

Source

pub fn as_mut_ptr(&mut self) -> *mut u8

Get a mutable pointer to the tensor data

Source

pub unsafe fn from_raw_parts( storage: S, ptr: NonNull<u8>, shape: Shape, strides: Shape, offset_bytes: usize, dtype: DType, ) -> Self

Create a new TensorBase from raw tensor components

§Safety

Caller must ensure all parameters are valid and consistent

Source

pub fn is_contiguous(&self) -> bool

Checks if the tensor’s memory layout is contiguous (C-style row-major or Fortran-style column-major).

Source

pub fn at<T: TensorElement + Copy>(&self, indices: impl Into<Shape>) -> T

Get a single element from the tensor at the specified indices

This method provides fast, bounds-checked access to individual tensor elements. It calculates the memory offset based on the indices and strides, then returns the value at that location.

§Arguments
  • indices - The indices for each dimension. Can be:
    • A single value for 1D tensors (e.g., i, (i,), or [i])
    • An array for multi-dimensional tensors (e.g., [i, j] for 2D, [i, j, k] for 3D)
    • Any type that implements Into<Shape>
§Returns

The value at the specified indices

§Safety

This method performs bounds checking in debug mode. In release mode, bounds checking is disabled for maximum performance.

§Examples
use slsl::Tensor;

// 1D tensor - multiple formats work
let tensor_1d = Tensor::from_vec(vec![1.0, 2.0, 3.0], [3]).unwrap();
let value1 = tensor_1d.at::<f32>(1);      // Single value (most convenient)
let value2 = tensor_1d.at::<f32>((1,));   // Tuple format
let value3 = tensor_1d.at::<f32>([1]);    // Array format

// All three forms address the same element
assert_eq!(value1, 2.0);
assert_eq!(value2, 2.0);
assert_eq!(value3, 2.0);

// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1.0, 2.0, 3.0, 4.0], [2, 2]).unwrap();
let value = tensor_2d.at::<f32>([1, 0]); // Returns 3.0 (second row, first column)
Source

pub fn clone_or_copy(&self) -> Result<Tensor>

Creates a contiguous copy of the tensor.

This is an alias for Self::to_contiguous for convenience. If the tensor is already contiguous and has no offset, it returns a clone without copying data.

§Returns

A contiguous Tensor with the same data.

§Errors

Returns an error if memory allocation fails or if the data type is not supported.

§Examples
use slsl::Tensor;

let tensor = Tensor::zeros::<f32>([2, 3])?;
let copy = tensor.clone_or_copy()?;

assert_eq!(tensor.shape(), copy.shape());
assert_eq!(tensor.dtype(), copy.dtype());
assert_eq!(tensor.numel(), copy.numel());
Source

pub fn to_contiguous(&self) -> Result<Tensor>

Creates a contiguous copy of the tensor if necessary.

If the tensor is already contiguous with no offset, this method returns a clone without copying data. Otherwise, it creates a new tensor with contiguous memory layout.

§Returns

A contiguous Tensor with the same data and shape.

§Errors

Returns an error if:

  • Memory allocation fails
  • The tensor’s data type is not supported
§Examples
use slsl::Tensor;

// Create a tensor
let tensor = Tensor::zeros::<f32>([2, 3])?;

// Get a contiguous version
let contiguous = tensor.to_contiguous()?;

assert_eq!(tensor.shape(), contiguous.shape());
assert_eq!(tensor.dtype(), contiguous.dtype());
assert!(contiguous.is_contiguous());

// For already contiguous tensors, this is very efficient
let contiguous2 = contiguous.to_contiguous()?;
assert_eq!(contiguous.strong_count(), contiguous2.strong_count());
Source

pub fn to_dtype<T: TensorElement>(&self) -> Result<Tensor>

Convert tensor to a different data type

This method creates a new tensor with the specified data type, performing element-wise type conversion. The operation is optimized for both contiguous and non-contiguous tensors.

§Type Parameters
  • T - Target tensor element type implementing TensorElement
§Returns
  • Result<Tensor> - New tensor with converted data type
§Performance
  • Fast path for same-type conversion (zero-cost clone)
  • Optimized memory layout for contiguous tensors
  • Efficient strided access for non-contiguous tensors
§Example
use slsl::Tensor;
let tensor_f32 = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3])?;
let tensor_i32 = tensor_f32.to_dtype::<i32>()?;
Source

pub fn to_owned(&self) -> Result<Tensor>

Deep clone - creates new storage with copied data

This method creates a completely independent copy of the tensor data, regardless of whether the original is contiguous or not.

§Performance
  • Fast path for contiguous tensors using optimized memcpy
  • Efficient strided copy for non-contiguous tensors
  • Type-specific optimizations for different data types
§Returns
  • Result<Tensor> - A new owned tensor with copied data
Source

pub fn map<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor>
where T: Copy + 'static + TensorElement,

Generic map function that applies a closure to each element. The closure’s input and output types must be the same.

§Arguments
  • f - Closure that takes a reference to an element and returns a new value
§Returns
  • Result<Tensor> - New tensor with mapped values
§Examples
use slsl::Tensor;

// Same type operation (most common case)
let tensor = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3]).unwrap();
let result = tensor.map::<f32>(|x| x.abs()).unwrap();

// For i32 tensors
let tensor = Tensor::from_vec(vec![1i32, -2, 3], [3]).unwrap();
let result = tensor.map::<i32>(|x| x.abs()).unwrap();
Source

pub fn map_contiguous<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor>
where T: Copy + 'static + TensorElement,

Applies a function f to each element of a contiguous tensor.

§Arguments
  • f - A mutable closure that takes a reference to an element of type T and returns a new element of type T.
§Returns

A Result containing a new Tensor with the mapped elements, or an error if the operation fails.

Source

pub fn map_non_contiguous<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor>
where T: Copy + 'static + TensorElement,

Applies a function f to each element of a non-contiguous tensor. This method iterates through the tensor’s elements based on its strides and applies the given function.

§Arguments
  • f - A mutable closure that takes a reference to an element of type T and returns a new element of type T.
§Returns

A Result containing a new Tensor with the mapped elements, or an error if the operation fails.

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn to_scalar<T: TensorElement + Copy>(&self) -> Result<T>

Extract scalar value from a 0-dimensional tensor

§Performance

This method is optimized for zero-cost scalar extraction with compile-time checks.

§Errors

Returns error if tensor is not 0-dimensional or dtype mismatch

Source

pub fn to_vec<T: TensorElement + Copy>(&self) -> Result<Vec<T>>

Convert 1D tensor to Vec<T>

§Performance
  • Contiguous tensors: Direct memory copy (memcpy-like performance)
  • Non-contiguous tensors: Optimized iteration
§Errors

Returns error if tensor is not 1D or dtype mismatch

Source

pub fn to_flat_vec<T: TensorElement + Copy>(&self) -> Result<Vec<T>>

Convert tensor to flat Vec<T> (any dimensionality)

§Performance
  • Contiguous tensors: Direct memory copy
  • Non-contiguous tensors: Optimized recursive iteration
§Errors

Returns error if dtype mismatch

Source

pub fn to_vec2<T: TensorElement + Copy>(&self) -> Result<Vec<Vec<T>>>

Convert 2D tensor to Vec<Vec<T>>

Source

pub fn to_vec3<T: TensorElement + Copy>(&self) -> Result<Vec<Vec<Vec<T>>>>

Convert 3D tensor to Vec<Vec<Vec<T>>>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn dot(&self, other: &Self) -> Result<f64>

Compute dot product between two 1D tensors

§Arguments
  • other - The other tensor to compute dot product with
§Returns
  • Result<f64> - The scalar dot product result as f64 (safe for all numeric types)
§Errors
  • Returns error if tensors are not 1D
  • Returns error if vector lengths don’t match
  • Returns error if dtypes don’t match
  • Returns error if dtype is not supported
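The documented behavior — equal-length 1D inputs, result accumulated and returned as f64 for safety across numeric types — can be sketched in plain Rust (not the slsl implementation; `dot_f64` is a hypothetical free function):

```rust
/// Dot product of two equal-length vectors, accumulating in f64
/// so the result is safe regardless of the element type.
fn dot_f64(a: &[f32], b: &[f32]) -> Result<f64, String> {
    if a.len() != b.len() {
        return Err(format!("length mismatch: {} vs {}", a.len(), b.len()));
    }
    Ok(a.iter().zip(b).map(|(&x, &y)| x as f64 * y as f64).sum())
}

fn main() {
    let a = [1.0f32, 2.0, 3.0];
    let b = [4.0f32, 5.0, 6.0];
    // 1*4 + 2*5 + 3*6 = 32
    assert_eq!(dot_f64(&a, &b).unwrap(), 32.0);
    // Mismatched lengths are an error, mirroring the rules above.
    assert!(dot_f64(&a, &b[..2]).is_err());
}
```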
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn matmul(&self, other: &Self) -> Result<Tensor>

Compute matrix multiplication between two 2D tensors

§Arguments
  • other - The other tensor to compute matrix multiplication with
§Returns
  • Result<Tensor> - The matrix multiplication result
§Errors
  • Returns error if tensors are not 2D
  • Returns error if matrix dimensions are incompatible
  • Returns error if dtypes don’t match
  • Returns error if dtype is not supported
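The shape contract above (2D inputs, inner dimensions must agree) can be illustrated with a naive row-major matmul in plain Rust; this is a conceptual sketch, not the backend slsl uses:

```rust
/// Naive matrix multiplication: (m x k) * (k x n) -> (m x n),
/// over row-major flat slices.
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Result<Vec<f32>, String> {
    if a.len() != m * k || b.len() != k * n {
        return Err("matrix dimensions are incompatible".into());
    }
    let mut out = vec![0.0f32; m * n];
    for i in 0..m {
        for p in 0..k {
            let aip = a[i * k + p];
            for j in 0..n {
                out[i * n + j] += aip * b[p * n + j];
            }
        }
    }
    Ok(out)
}

fn main() {
    // [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
    let c = matmul(&[1., 2., 3., 4.], &[5., 6., 7., 8.], 2, 2, 2).unwrap();
    assert_eq!(c, vec![19., 22., 43., 50.]);
}
```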
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn norm<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>

Compute various norms of a tensor, similar to PyTorch’s torch.linalg.norm. Supports L1, L2, and Lp norms for floating-point tensors.

  • ord - Order of the norm. Supported values:
    • 1.0: L1 norm (Manhattan norm) - uses backend asum
    • 2.0: L2 norm (Euclidean norm) - uses backend nrm2
    • Other values: general Lp norm
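The `ord` values map onto the standard norm formulas; a plain-Rust sketch over a flat slice (ignoring the `dim` machinery, and not using the backend asum/nrm2 routines mentioned above):

```rust
/// L1, L2, and general Lp norms of a slice, matching the documented
/// `ord` values: 1.0 -> sum |x|, 2.0 -> sqrt(sum x^2), p -> (sum |x|^p)^(1/p).
fn norm(x: &[f64], ord: f64) -> f64 {
    if ord == 1.0 {
        x.iter().map(|v| v.abs()).sum()
    } else if ord == 2.0 {
        x.iter().map(|v| v * v).sum::<f64>().sqrt()
    } else {
        x.iter().map(|v| v.abs().powf(ord)).sum::<f64>().powf(1.0 / ord)
    }
}

fn main() {
    let v = [3.0, -4.0];
    assert_eq!(norm(&v, 1.0), 7.0); // |3| + |-4|
    assert_eq!(norm(&v, 2.0), 5.0); // sqrt(9 + 16)
}
```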
Source

pub fn norm_keepdim<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>

Compute various norms of a tensor, similar to PyTorch’s torch.linalg.norm, but keeping the specified dimension(s).

Source

pub fn norm_l1<D: Dim>(&self, dim: D) -> Result<Tensor>

Compute L1 norm (sum of absolute values) along the specified dimension

Source

pub fn norm1_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>

Compute L1 norm (sum of absolute values) along the specified dimension, keeping dimensions

Source

pub fn norm_l2<D: Dim>(&self, dim: D) -> Result<Tensor>

Compute L2 norm (Euclidean norm) along the specified dimension

Source

pub fn norm2_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>

Compute L2 norm (Euclidean norm) along the specified dimension, keeping dimensions

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn broadcast_to<D: Into<Shape>>( &self, target_shape: D, ) -> Result<TensorView<'_>>

Broadcast tensor to new shape
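Broadcasting a view without copying comes down to stride bookkeeping: size-1 dimensions get stride 0 so the single element is reused across the target size. A hedged sketch of that rule (assuming the target rank is at least the tensor's rank; `broadcast_strides` is a hypothetical helper, not the slsl internals):

```rust
/// Compute element strides for broadcasting `shape`/`strides` (right-aligned)
/// to `target`: matching dims keep their stride, size-1 dims get stride 0,
/// anything else is an error.
fn broadcast_strides(shape: &[usize], strides: &[usize], target: &[usize]) -> Result<Vec<usize>, String> {
    let pad = target.len() - shape.len();
    let mut out = vec![0usize; target.len()]; // prepended dims broadcast with stride 0
    for i in 0..shape.len() {
        let t = target[pad + i];
        if shape[i] == t {
            out[pad + i] = strides[i];
        } else if shape[i] == 1 {
            out[pad + i] = 0;
        } else {
            return Err(format!("cannot broadcast {} to {}", shape[i], t));
        }
    }
    Ok(out)
}

fn main() {
    // Shape [3, 1] with strides [1, 1], broadcast to [2, 3, 4].
    let s = broadcast_strides(&[3, 1], &[1, 1], &[2, 3, 4]).unwrap();
    assert_eq!(s, vec![0, 1, 0]);
    // A non-1 dim that disagrees with the target cannot broadcast.
    assert!(broadcast_strides(&[3], &[1], &[4]).is_err());
}
```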

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn cat<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>

Concatenates a sequence of tensors along an existing dimension.

This function creates a new tensor by concatenating the input tensors along the specified dimension. All input tensors must have the same rank and data type, and all dimensions except the concatenation dimension must have the same size.

§Arguments
  • tensors - A slice of tensor references to concatenate. Must not be empty.
  • dim - The dimension along which to concatenate. Must be a valid dimension index for the input tensors.
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A new tensor with the concatenated data
  • Err: If tensors have different ranks/shapes/dtypes, or if the dimension is invalid
§Examples
use slsl::Tensor;

// Concatenate 1D tensors
let tensor1 = Tensor::from_vec(vec![1, 2, 3], [3])?;
let tensor2 = Tensor::from_vec(vec![4, 5, 6], [3])?;
let tensor3 = Tensor::from_vec(vec![7, 8, 9], [3])?;
let tensors = vec![&tensor1, &tensor2, &tensor3];

// Concatenate along dimension 0
let concatenated = Tensor::cat(&tensors, 0)?;
assert_eq!(concatenated.dims(), [9]);
assert_eq!(concatenated.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8, 9]);

// Concatenate 2D tensors along dimension 0
let tensor_2d1 = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let tensor_2d2 = Tensor::from_vec(vec![5, 6, 7, 8], [2, 2])?;
let tensors_2d = vec![&tensor_2d1, &tensor_2d2];

let concatenated_2d = Tensor::cat(&tensors_2d, 0)?;
assert_eq!(concatenated_2d.dims(), [4, 2]);

// Concatenate along dimension 1
let concatenated_2d_dim1 = Tensor::cat(&tensors_2d, 1)?;
assert_eq!(concatenated_2d_dim1.dims(), [2, 4]);
§Notes
  • All input tensors must have identical ranks and data types
  • All dimensions except the concatenation dimension must have the same size
  • The concatenation dimension size equals the sum of all input tensor sizes in that dimension
  • The function follows PyTorch’s cat behavior
  • For out-of-bounds dimensions, the function will return an error

Concatenate tensors with any storage type (Tensor or TensorView)

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn flatten<D1: Dim, D2: Dim>( &self, start_dim: D1, end_dim: D2, ) -> Result<TensorView<'_>>

Flattens a range of dimensions in the tensor into a single dimension.

This function creates a new view of the tensor where the specified range of dimensions is merged into a single dimension. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.

§Arguments
  • start_dim - The starting dimension to flatten (inclusive)
  • end_dim - The ending dimension to flatten (inclusive)

Both dimensions must be valid indices for this tensor, and start_dim <= end_dim.

§Returns

A Result<TensorView> containing:

  • Ok(TensorView): A new view with the specified dimensions flattened
  • Err: If the dimension indices are invalid or out of bounds
§Examples
use slsl::Tensor;

// 3D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8], [2, 2, 2])?;

// Flatten dimensions 0 and 1
let flattened = tensor.flatten(0, 1)?;
assert_eq!(flattened.dims(), [4, 2]);

// Flatten dimensions 1 and 2
let flattened = tensor.flatten(1, 2)?;
assert_eq!(flattened.dims(), [2, 4]);

// Flatten all dimensions
let flattened = tensor.flatten_all()?;
assert_eq!(flattened.dims(), [8]);
§Notes
  • This operation is memory-efficient as it returns a view rather than copying data
  • The flattened dimension size is the product of all dimensions in the range
  • Strides are recalculated to maintain correct memory access patterns
  • The function follows PyTorch’s flatten behavior
  • For invalid dimension ranges, the function will return an error
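The shape arithmetic from the notes above — the flattened dimension is the product of the range — can be sketched in plain Rust (a shape-only illustration; `flatten_shape` is hypothetical and does not handle strides):

```rust
/// New shape after flattening dims [start, end] (inclusive): the range
/// collapses to the product of its sizes.
fn flatten_shape(dims: &[usize], start: usize, end: usize) -> Result<Vec<usize>, String> {
    if start > end || end >= dims.len() {
        return Err("invalid dimension range".into());
    }
    let merged: usize = dims[start..=end].iter().product();
    let mut out = dims[..start].to_vec();
    out.push(merged);
    out.extend_from_slice(&dims[end + 1..]);
    Ok(out)
}

fn main() {
    // Mirrors the examples above for a [2, 2, 2] tensor.
    assert_eq!(flatten_shape(&[2, 2, 2], 0, 1).unwrap(), vec![4, 2]);
    assert_eq!(flatten_shape(&[2, 2, 2], 1, 2).unwrap(), vec![2, 4]);
    assert_eq!(flatten_shape(&[2, 2, 2], 0, 2).unwrap(), vec![8]);
}
```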
Source

pub fn flatten_all(&self) -> Result<TensorView<'_>>

Flattens all dimensions of the tensor into a single dimension.

This is a convenience function that flattens the entire tensor into a 1D tensor. It’s equivalent to calling flatten(0, self.rank() - 1).

§Returns

A Result<TensorView> containing:

  • Ok(TensorView): A new 1D view with all dimensions flattened
  • Err: If there’s an error during the flatten operation
§Examples
use slsl::Tensor;

// 3D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8], [2, 2, 2])?;

// Flatten all dimensions
let flattened = tensor.flatten_all()?;
assert_eq!(flattened.dims(), [8]);
assert_eq!(flattened.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8]);

// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let flattened = tensor_2d.flatten_all()?;
assert_eq!(flattened.dims(), [4]);
§Notes
  • This operation is memory-efficient as it returns a view rather than copying data
  • The resulting tensor will always have exactly one dimension
  • This is equivalent to self.flatten(0, self.rank() - 1)
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn normalize<T: TensorElement + Float + Debug>( &self, min: T, max: T, ) -> Result<Tensor>

Normalizes the tensor to the range [0, 1] using min-max normalization.

The formula used is: x_normalized = (x - min) / (max - min)

§Arguments
  • min - The minimum value in the original range
  • max - The maximum value in the original range
§Errors

Returns an error if:

  • The dtype of min and max does not match the tensor’s dtype.
  • min equals max (division by zero).
  • min is greater than max.
§Examples
use slsl::Tensor;

// Normalize a 3D image tensor from [0, 255] to [0, 1]
let image_data = vec![
    0.0, 128.0, 255.0,   // Pixel 1, Channel R, G, B
    64.0, 192.0, 32.0,   // Pixel 2, Channel R, G, B
];
let tensor = Tensor::from_vec(image_data, [2, 3]).unwrap();
let normalized = tensor.normalize(0., 255.).unwrap();
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn permute<D: Dims>(self, dims: D) -> Result<TensorBase<S>>

Permute tensor dimensions according to the given order

Source

pub fn flip_dims(self) -> Result<TensorBase<S>>

Flip tensor dimensions, reversing their order. For example: shape [1, 2, 3, 4] becomes [4, 3, 2, 1]

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn reshape<D: Into<Shape>>(self, new_shape: D) -> Result<TensorBase<S>>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn split_at<D: Dim>( &self, dim: D, index: usize, ) -> Result<(TensorView<'_>, TensorView<'_>)>

Split the tensor along dim at index, returning (left, right).

Rules:

  • index must be > 0 and < size_of(dim). If index == 0 or index == size, this returns an error.
  • Does not allocate; returns two views.
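The index rule above (strictly inside the dimension, boundary splits rejected) can be sketched for the 1D case with plain Rust slices, which are themselves zero-copy views; `split_at_checked` is a hypothetical helper, not the slsl method:

```rust
/// Split a 1D buffer at `index`, enforcing index > 0 and index < len;
/// splitting at either boundary is an error, as documented.
fn split_at_checked(data: &[i32], index: usize) -> Result<(&[i32], &[i32]), String> {
    if index == 0 || index >= data.len() {
        return Err(format!("index {} out of range (0, {})", index, data.len()));
    }
    Ok(data.split_at(index)) // zero-copy: two sub-slices of the same buffer
}

fn main() {
    let data = [1, 2, 3, 4];
    let (l, r) = split_at_checked(&data, 1).unwrap();
    assert_eq!(l, &[1]);
    assert_eq!(r, &[2, 3, 4]);
    assert!(split_at_checked(&data, 0).is_err());
    assert!(split_at_checked(&data, 4).is_err());
}
```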
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn squeeze<D: Dims>(&self, dims: D) -> Result<TensorView<'_>>

Returns a new tensor with dimensions of size one removed from the specified positions.

This function creates a new view of the tensor with dimensions of size 1 removed from the specified positions. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.

§Arguments
  • dims - The dimensions to squeeze. Must be valid dimension indices for this tensor.
    • Can be a single dimension index, a slice of indices, or a range
    • Only dimensions of size 1 will be removed
    • Dimensions of size greater than 1 will be ignored
§Returns

A Result<TensorView> containing:

  • Ok(TensorView): A new view with the specified dimensions removed
  • Err: If any dimension index is out of bounds
§Examples
use slsl::Tensor;

// 3D tensor with some dimensions of size 1
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [1, 4, 1])?;

// Squeeze specific dimensions
let squeezed = tensor.squeeze(0)?;  // Remove dimension 0
assert_eq!(squeezed.dims(), [4, 1]);

let squeezed = tensor.squeeze([0, 2])?;  // Remove dimensions 0 and 2
assert_eq!(squeezed.dims(), [4]);

// Squeeze all dimensions of size 1
let squeezed = tensor.squeeze_all()?;
assert_eq!(squeezed.dims(), [4]);

// 2D tensor with no dimensions of size 1
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let squeezed = tensor_2d.squeeze(0)?;  // No effect
assert_eq!(squeezed.dims(), [2, 2]);
§Notes
  • This operation is memory-efficient as it returns a view rather than copying data
  • Only dimensions of size 1 are removed; larger dimensions are preserved
  • If all dimensions are removed, a scalar tensor (shape [1]) is returned
  • The function follows PyTorch’s squeeze behavior
  • For out-of-bounds dimensions, the function will return an error
Source

pub fn squeeze_all(&self) -> Result<TensorView<'_>>

Returns a new tensor with all dimensions of size one removed.

This is a convenience function that removes all dimensions of size 1 from the tensor. It’s equivalent to calling squeeze with all dimension indices.

§Returns

A Result<TensorView> containing:

  • Ok(TensorView): A new view with all size-1 dimensions removed
  • Err: If there’s an error during the squeeze operation
§Examples
use slsl::Tensor;

// Tensor with multiple dimensions of size 1
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [1, 4, 1, 1])?;

// Remove all dimensions of size 1
let squeezed = tensor.squeeze_all()?;
assert_eq!(squeezed.dims(), [4]);

// Tensor with no dimensions of size 1
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let squeezed = tensor_2d.squeeze_all()?;
assert_eq!(squeezed.dims(), [2, 2]);  // No change
§Notes
  • This operation is memory-efficient as it returns a view rather than copying data
  • If all dimensions are of size 1, a scalar tensor (shape [1]) is returned
  • This is equivalent to self.squeeze(0..self.rank())
Source

pub fn unsqueeze<D: Dim>(&self, dim: D) -> Result<TensorView<'_>>

Returns a new tensor with a dimension of size one inserted at the specified position.

This function creates a new view of the tensor with an additional dimension of size 1 inserted at the specified position. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.

§Arguments
  • dim - The position at which to insert the new dimension. Must be in the range [-rank-1, rank].
    • For a 1D tensor [4], valid values are [-2, 1]
    • For a 2D tensor [2, 2], valid values are [-3, 2]
    • Negative indices count from the end: -1 means the last position, -2 means the second-to-last, etc.
§Returns

A Result<TensorView> containing:

  • Ok(TensorView): A new view with the inserted dimension
  • Err: If the dimension index is out of bounds
§Examples
use slsl::Tensor;

// 1D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [4])?;

// Insert at beginning (dimension 0)
let unsqueezed = tensor.unsqueeze(0)?;
assert_eq!(unsqueezed.dims(), [1, 4]);

// Insert at end (dimension 1)
let unsqueezed = tensor.unsqueeze(1)?;
assert_eq!(unsqueezed.dims(), [4, 1]);

// Using negative indices
let unsqueezed = tensor.unsqueeze(-1)?;  // Same as unsqueeze(1)
assert_eq!(unsqueezed.dims(), [4, 1]);

let unsqueezed = tensor.unsqueeze(-2)?;  // Same as unsqueeze(0)
assert_eq!(unsqueezed.dims(), [1, 4]);

// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let unsqueezed = tensor_2d.unsqueeze(1)?;
assert_eq!(unsqueezed.dims(), [2, 1, 2]);
§Notes
  • This operation is memory-efficient as it returns a view rather than copying data
  • The stride for the new dimension is set to 0 since its size is 1
  • The function follows PyTorch’s unsqueeze behavior for dimension indexing
  • For out-of-bounds dimensions, the function will return an error rather than silently inserting at the end, ensuring user intent is clear
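The index normalization described above — negative dims count from the end over the range [-rank-1, rank] — can be sketched on the shape alone in plain Rust (`unsqueeze_shape` is a hypothetical shape-only helper, not the slsl method):

```rust
/// Insert a size-1 dimension at `dim`, accepting negative indices
/// in [-rank-1, rank] as documented for unsqueeze.
fn unsqueeze_shape(dims: &[usize], dim: isize) -> Result<Vec<usize>, String> {
    let rank = dims.len() as isize;
    // Negative indices wrap: -1 means the last insertion position.
    let pos = if dim < 0 { dim + rank + 1 } else { dim };
    if pos < 0 || pos > rank {
        return Err(format!("dim {} out of range for rank {}", dim, rank));
    }
    let mut out = dims.to_vec();
    out.insert(pos as usize, 1);
    Ok(out)
}

fn main() {
    assert_eq!(unsqueeze_shape(&[4], 0).unwrap(), vec![1, 4]);
    assert_eq!(unsqueeze_shape(&[4], -1).unwrap(), vec![4, 1]); // same as dim 1
    assert_eq!(unsqueeze_shape(&[2, 2], 1).unwrap(), vec![2, 1, 2]);
    assert!(unsqueeze_shape(&[4], 2).is_err()); // out of bounds -> error
}
```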
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn stack<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>

Stacks a sequence of tensors along a new dimension.

This function creates a new tensor by stacking the input tensors along a new dimension. All input tensors must have the same shape and data type. The resulting tensor will have one more dimension than the input tensors.

§Arguments
  • tensors - A slice of tensor references to stack. Must not be empty.
  • dim - The dimension along which to stack. Can be in range [0, rank] where rank is the rank of the input tensors. A value of rank will insert the new dimension at the end.
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A new tensor with the stacked data
  • Err: If tensors have different shapes/dtypes, or if the dimension is invalid
§Examples
use slsl::Tensor;

// Stack 1D tensors
let tensor1 = Tensor::from_vec(vec![1, 2, 3], [3])?;
let tensor2 = Tensor::from_vec(vec![4, 5, 6], [3])?;
let tensor3 = Tensor::from_vec(vec![7, 8, 9], [3])?;
let tensors = vec![&tensor1, &tensor2, &tensor3];

// Stack along dimension 0 (beginning)
let stacked = Tensor::stack(&tensors, 0)?;
assert_eq!(stacked.dims(), [3, 3]);
assert_eq!(stacked.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8, 9]);

// Stack along dimension 1 (end)
let stacked = Tensor::stack(&tensors, 1)?;
assert_eq!(stacked.dims(), [3, 3]);
assert_eq!(stacked.to_flat_vec::<i32>()?, vec![1, 4, 7, 2, 5, 8, 3, 6, 9]);

// Stack 2D tensors
let tensor_2d1 = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let tensor_2d2 = Tensor::from_vec(vec![5, 6, 7, 8], [2, 2])?;
let tensors_2d = vec![&tensor_2d1, &tensor_2d2];

let stacked_2d = Tensor::stack(&tensors_2d, 0)?;
assert_eq!(stacked_2d.dims(), [2, 2, 2]);
§Notes
  • All input tensors must have identical shapes and data types
  • The new dimension size equals the number of input tensors
  • The function follows PyTorch’s stack behavior
  • For out-of-bounds dimensions, the new dimension is inserted at the end
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn standardize<T: TensorElement + Float>( &self, mean: &[T], std: &[T], dim: impl Dim, ) -> Result<Tensor>

Standardizes the tensor by subtracting the mean and dividing by the standard deviation along a specified dimension.

§Arguments
  • mean - The mean(s) to subtract. Length must match the size of the channel dimension.
  • std - The standard deviation(s) to divide by. Length must match the size of the channel dimension.
  • dim - The dimension along which to apply standardization (e.g., channel dimension).
§Errors

Returns an error if:

  • mean or std are empty.
  • mean and std have different lengths.
  • The dtype of mean and std does not match the tensor’s dtype.
  • Any value in std is zero.
  • dim is out of bounds for the tensor’s rank.
  • Length of mean/std doesn’t match the size of the specified channel dimension.
§Examples
use slsl::Tensor;

// HWC format: Height x Width x Channels (dim = 2)
let hwc_data = vec![
    10.0, 20.0, 30.0, // Pixel (0,0), Channels R, G, B
    40.0, 50.0, 60.0, // Pixel (0,1), Channels R, G, B
    70.0, 80.0, 90.0, // Pixel (1,0), Channels R, G, B
    100.0, 110.0, 120.0, // Pixel (1,1), Channels R, G, B
];
let hwc_tensor = Tensor::from_vec(hwc_data, [2, 2, 3]).unwrap();
let mean_rgb = [65.0, 75.0, 85.0];
let std_rgb = [30.0, 30.0, 30.0];
let standardized_hwc = hwc_tensor.standardize(&mean_rgb, &std_rgb, 2).unwrap();

// CHW format: Channels x Height x Width (dim = 0)
let chw_data = vec![
    10.0, 40.0, 70.0, 100.0, // Channel R: all pixels
    20.0, 50.0, 80.0, 110.0, // Channel G: all pixels
    30.0, 60.0, 90.0, 120.0, // Channel B: all pixels
];
let chw_tensor = Tensor::from_vec(chw_data, [3, 2, 2]).unwrap();
let standardized_chw = chw_tensor.standardize(&mean_rgb, &std_rgb, 0).unwrap();

This function’s behavior is similar to PyTorch’s torchvision.transforms.functional.normalize; refer to the PyTorch documentation for more details.

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn tile<D: Into<Shape>>(&self, repeats: D) -> Result<Tensor>

Constructs a tensor by repeating the elements of self according to the specified pattern.

This function creates a new tensor where each element of the input tensor is repeated according to the repeats argument. The repeats argument specifies the number of repetitions in each dimension.

§Arguments
  • repeats - An array specifying the number of repetitions for each dimension. Can be converted into Shape (e.g., [2, 3], vec![2, 3], etc.)
§Behavior
  • Fewer dimensions in repeats: If repeats has fewer dimensions than self, ones are prepended to repeats until all dimensions are specified.

    • Example: self shape (8, 6, 4, 2), repeats [2, 2] → treated as [1, 1, 2, 2]
  • More dimensions in repeats: If self has fewer dimensions than repeats, self is treated as if it were unsqueezed at dimension zero until it has as many dimensions as repeats specifies.

    • Example: self shape (4, 2), repeats [3, 3, 2, 2] → self treated as (1, 1, 4, 2)
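The alignment rule above can be sketched on shapes alone in plain Rust: the shorter of `shape`/`repeats` is left-padded with ones, then sizes multiply element-wise (`tile_shape` is a hypothetical shape-only helper, not the slsl method):

```rust
/// Resulting shape of tile: left-pad the shorter of shape/repeats
/// with ones, then multiply element-wise.
fn tile_shape(shape: &[usize], repeats: &[usize]) -> Vec<usize> {
    let n = shape.len().max(repeats.len());
    let pad = |v: &[usize]| -> Vec<usize> {
        let mut out = vec![1; n - v.len()];
        out.extend_from_slice(v);
        out
    };
    pad(shape).iter().zip(pad(repeats)).map(|(s, r)| s * r).collect()
}

fn main() {
    // repeats shorter than shape: [2, 2] acts as [1, 1, 2, 2]
    assert_eq!(tile_shape(&[8, 6, 4, 2], &[2, 2]), vec![8, 6, 8, 4]);
    // shape shorter than repeats: (4, 2) acts as (1, 1, 4, 2)
    assert_eq!(tile_shape(&[4, 2], &[3, 3, 2, 2]), vec![3, 3, 8, 4]);
    assert_eq!(tile_shape(&[3], &[2]), vec![6]);
}
```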
§Returns

A Result<Tensor> containing:

  • Ok(Tensor): A new tensor with the repeated data
  • Err: If there’s an error during the tiling operation
§Examples
use slsl::Tensor;

// 1D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3], [3])?;

// Repeat 2 times
let tiled = tensor.tile([2])?;
assert_eq!(tiled.dims(), [6]);
assert_eq!(tiled.to_flat_vec::<i32>()?, vec![1, 2, 3, 1, 2, 3]);

// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;

// Repeat 2 times in each dimension
let tiled = tensor_2d.tile([2, 2])?;
assert_eq!(tiled.dims(), [4, 4]);

// Repeat with different counts
let tiled = tensor_2d.tile([3, 1])?;
assert_eq!(tiled.dims(), [6, 2]);
§Notes
  • This operation creates a new tensor with copied data (not a view)
  • The resulting tensor size is the element-wise product of self.shape and repeats
  • The function follows PyTorch’s tile behavior
  • All supported data types are handled automatically
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn argmax<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn argmax_keepdim<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn argmax_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn argmin<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn argmin_keepdim<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn argmin_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn argmin_argmax<D: Dim + Clone>(&self, dim: D) -> Result<(Tensor, Tensor)>

Source

pub fn argmin_argmax_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor)>

Source

pub fn argmin_argmax_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor)>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn max<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn max_keepdim<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn max_impl<D: Dim + Clone>(&self, dim: D, keepdim: bool) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn max_argmax<D: Dim + Clone>(&self, dim: D) -> Result<(Tensor, Tensor)>

Source

pub fn max_argmax_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor)>

Source

pub fn max_argmax_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor)>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn mean<D: Dims + Copy>(&self, dims: D) -> Result<Tensor>

Source

pub fn mean_keepdim<D: Dims + Copy>(&self, dims: D) -> Result<Tensor>

Source

pub fn mean_all(&self) -> Result<f64>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn min_all<T: TensorElement + PartialOrd>(&self) -> Result<T>

Source

pub fn min<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn min_keepdim<D: Dim + Clone>(&self, dim: D) -> Result<Tensor>

Source

pub fn min_impl<D: Dim + Clone>(&self, dim: D, keepdim: bool) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn min_argmin<D: Dim + Clone>(&self, dim: D) -> Result<(Tensor, Tensor)>

Source

pub fn min_argmin_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor)>

Source

pub fn min_argmin_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor)>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn min_max<D: Dim + Clone>(&self, dim: D) -> Result<(Tensor, Tensor)>

Source

pub fn min_max_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor)>

Source

pub fn min_max_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor)>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn min_max_argmin_argmax<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>

Source

pub fn min_max_argmin_argmax_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>

Source

pub fn min_max_argmin_argmax_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sum<D: Dims>(&self, dims: D) -> Result<Tensor>

Source

pub fn sum_keepdim<D: Dims>(&self, dims: D) -> Result<Tensor>

Source

pub fn sum_all(&self) -> Result<f64>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn can_reduce_over_last_dims(&self, dim_indices: &[usize]) -> bool

Check if we can reduce over dimensions efficiently using contiguous memory access
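One plausible form of this check is that the reduced indices are exactly the trailing dimensions, so a row-major tensor can be reduced in one contiguous pass. A sketch under that assumption (the real slsl implementation may use a different criterion):

```rust
/// True when `dim_indices` are exactly the trailing dims of a tensor
/// of rank `rank`, so the reduction covers one contiguous block.
fn is_trailing_dims(rank: usize, dim_indices: &[usize]) -> bool {
    let mut sorted = dim_indices.to_vec();
    sorted.sort_unstable();
    // Walking from the largest index down, each must be rank-1, rank-2, ...
    sorted.iter().rev().enumerate().all(|(k, &d)| d == rank - 1 - k)
}

fn main() {
    assert!(is_trailing_dims(4, &[2, 3]));
    assert!(is_trailing_dims(4, &[3]));
    assert!(!is_trailing_dims(4, &[0, 1]));
    assert!(!is_trailing_dims(4, &[1, 3])); // gap: not a contiguous tail
}
```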

Source§

impl<'a> TensorBase<&'a Storage>

Implementation for TensorView - reuse Tensor logic

Source

pub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'a>

Creates a slice view of this tensor view.

This method provides zero-copy slicing capabilities for tensor views.

§Parameters
  • specs - The slice specification (indices, ranges, tuples, etc.)
§Returns

A new TensorView representing the sliced portion.

§Examples
use slsl::{s, Tensor};

let tensor = Tensor::zeros::<f32>([4, 6])?;
let view = tensor.view();

// Single index
let row = view.slice(1);
assert_eq!(row.shape().as_slice(), &[6]);

// Range slice using s! macro
let rows = view.slice(s![1..3]);
assert_eq!(rows.shape().as_slice(), &[2, 6]);
Source§

impl TensorBase<Storage>

Source

pub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'_>

Creates a slice view of this tensor with optimized performance.

This method provides efficient tensor slicing with fast paths for common patterns. Creates a zero-copy view without memory allocation.

§Parameters
  • specs - The slice specification (index, range with s! macro, tuple, etc.)
§Returns

A TensorView representing the sliced portion of the tensor.

§Examples
use slsl::{s, Tensor};

let tensor = Tensor::zeros::<f32>([4, 6])?;

// Single index
let row = tensor.slice(1);
assert_eq!(row.shape().as_slice(), &[6]);

// Range slice using s! macro
let rows = tensor.slice(s![1..3]);
assert_eq!(rows.shape().as_slice(), &[2, 6]);
Source

pub fn compute_slice( shape: &Shape, strides: &Stride, slice: SliceSpecs, ) -> (Shape, Stride, usize)

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn abs(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn ceil(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn clamp(&self, min: Option<f32>, max: Option<f32>) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn cos(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn exp(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn floor(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn log(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn neg(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn recip(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn relu(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn round(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sigmoid(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sin(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn softmax<D: Dim>(&self, dim: D) -> Result<Tensor>

Computes the softmax function along the specified dimension

The softmax function is defined as: softmax(x_i) = exp(x_i) / sum(exp(x_j)) for all j

This implementation uses the numerically stable version: softmax(x_i) = exp(x_i - max(x)) / sum(exp(x_j - max(x)))

§Arguments
  • dim - The dimension along which to compute the softmax
§Returns

A new tensor with the same shape and dtype as the input, containing the softmax values

§Examples
use slsl::Tensor;
let x = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3]).unwrap();
let result = x.softmax(-1).unwrap();
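The numerically stable formula above can be checked directly with a plain-Rust softmax over a flat slice (an illustration of the formula, not the slsl kernel):

```rust
/// Numerically stable softmax: subtract the max before exponentiating,
/// so large inputs do not overflow exp.
fn softmax(x: &[f64]) -> Vec<f64> {
    let max = x.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = x.iter().map(|v| (v - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let y = softmax(&[1.0, 2.0, 3.0]);
    // Outputs sum to 1 and preserve the input ordering.
    assert!((y.iter().sum::<f64>() - 1.0).abs() < 1e-12);
    assert!(y[2] > y[1] && y[1] > y[0]);
    // The max shift keeps large inputs finite.
    let z = softmax(&[1000.0, 1000.0]);
    assert!((z[0] - 0.5).abs() < 1e-12);
}
```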
Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sqr(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn sqrt(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn tan(&self) -> Result<Tensor>

Source§

impl<S: StorageTrait> TensorBase<S>

Source

pub fn tanh(&self) -> Result<Tensor>

Trait Implementations§

Source§

impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the + operator.
Source§

fn add(self, other: &TensorBase<S2>) -> Self::Output

Performs the + operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the + operator.
Source§

fn add(self, other: &TensorBase<S2>) -> Self::Output

Performs the + operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the + operator.
Source§

fn add(self, other: TensorBase<S2>) -> Self::Output

Performs the + operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the + operator.
Source§

fn add(self, other: TensorBase<S2>) -> Self::Output

Performs the + operation. Read more
Source§

impl<S: Clone + StorageTrait> Clone for TensorBase<S>

Source§

fn clone(&self) -> TensorBase<S>

Returns a duplicate of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the / operator.
Source§

fn div(self, other: &TensorBase<S2>) -> Self::Output

Performs the / operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the / operator.
Source§

fn div(self, other: &TensorBase<S2>) -> Self::Output

Performs the / operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the / operator.
Source§

fn div(self, other: TensorBase<S2>) -> Self::Output

Performs the / operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the / operator.
Source§

fn div(self, other: TensorBase<S2>) -> Self::Output

Performs the / operation. Read more
Source§

impl<'a, S: StorageTrait> IntoIterator for &'a TensorBase<S>

Source§

type Item = TensorIterElement

The type of the elements being iterated over.
Source§

type IntoIter = TensorIter<'a, S>

Which kind of iterator are we turning this into?
Source§

fn into_iter(self) -> Self::IntoIter

Creates an iterator from a value. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the * operator.
Source§

fn mul(self, other: &TensorBase<S2>) -> Self::Output

Performs the * operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the * operator.
Source§

fn mul(self, other: &TensorBase<S2>) -> Self::Output

Performs the * operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for &TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the * operator.
Source§

fn mul(self, other: TensorBase<S2>) -> Self::Output

Performs the * operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for TensorBase<S1>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the * operator.
Source§

fn mul(self, other: TensorBase<S2>) -> Self::Output

Performs the * operation. Read more
Source§

impl<S: StorageTrait> Neg for &TensorBase<S>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the - operator.
Source§

fn neg(self) -> Self::Output

Performs the unary - operation. Read more
Source§

impl<S: StorageTrait> Neg for TensorBase<S>

Source§

type Output = TensorBase<Storage>

The resulting type after applying the - operator.
Source§

fn neg(self) -> Self::Output

Performs the unary - operation. Read more
Source§

impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for &TensorBase<S1>

type Output = TensorBase<Storage>

The resulting type after applying the - operator.

fn sub(self, other: &TensorBase<S2>) -> Self::Output

Performs the - operation.

impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for TensorBase<S1>

type Output = TensorBase<Storage>

The resulting type after applying the - operator.

fn sub(self, other: &TensorBase<S2>) -> Self::Output

Performs the - operation.

impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for &TensorBase<S1>

type Output = TensorBase<Storage>

The resulting type after applying the - operator.

fn sub(self, other: TensorBase<S2>) -> Self::Output

Performs the - operation.

impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for TensorBase<S1>

type Output = TensorBase<Storage>

The resulting type after applying the - operator.

fn sub(self, other: TensorBase<S2>) -> Self::Output

Performs the - operation.
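Neg and Sub follow the same by-value/by-reference scheme, which lets unary and binary minus compose without moving operands. A hedged sketch on a hypothetical stand-in type (not this crate's code), mirroring the "shapes must match exactly" contract of `sub`:

```rust
use std::ops::{Neg, Sub};

// Hypothetical stand-in for an owned tensor.
#[derive(Clone, Debug, PartialEq)]
struct Toy(Vec<f32>);

impl Sub<&Toy> for &Toy {
    type Output = Toy;
    fn sub(self, other: &Toy) -> Toy {
        // shapes must match exactly, mirroring element-wise `sub`
        assert_eq!(self.0.len(), other.0.len(), "shape mismatch");
        Toy(self.0.iter().zip(&other.0).map(|(a, b)| a - b).collect())
    }
}

impl Neg for &Toy {
    type Output = Toy;
    fn neg(self) -> Toy { Toy(self.0.iter().map(|a| -a).collect()) }
}

impl Neg for Toy {
    type Output = Toy;
    // the owned impl just forwards to the borrowed one
    fn neg(self) -> Toy { -&self }
}

fn main() {
    let a = Toy(vec![5.0, 7.0]);
    let b = Toy(vec![2.0, 3.0]);
    // operands stay usable because `-` here borrows both sides
    let d = -(&a - &b);
    assert_eq!(d, Toy(vec![-3.0, -4.0]));
    assert_eq!(-b, Toy(vec![-2.0, -3.0])); // owned unary minus consumes `b`
}
```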

impl<S: StorageTrait + Send> Send for TensorBase<S>

impl<S: StorageTrait + Sync> Sync for TensorBase<S>

Auto Trait Implementations§

impl<S> Freeze for TensorBase<S>
where S: Freeze,

impl<S> RefUnwindSafe for TensorBase<S>
where S: RefUnwindSafe,

impl<S> Unpin for TensorBase<S>
where S: Unpin,

impl<S> UnwindSafe for TensorBase<S>
where S: UnwindSafe,

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.
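In practice this blanket impl means any `From` conversion defined for a type is automatically callable as `.into()`. A small self-contained illustration of the standard-library mechanism (the `Celsius`/`Fahrenheit` types are made up for this example):

```rust
// Blanket rule: writing `From<A> for B` gives `A: Into<B>` for free.
#[derive(Debug, PartialEq)]
struct Celsius(f64);
#[derive(Debug, PartialEq)]
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Fahrenheit {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // Only `From` was implemented, but `into()` works via the blanket impl.
    let f: Fahrenheit = Celsius(100.0).into();
    assert_eq!(f, Fahrenheit(212.0));
}
```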

impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>

Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
where F: FnOnce(&Self) -> bool,

Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<T> Pointable for T

const ALIGN: usize

The alignment of the pointer.

type Init = T

The type for initializers.

unsafe fn init(init: <T as Pointable>::Init) -> usize

Initializes a value with the given initializer.

unsafe fn deref<'a>(ptr: usize) -> &'a T

Dereferences the given pointer.

unsafe fn deref_mut<'a>(ptr: usize) -> &'a mut T

Mutably dereferences the given pointer.

unsafe fn drop(ptr: usize)

Drops the object pointed to by the given pointer.
impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.

impl<V, T> VZip<V> for T
where V: MultiLane<T>,

fn vzip(self) -> V