pub struct TensorBase<S: StorageTrait> { /* private fields */ }
Core tensor structure that can work with different storage backends.
TensorBase is a generic tensor implementation that abstracts over different
storage types through the StorageTrait. This allows for both owned tensors
and tensor views with zero-cost abstractions.
§Type Parameters
S - Storage type that implements StorageTrait
§Fields
storage - The underlying storage backend
ptr - Raw pointer to the tensor data for fast access
dtype - Data type of tensor elements
shape - Dimensions of the tensor
strides - Memory layout information for indexing
offset_bytes - Byte offset into the storage
§Implementations
impl<S: StorageTrait> TensorBase<S>
pub fn add<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>
Add two tensors element-wise
pub fn add_scalar<T: TensorElement + Add<Output = T>>(&self, scalar: T) -> Result<Tensor>
Add a scalar to all elements of the tensor
impl<S: StorageTrait> TensorBase<S>
pub fn div<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>
Divide two tensors element-wise (shapes must match exactly)
pub fn div_scalar<T: TensorElement + Div<Output = T>>(&self, scalar: T) -> Result<Tensor>
Divide all elements of the tensor by a scalar
impl<S: StorageTrait> TensorBase<S>
pub fn mul<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>
Multiply two tensors element-wise
pub fn mul_scalar<T: TensorElement + Mul<Output = T>>(&self, scalar: T) -> Result<Tensor>
Multiply all elements of the tensor by a scalar
impl<S: StorageTrait> TensorBase<S>
pub fn sub<T: StorageTrait>(&self, other: &TensorBase<T>) -> Result<Tensor>
Subtract two tensors element-wise (shapes must match exactly)
pub fn sub_scalar<T: TensorElement + Sub<Output = T>>(&self, scalar: T) -> Result<Tensor>
Subtract a scalar from all elements of the tensor
impl TensorBase<Storage>
pub fn from_scalar<T: TensorElement>(value: T) -> Result<Self>
Creates a scalar (0-dimensional) tensor from a single value.
§Parameters
value - The scalar value to store in the tensor
§Returns
A Result<Tensor> containing:
Ok(Tensor): A scalar tensor with empty dimensions []
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let scalar = Tensor::from_scalar(42.0f32)?;
assert_eq!(scalar.dims(), []);
assert_eq!(scalar.to_scalar::<f32>()?, 42.0);
let int_scalar = Tensor::from_scalar(10i32)?;
assert_eq!(int_scalar.to_scalar::<i32>()?, 10);
pub fn from_vec<T: TensorElement, S: Into<Shape>>(data: Vec<T>, shape: S) -> Result<Self>
Creates a tensor from a vector with the specified shape.
Takes ownership of the vector data and reshapes it according to the given dimensions. The total number of elements in the vector must match the product of all dimensions.
§Parameters
data - Vector containing the tensor elements
shape - The desired shape/dimensions for the tensor
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor with the specified shape
Err: If the data length doesn't match the expected shape size
§Examples
use slsl::Tensor;
// Create 2D tensor
let data = vec![1, 2, 3, 4, 5, 6];
let tensor = Tensor::from_vec(data, [2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);
// Create 1D tensor
let data = vec![1.0, 2.0, 3.0];
let tensor = Tensor::from_vec(data, [3])?;
assert_eq!(tensor.dims(), [3]);
§Errors
Returns an error if data.len() doesn't equal the product of the shape dimensions.
pub fn from_slice<T: TensorElement, S: Into<Shape>>(data: &[T], shape: S) -> Result<Self>
Creates a tensor from a slice with the specified shape.
Copies data from the slice and creates a tensor with the given dimensions. The slice length must match the product of all shape dimensions.
§Parameters
data - Slice containing the tensor elements
shape - The desired shape/dimensions for the tensor
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor with the specified shape containing copied data
Err: If the slice length doesn't match the expected shape size
§Examples
use slsl::Tensor;
let data = [1, 2, 3, 4];
let tensor = Tensor::from_slice(&data, [2, 2])?;
assert_eq!(tensor.dims(), [2, 2]);
assert_eq!(tensor.at::<i32>([0, 1]), 2);
let data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0];
let tensor = Tensor::from_slice(&data, [3, 2])?;
assert_eq!(tensor.dims(), [3, 2]);
pub fn full<T: TensorElement>(shape: impl Into<Shape>, value: T) -> Result<Self>
Creates a tensor filled with a specific value.
All elements in the tensor will be set to the provided value.
§Parameters
shape - The shape/dimensions of the tensor to create
value - The value to fill all tensor elements with
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor filled with the specified value
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let tensor = Tensor::full([2, 3], 7.5f32)?;
assert_eq!(tensor.dims(), [2, 3]);
assert_eq!(tensor.at::<f32>([0, 0]), 7.5);
assert_eq!(tensor.at::<f32>([1, 2]), 7.5);
let tensor = Tensor::full([4], -1i32)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![-1, -1, -1, -1]);
pub fn ones<T: TensorElement>(shape: impl Into<Shape>) -> Result<Self>
Creates a tensor filled with ones.
All elements in the tensor will be set to the numeric value 1 for the specified type.
§Parameters
shape - The shape/dimensions of the tensor to create
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor filled with ones
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let tensor = Tensor::ones::<f32>([2, 2])?;
assert_eq!(tensor.dims(), [2, 2]);
assert_eq!(tensor.to_flat_vec::<f32>()?, vec![1.0, 1.0, 1.0, 1.0]);
let tensor = Tensor::ones::<i32>([3])?;
assert_eq!(tensor.to_vec::<i32>()?, vec![1, 1, 1]);
pub fn zeros<T: TensorElement>(shape: impl Into<Shape>) -> Result<Self>
Creates a tensor filled with zeros.
All elements in the tensor will be set to the numeric value 0 for the specified type. This operation is optimized for performance.
§Parameters
shape - The shape/dimensions of the tensor to create
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor filled with zeros
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f64>([3, 2])?;
assert_eq!(tensor.dims(), [3, 2]);
assert_eq!(tensor.to_flat_vec::<f64>()?, vec![0.0; 6]);
let tensor = Tensor::zeros::<i32>([4])?;
assert_eq!(tensor.to_vec::<i32>()?, vec![0, 0, 0, 0]);
pub fn ones_like<T: TensorElement>(tensor: &Self) -> Result<Self>
Creates a tensor filled with ones, with the same shape as the input tensor.
This is a convenience function that creates a new tensor with the same dimensions as the reference tensor, but filled with ones of the specified type.
§Parameters
tensor - The reference tensor whose shape will be copied
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor with the same shape as the input, filled with ones
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let original = Tensor::zeros::<f32>([2, 3])?;
let ones_tensor = Tensor::ones_like::<f32>(&original)?;
assert_eq!(ones_tensor.dims(), [2, 3]);
assert_eq!(ones_tensor.to_flat_vec::<f32>()?, vec![1.0; 6]);
pub fn zeros_like<T: TensorElement>(tensor: &Self) -> Result<Self>
Creates a tensor filled with zeros, with the same shape as the input tensor.
This is a convenience function that creates a new tensor with the same dimensions as the reference tensor, but filled with zeros of the specified type.
§Parameters
tensor - The reference tensor whose shape will be copied
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor with the same shape as the input, filled with zeros
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let original = Tensor::ones::<f32>([2, 2])?;
let zeros_tensor = Tensor::zeros_like::<f32>(&original)?;
assert_eq!(zeros_tensor.dims(), [2, 2]);
assert_eq!(zeros_tensor.to_flat_vec::<f32>()?, vec![0.0; 4]);
pub fn eye<T: TensorElement>(n: usize) -> Result<Self>
Creates an identity matrix of size n × n.
An identity matrix has ones on the main diagonal and zeros elsewhere. This function creates a 2D square tensor with these properties.
§Parameters
n - The size of the square matrix (both width and height)
§Returns
A Result<Tensor> containing:
Ok(Tensor): A 2D tensor of shape [n, n] representing the identity matrix
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let eye = Tensor::eye::<f32>(3)?;
assert_eq!(eye.dims(), [3, 3]);
assert_eq!(eye.at::<f32>([0, 0]), 1.0); // Diagonal elements
assert_eq!(eye.at::<f32>([1, 1]), 1.0);
assert_eq!(eye.at::<f32>([2, 2]), 1.0);
assert_eq!(eye.at::<f32>([0, 1]), 0.0); // Off-diagonal elements
assert_eq!(eye.at::<f32>([1, 0]), 0.0);
pub fn arange<T: TensorElement + PartialOrd + Add<Output = T> + AsPrimitive<f32>>(start: T, end: T, step: T) -> Result<Self>
Creates a 1D tensor with values from start to end (exclusive) with a given step.
Generates a sequence of values starting from start, incrementing by step,
and stopping before end. Similar to Python’s range() function.
§Parameters
start - The starting value (inclusive)
end - The ending value (exclusive)
step - The increment between consecutive values
§Returns
A Result<Tensor> containing:
Ok(Tensor): A 1D tensor containing the generated sequence
Err: If step is zero, or if boolean type is used
§Examples
use slsl::Tensor;
// Basic usage
let tensor = Tensor::arange(0, 5, 1)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![0, 1, 2, 3, 4]);
// With step > 1
let tensor = Tensor::arange(0.0, 2.0, 0.5)?;
assert_eq!(tensor.to_vec::<f64>()?, vec![0.0, 0.5, 1.0, 1.5]);
// Negative step
let tensor = Tensor::arange(5, 0, -1)?;
assert_eq!(tensor.to_vec::<i32>()?, vec![5, 4, 3, 2, 1]);
§Errors
Returns an error if:
- step is zero
- The tensor type is boolean
pub fn linspace<T>(start: T, end: T, n: usize) -> Result<Self>
Creates a 1D tensor with n evenly spaced values from start to end (inclusive).
Generates n values linearly spaced between start and end, including both endpoints.
Similar to NumPy’s linspace() function.
§Parameters
start - The starting value (inclusive)
end - The ending value (inclusive)
n - The number of values to generate
§Returns
A Result<Tensor> containing:
Ok(Tensor): A 1D tensor with n evenly spaced values
Err: If n is zero or if boolean type is used
§Examples
use slsl::Tensor;
// 5 values from 0 to 10
let tensor = Tensor::linspace(0.0f32, 10.0f32, 5)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![0.0, 2.5, 5.0, 7.5, 10.0]);
// Single value
let tensor = Tensor::linspace(5.0f32, 10.0f32, 1)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![5.0]);
// Negative range
let tensor = Tensor::linspace(-1.0f32, 1.0f32, 3)?;
assert_eq!(tensor.to_vec::<f32>()?, vec![-1.0, 0.0, 1.0]);
§Errors
Returns an error if:
- n is zero
- The tensor type is boolean
pub fn rand<T: TensorElement + SampleUniform>(low: T, high: T, shape: impl Into<Shape>) -> Result<Self>
Creates a tensor with random values from a uniform distribution in the range [low, high).
Generates random values uniformly distributed between low (inclusive) and high (exclusive).
The random number generator is automatically seeded.
§Parameters
low - The lower bound (inclusive)
high - The upper bound (exclusive)
shape - The shape/dimensions of the tensor to create
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor filled with random values from the uniform distribution
Err: If the distribution parameters are invalid or tensor creation fails
§Examples
use slsl::Tensor;
// Random floats between 0.0 and 1.0
let tensor = Tensor::rand(0.0f32, 1.0f32, [2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);
// Random integers between 1 and 10
let tensor = Tensor::rand(1i32, 10i32, [5])?;
assert_eq!(tensor.dims(), [5]);
// All values should be in range [1, 10)
for &val in tensor.to_vec::<i32>()?.iter() {
assert!(val >= 1 && val < 10);
}
§Notes
- Uses a fast random number generator (SmallRng)
- Values are uniformly distributed in the specified range
- The upper bound is exclusive (values will be less than high)
pub fn randn<T: TensorElement + From<f32>>(shape: impl Into<Shape>) -> Result<Self>
Creates a tensor with random values from a standard normal distribution N(0,1).
Generates random values from a normal (Gaussian) distribution with mean 0 and standard deviation 1. This is commonly used for neural network weight initialization.
§Parameters
shape - The shape/dimensions of the tensor to create
§Returns
A Result<Tensor> containing:
Ok(Tensor): A tensor filled with normally distributed random values
Err: If tensor creation fails
§Examples
use slsl::Tensor;
let tensor = Tensor::randn::<f32>([2, 3])?;
assert_eq!(tensor.dims(), [2, 3]);
// Values should be roughly centered around 0
let values = tensor.to_flat_vec::<f32>()?;
let mean: f32 = values.iter().sum::<f32>() / values.len() as f32;
assert!(mean.abs() < 1.0); // Should be close to 0 for large samples
§Notes
- Uses a standard normal distribution (mean=0, std=1)
- Commonly used for initializing neural network weights
- Values follow the bell curve distribution
pub fn triu<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>
Extracts the upper triangular part of a matrix (k-th diagonal and above).
Creates a new tensor containing only the upper triangular elements of the input matrix. Elements below the k-th diagonal are set to zero.
§Parameters
matrix - The input 2D tensor (must be 2-dimensional)
k - Diagonal offset:
- k = 0: Main diagonal and above
- k > 0: Above the main diagonal
- k < 0: Below the main diagonal
§Returns
A Result<Tensor> containing:
Ok(Tensor): A new tensor with the upper triangular part
Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;
let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;
// Main diagonal and above (k=0)
let upper = Tensor::triu::<i32>(&matrix, 0)?;
// Result: [[1, 2, 3],
// [0, 5, 6],
// [0, 0, 9]]
// Above main diagonal (k=1)
let upper = Tensor::triu::<i32>(&matrix, 1)?;
// Result: [[0, 2, 3],
// [0, 0, 6],
// [0, 0, 0]]
§Errors
Returns an error if the input tensor is not 2-dimensional.
pub fn tril<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>
Extracts the lower triangular part of a matrix (k-th diagonal and below).
Creates a new tensor containing only the lower triangular elements of the input matrix. Elements above the k-th diagonal are set to zero.
§Parameters
matrix - The input 2D tensor (must be 2-dimensional)
k - Diagonal offset:
- k = 0: Main diagonal and below
- k > 0: Above the main diagonal
- k < 0: Below the main diagonal
§Returns
A Result<Tensor> containing:
Ok(Tensor): A new tensor with the lower triangular part
Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;
let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;
// Main diagonal and below (k=0)
let lower = Tensor::tril::<i32>(&matrix, 0)?;
// Result: [[1, 0, 0],
// [4, 5, 0],
// [7, 8, 9]]
// Below main diagonal (k=-1)
let lower = Tensor::tril::<i32>(&matrix, -1)?;
// Result: [[0, 0, 0],
// [4, 0, 0],
// [7, 8, 0]]
§Errors
Returns an error if the input tensor is not 2-dimensional.
pub fn diag<T: TensorElement>(matrix: &Tensor, k: i32) -> Result<Self>
Extracts diagonal elements from a matrix.
Returns a 1D tensor containing the elements from the specified diagonal of the input matrix. The k-th diagonal can be the main diagonal (k=0), above it (k>0), or below it (k<0).
§Parameters
matrix - The input 2D tensor (must be 2-dimensional)
k - Diagonal offset:
- k = 0: Main diagonal
- k > 0: k-th diagonal above the main diagonal
- k < 0: k-th diagonal below the main diagonal
§Returns
A Result<Tensor> containing:
Ok(Tensor): A 1D tensor with the diagonal elements
Err: If the input is not a 2D matrix
§Examples
use slsl::Tensor;
let matrix = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8, 9], [3, 3])?;
// Main diagonal (k=0)
let diag = Tensor::diag::<i32>(&matrix, 0)?;
assert_eq!(diag.to_vec::<i32>()?, vec![1, 5, 9]);
// First super-diagonal (k=1)
let diag = Tensor::diag::<i32>(&matrix, 1)?;
assert_eq!(diag.to_vec::<i32>()?, vec![2, 6]);
// First sub-diagonal (k=-1)
let diag = Tensor::diag::<i32>(&matrix, -1)?;
assert_eq!(diag.to_vec::<i32>()?, vec![4, 8]);
§Notes
- For non-square matrices, the diagonal length is determined by the matrix dimensions
- If the requested diagonal is out of bounds, an empty tensor is returned
§Errors
Returns an error if the input tensor is not 2-dimensional.
impl<S: StorageTrait> TensorBase<S>
pub fn iter_dim(&self, dim: usize) -> DimIter<'_, S>
Creates an iterator over the specified dimension
Returns a DimIter that yields TensorViews representing slices
along the specified dimension. The iterator is optimized for performance
with lazy initialization and zero-cost abstractions.
§Arguments
dim - The dimension index to iterate over (0-based)
§Returns
A DimIter that can be used with standard Rust iterator methods
§Performance
- Iterator construction: O(1) with lazy initialization
- count() operations: O(1) using cached values
- Actual iteration: Optimized for cache-friendly memory access
§Example
use slsl::Tensor;
let tensor = Tensor::from_vec(vec![1.0f32, 2.0, 3.0, 4.0, 5.0, 6.0], [2, 3]).unwrap();
// Iterate over rows (dimension 0)
for (i, row) in tensor.iter_dim(0).enumerate() {
println!("Row {}: {:?}", i, row.as_slice::<f32>().unwrap());
}
// Ultra-fast count
assert_eq!(tensor.iter_dim(0).count(), 2);
assert_eq!(tensor.iter_dim(1).count(), 3);
pub fn dim_len(&self, dim: usize) -> usize
Get the size of a specific dimension (ultra-fast alternative to iter_dim().count())
This method provides direct access to dimension sizes without any iterator
construction overhead. While iter_dim(dim).count() is also very fast due
to optimizations, this method is slightly faster for simple size queries.
§Arguments
dim - The dimension index (0-based)
§Returns
The size of the specified dimension
§Performance
Time complexity: O(1) - direct array access
impl<S: StorageTrait> TensorBase<S>
pub fn iter(&self) -> TensorIter<'_, S>
Create an iterator over all elements in the tensor
impl TensorBase<Storage>
pub fn view(&self) -> TensorView<'_>
Creates a view of this tensor without copying data.
A tensor view provides a lightweight way to access tensor data without taking ownership. The view borrows from the original tensor’s storage.
§Returns
A TensorView that references the same data as this tensor.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3])?;
let view = tensor.view();
assert_eq!(view.shape(), tensor.shape());
assert_eq!(view.dtype(), tensor.dtype());
impl<S: StorageTrait> TensorBase<S>
pub fn strong_count(&self) -> usize
Returns the reference count for the underlying storage.
This shows how many tensor instances are sharing the same storage. Useful for memory management and debugging.
§Returns
The number of references to the underlying storage.
§Examples
use slsl::Tensor;
let tensor1 = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(tensor1.strong_count(), 1);
let tensor2 = tensor1.clone();
assert_eq!(tensor1.strong_count(), 2);
assert_eq!(tensor2.strong_count(), 2);
pub fn strides(&self) -> &Stride
Returns a reference to the tensor’s strides.
Strides define how to traverse the tensor data in memory. Each stride represents the number of bytes to skip to move to the next element along that dimension.
§Returns
A reference to the Stride array.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3])?;
let strides = tensor.strides();
assert_eq!(strides.len(), 2);
pub fn shape(&self) -> &Shape
Returns a reference to the tensor’s shape.
The shape defines the size of each dimension of the tensor.
§Returns
A reference to the Shape containing dimension sizes.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3, 4])?;
let shape = tensor.shape();
assert_eq!(shape.len(), 3);
assert_eq!(shape[0], 2);
assert_eq!(shape[1], 3);
assert_eq!(shape[2], 4);
pub fn dims(&self) -> &[usize]
Returns the tensor dimensions as a slice.
This provides a convenient way to access the shape dimensions as a standard Rust slice.
§Returns
A slice containing the size of each dimension.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3, 4])?;
let dims = tensor.dims();
assert_eq!(dims, &[2, 3, 4]);
assert_eq!(dims.len(), 3);
pub fn rank(&self) -> usize
Returns the number of dimensions (rank) of the tensor.
A scalar has rank 0, a vector has rank 1, a matrix has rank 2, etc.
§Returns
The number of dimensions as a usize.
§Examples
use slsl::Tensor;
// Scalar (rank 0)
let scalar = Tensor::from_scalar(3.14f32)?;
assert_eq!(scalar.rank(), 0);
// Vector (rank 1)
let vector = Tensor::zeros::<f32>([5])?;
assert_eq!(vector.rank(), 1);
// Matrix (rank 2)
let matrix = Tensor::zeros::<f32>([3, 4])?;
assert_eq!(matrix.rank(), 2);
pub fn ndim(&self, n: usize) -> Result<usize>
Returns the size of a specific dimension.
§Parameters
n - The dimension index (0-based)
§Returns
The size of the specified dimension.
§Errors
Returns an error if the dimension index is out of bounds.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3, 4])?;
assert_eq!(tensor.ndim(0)?, 2); // First dimension
assert_eq!(tensor.ndim(1)?, 3); // Second dimension
assert_eq!(tensor.ndim(2)?, 4); // Third dimension
// This would return an error:
// tensor.ndim(3) // Index out of bounds
pub fn numel(&self) -> usize
Returns the total number of elements in the tensor.
This is the product of all dimension sizes.
§Returns
The total number of elements as a usize.
§Examples
use slsl::Tensor;
// Scalar has 1 element
let scalar = Tensor::from_scalar(3.14f32)?;
assert_eq!(scalar.numel(), 1);
// 2x3 matrix has 6 elements
let matrix = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(matrix.numel(), 6);
// 2x3x4 tensor has 24 elements
let tensor3d = Tensor::zeros::<f32>([2, 3, 4])?;
assert_eq!(tensor3d.numel(), 24);
pub fn is_empty(&self) -> bool
pub fn dtype(&self) -> DType
Returns the data type of the tensor elements.
§Returns
The DType enum value representing the element type.
§Examples
use slsl::{Tensor, DType};
let f32_tensor = Tensor::zeros::<f32>([2, 3])?;
assert_eq!(f32_tensor.dtype(), DType::Fp32);
let i32_tensor = Tensor::zeros::<i32>([2, 3])?;
assert_eq!(i32_tensor.dtype(), DType::Int32);
let bool_tensor = Tensor::zeros::<bool>([2, 3])?;
assert_eq!(bool_tensor.dtype(), DType::Bool);
pub fn offset_bytes(&self) -> usize
Returns the byte offset of this tensor in the underlying storage.
This is useful for tensor views and slices that point to a subset of the original tensor’s data.
§Returns
The byte offset as a usize.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([4, 4])?;
assert_eq!(tensor.offset_bytes(), 0); // Original tensor has no offset
// Sliced tensors may have non-zero offsets
// let slice = tensor.slice(...); // Slicing would create offsets
pub fn as_slice<T: TensorElement + Copy>(&self) -> Result<&[T]>
Get tensor data as slice (only for contiguous tensors)
pub fn as_mut_slice<T: TensorElement + Copy>(&mut self) -> Result<&mut [T]>
Get mutable tensor data as slice (only for contiguous tensors)
pub fn as_mut_ptr(&mut self) -> *mut u8
Get a mutable pointer to the tensor data
pub unsafe fn from_raw_parts(storage: S, ptr: NonNull<u8>, shape: Shape, strides: Shape, offset_bytes: usize, dtype: DType) -> Self
Create a new TensorBase from its raw components
§Safety
Caller must ensure all parameters are valid and consistent
pub fn is_contiguous(&self) -> bool
Checks if the tensor's memory layout is contiguous (C-style row-major or Fortran-style column-major).
pub fn at<T: TensorElement + Copy>(&self, indices: impl Into<Shape>) -> T
Get a single element from the tensor at the specified indices
This method provides fast, bounds-checked access to individual tensor elements. It calculates the memory offset based on the indices and strides, then returns the value at that location.
§Arguments
indices - The indices for each dimension. Can be:
- A single value for 1D tensors (e.g., i, (i,), or [i])
- An array for multi-dimensional tensors (e.g., [i, j] for 2D, [i, j, k] for 3D)
- Any type that implements Into<Shape>
§Returns
The value at the specified indices
§Safety
This method performs bounds checking in debug mode. In release mode, bounds checking is disabled for maximum performance.
§Examples
use slsl::Tensor;
// 1D tensor - multiple formats work
let tensor_1d = Tensor::from_vec(vec![1.0, 2.0, 3.0], [3]).unwrap();
let value1 = tensor_1d.at::<f32>(1); // Single value (most convenient)
let value2 = tensor_1d.at::<f32>((1,)); // Tuple format
let value3 = tensor_1d.at::<f32>([1]); // Array format
assert_eq!(value1, 2.0);
assert_eq!(value2, 2.0);
assert_eq!(value3, 2.0);
// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1.0, 2.0, 3.0, 4.0], [2, 2]).unwrap();
let value = tensor_2d.at::<f32>([1, 0]); // Returns 3.0 (second row, first column)
pub fn clone_or_copy(&self) -> Result<Tensor>
Creates a contiguous copy of the tensor.
This is an alias for Self::to_contiguous for convenience.
If the tensor is already contiguous and has no offset, it returns
a clone without copying data.
§Returns
A contiguous Tensor with the same data.
§Errors
Returns an error if memory allocation fails or if the data type is not supported.
§Examples
use slsl::Tensor;
let tensor = Tensor::zeros::<f32>([2, 3])?;
let copy = tensor.clone_or_copy()?;
assert_eq!(tensor.shape(), copy.shape());
assert_eq!(tensor.dtype(), copy.dtype());
assert_eq!(tensor.numel(), copy.numel());
pub fn to_contiguous(&self) -> Result<Tensor>
Creates a contiguous copy of the tensor if necessary.
If the tensor is already contiguous with no offset, this method returns a clone without copying data. Otherwise, it creates a new tensor with contiguous memory layout.
§Returns
A contiguous Tensor with the same data and shape.
§Errors
Returns an error if:
- Memory allocation fails
- The tensor’s data type is not supported
§Examples
use slsl::Tensor;
// Create a tensor
let tensor = Tensor::zeros::<f32>([2, 3])?;
// Get a contiguous version
let contiguous = tensor.to_contiguous()?;
assert_eq!(tensor.shape(), contiguous.shape());
assert_eq!(tensor.dtype(), contiguous.dtype());
assert!(contiguous.is_contiguous());
// For already contiguous tensors, this is very efficient
let contiguous2 = contiguous.to_contiguous()?;
assert_eq!(contiguous.strong_count(), contiguous2.strong_count());
pub fn to_dtype<T: TensorElement>(&self) -> Result<Tensor>
Convert tensor to a different data type
This method creates a new tensor with the specified data type, performing element-wise type conversion. The operation is optimized for both contiguous and non-contiguous tensors.
§Type Parameters
T - Target tensor element type implementing TensorElement
§Returns
Result<Tensor> - New tensor with converted data type
§Performance
- Fast path for same-type conversion (zero-cost clone)
- Optimized memory layout for contiguous tensors
- Efficient strided access for non-contiguous tensors
§Example
use slsl::Tensor;
let tensor_f32 = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3])?;
let tensor_i32 = tensor_f32.to_dtype::<i32>()?;
pub fn to_owned(&self) -> Result<Tensor>
Deep clone - creates new storage with copied data
This method creates a completely independent copy of the tensor data, regardless of whether the original is contiguous or not.
§Performance
- Fast path for contiguous tensors using optimized memcpy
- Efficient strided copy for non-contiguous tensors
- Type-specific optimizations for different data types
§Returns
Result<Tensor> - A new owned tensor with copied data
pub fn map<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor> where T: Copy + 'static + TensorElement
Generic map function that applies a closure to each element. Input and output types are the same.
§Arguments
f - Closure that takes a reference to an element and returns a new value
§Returns
Result<Tensor> - New tensor with mapped values
§Examples
use slsl::Tensor;
// Same type operation (most common case)
let tensor = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3]).unwrap();
let result = tensor.map::<f32>(|x| x.abs()).unwrap();
// For i32 tensors
let tensor = Tensor::from_vec(vec![1i32, -2, 3], [3]).unwrap();
let result = tensor.map::<i32>(|x| x.abs()).unwrap();
pub fn map_contiguous<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor> where T: Copy + 'static + TensorElement
Applies a function f to each element of a contiguous tensor.
pub fn map_non_contiguous<T>(&self, f: impl FnMut(&T) -> T) -> Result<Tensor> where T: Copy + 'static + TensorElement
Applies a function f to each element of a non-contiguous tensor.
This method iterates through the tensor’s elements based on its strides and applies the given function.
§Arguments
f - A mutable closure that takes a reference to an element of type T and returns a new element of type T.
§Returns
A Result containing a new Tensor with the mapped elements, or an error if the operation fails.
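The stride-based traversal described above can be sketched in plain Rust. This is illustrative only: `map_strided_2d`, its flat buffer layout, and element-unit strides are assumptions made for the sketch, not slsl API.

```rust
/// Illustrative sketch (not the slsl implementation): apply `f` to every
/// element of a strided 2-D view over a flat buffer. `strides` are counted
/// in elements; `offset` is the element offset of the view's first element.
fn map_strided_2d<T: Copy, F: FnMut(&T) -> T>(
    data: &[T],
    offset: usize,
    shape: [usize; 2],
    strides: [usize; 2],
    mut f: F,
) -> Vec<T> {
    let mut out = Vec::with_capacity(shape[0] * shape[1]);
    for i in 0..shape[0] {
        for j in 0..shape[1] {
            // Walk the buffer by strides instead of assuming contiguity.
            let idx = offset + i * strides[0] + j * strides[1];
            out.push(f(&data[idx]));
        }
    }
    out
}
```

With strides `[1, 2]` over a row-major `[2, 2]` buffer, this visits the data as its transpose, which is exactly the kind of non-contiguous view the method handles.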
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn to_scalar<T: TensorElement + Copy>(&self) -> Result<T>
pub fn to_scalar<T: TensorElement + Copy>(&self) -> Result<T>
Sourcepub fn to_flat_vec<T: TensorElement + Copy>(&self) -> Result<Vec<T>>
pub fn to_flat_vec<T: TensorElement + Copy>(&self) -> Result<Vec<T>>
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn dot(&self, other: &Self) -> Result<f64>
pub fn dot(&self, other: &Self) -> Result<f64>
Compute dot product between two 1D tensors
§Arguments
other - The other tensor to compute dot product with
§Returns
Result<f64> - The scalar dot product result as f64 (safe for all numeric types)
§Errors
- Returns error if tensors are not 1D
- Returns error if vector lengths don’t match
- Returns error if dtypes don’t match
- Returns error if dtype is not supported
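The contract above (matching lengths, f64 accumulation) can be sketched in plain Rust. `dot_f32` is a hypothetical standalone helper for illustration, not the slsl method:

```rust
/// Illustrative sketch: dot product of two equal-length f32 slices,
/// accumulated in f64 as the docs describe (safe for all numeric types).
fn dot_f32(a: &[f32], b: &[f32]) -> Result<f64, String> {
    if a.len() != b.len() {
        return Err(format!("length mismatch: {} vs {}", a.len(), b.len()));
    }
    // Widen each product to f64 before summing to avoid f32 accumulation error.
    Ok(a.iter().zip(b).map(|(&x, &y)| x as f64 * y as f64).sum())
}
```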
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn matmul(&self, other: &Self) -> Result<Tensor>
pub fn matmul(&self, other: &Self) -> Result<Tensor>
Compute matrix multiplication between two 2D tensors
§Arguments
other - The other tensor to compute matrix multiplication with
§Returns
Result<Tensor> - The matrix multiplication result
§Errors
- Returns error if tensors are not 2D
- Returns error if matrix dimensions are incompatible
- Returns error if dtypes don’t match
- Returns error if dtype is not supported
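For reference, the semantics of 2D matmul over row-major data can be sketched as a naive triple loop. This is only a semantic sketch (`matmul_naive` is a hypothetical helper); the library presumably dispatches to an optimized backend:

```rust
/// Illustrative sketch: naive (m, k) x (k, n) matrix multiply over
/// row-major f32 slices. Output is a row-major (m, n) buffer.
fn matmul_naive(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; m * n];
    for i in 0..m {
        for p in 0..k {
            let a_ip = a[i * k + p];
            for j in 0..n {
                // out[i][j] += a[i][p] * b[p][j]
                out[i * n + j] += a_ip * b[p * n + j];
            }
        }
    }
    out
}
```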
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn norm<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>
pub fn norm<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>
Compute various norms of a tensor, similar to PyTorch’s torch.linalg.norm.
Supports L1, L2, and Lp norms for floating-point tensors.
ord - Order of the norm. Supported values:
- 1.0: L1 norm (Manhattan norm) - uses backend asum
- 2.0: L2 norm (Euclidean norm) - uses backend nrm2
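The Lp norm family referenced above can be sketched over a 1-D slice. This is an illustrative helper (`lp_norm` is an assumption for the sketch), not the dimension-wise reduction the library performs:

```rust
/// Illustrative sketch: Lp norm of a 1-D f32 slice for ord = 1.0, 2.0,
/// or a general p > 0, matching the semantics described above.
fn lp_norm(xs: &[f32], ord: f32) -> f32 {
    if (ord - 1.0).abs() < f32::EPSILON {
        // L1: sum of absolute values (what a backend asum computes)
        xs.iter().map(|x| x.abs()).sum()
    } else if (ord - 2.0).abs() < f32::EPSILON {
        // L2: Euclidean norm (what a backend nrm2 computes)
        xs.iter().map(|x| x * x).sum::<f32>().sqrt()
    } else {
        // General Lp: (sum |x|^p)^(1/p)
        xs.iter().map(|x| x.abs().powf(ord)).sum::<f32>().powf(1.0 / ord)
    }
}
```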
Sourcepub fn norm_keepdim<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>
pub fn norm_keepdim<D: Dim>(&self, dim: D, ord: f32) -> Result<Tensor>
Compute various norms of a tensor, similar to PyTorch’s torch.linalg.norm,
but keeping the specified dimension(s).
Sourcepub fn norm_l1<D: Dim>(&self, dim: D) -> Result<Tensor>
pub fn norm_l1<D: Dim>(&self, dim: D) -> Result<Tensor>
Compute L1 norm (sum of absolute values) along the specified dimension
Sourcepub fn norm1_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>
pub fn norm1_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>
Compute L1 norm (sum of absolute values) along the specified dimension, keeping dimensions
Sourcepub fn norm_l2<D: Dim>(&self, dim: D) -> Result<Tensor>
pub fn norm_l2<D: Dim>(&self, dim: D) -> Result<Tensor>
Compute L2 norm (Euclidean norm) along the specified dimension
Sourcepub fn norm2_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>
pub fn norm2_keepdim<D: Dim>(&self, dim: D) -> Result<Tensor>
Compute L2 norm (Euclidean norm) along the specified dimension, keeping dimensions
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn broadcast_to<D: Into<Shape>>(
&self,
target_shape: D,
) -> Result<TensorView<'_>>
pub fn broadcast_to<D: Into<Shape>>( &self, target_shape: D, ) -> Result<TensorView<'_>>
Broadcast tensor to new shape
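Broadcasting follows the usual NumPy/PyTorch-style rule: trailing dimensions are aligned, and a source dimension is compatible if it equals the target size or is 1. A minimal sketch of that compatibility check (`can_broadcast` is a hypothetical helper, not slsl API):

```rust
/// Illustrative sketch of the broadcasting compatibility rule:
/// align trailing dimensions; a source dim can expand only if it is 1
/// or already matches the target dim.
fn can_broadcast(src: &[usize], target: &[usize]) -> bool {
    if src.len() > target.len() {
        return false;
    }
    src.iter()
        .rev()
        .zip(target.iter().rev())
        .all(|(&s, &t)| s == t || s == 1)
}
```

The zero-copy trick behind a broadcast view is to give each expanded dimension a stride of 0, so every index along it reads the same element.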
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn cat<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>
pub fn cat<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>
Concatenates a sequence of tensors along an existing dimension.
This function creates a new tensor by concatenating the input tensors along the specified dimension. All input tensors must have the same rank and data type, and all dimensions except the concatenation dimension must have the same size.
§Arguments
tensors - A slice of tensor references to concatenate. Must not be empty.
dim - The dimension along which to concatenate. Must be a valid dimension index for the input tensors.
§Returns
A Result<Tensor> containing:
Ok(Tensor): A new tensor with the concatenated data
Err: If tensors have different ranks/shapes/dtypes, or if the dimension is invalid
§Examples
use slsl::Tensor;
// Concatenate 1D tensors
let tensor1 = Tensor::from_vec(vec![1, 2, 3], [3])?;
let tensor2 = Tensor::from_vec(vec![4, 5, 6], [3])?;
let tensor3 = Tensor::from_vec(vec![7, 8, 9], [3])?;
let tensors = vec![&tensor1, &tensor2, &tensor3];
// Concatenate along dimension 0
let concatenated = Tensor::cat(&tensors, 0)?;
assert_eq!(concatenated.dims(), [9]);
assert_eq!(concatenated.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8, 9]);
// Concatenate 2D tensors along dimension 0
let tensor_2d1 = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let tensor_2d2 = Tensor::from_vec(vec![5, 6, 7, 8], [2, 2])?;
let tensors_2d = vec![&tensor_2d1, &tensor_2d2];
let concatenated_2d = Tensor::cat(&tensors_2d, 0)?;
assert_eq!(concatenated_2d.dims(), [4, 2]);
// Concatenate along dimension 1
let concatenated_2d_dim1 = Tensor::cat(&tensors_2d, 1)?;
assert_eq!(concatenated_2d_dim1.dims(), [2, 4]);
§Notes
- All input tensors must have identical ranks and data types
- All dimensions except the concatenation dimension must have the same size
- The concatenation dimension size equals the sum of all input tensor sizes in that dimension
- The function follows PyTorch’s cat behavior
- For out-of-bounds dimensions, the function will return an error
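The shape rules above can be sketched as a standalone shape-validation helper (`cat_shape` is a hypothetical function for illustration; slsl's internal checks may differ):

```rust
/// Illustrative sketch: validate input shapes and compute the output
/// shape of a concatenation along `dim`, per the rules above.
fn cat_shape(shapes: &[Vec<usize>], dim: usize) -> Result<Vec<usize>, String> {
    let first = shapes.first().ok_or_else(|| "empty input".to_string())?;
    if dim >= first.len() {
        return Err(format!("dim {dim} out of bounds for rank {}", first.len()));
    }
    let mut out = first.clone();
    for s in &shapes[1..] {
        if s.len() != first.len() {
            return Err("rank mismatch".to_string());
        }
        for d in 0..s.len() {
            // Every dimension except the concat dimension must agree.
            if d != dim && s[d] != first[d] {
                return Err(format!("size mismatch in dim {d}"));
            }
        }
        // The concat dimension is the sum of the input sizes.
        out[dim] += s[dim];
    }
    Ok(out)
}
```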
Concatenate tensors with any storage type (Tensor or TensorView)
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn flatten<D1: Dim, D2: Dim>(
&self,
start_dim: D1,
end_dim: D2,
) -> Result<TensorView<'_>>
pub fn flatten<D1: Dim, D2: Dim>( &self, start_dim: D1, end_dim: D2, ) -> Result<TensorView<'_>>
Flattens a range of dimensions in the tensor into a single dimension.
This function creates a new view of the tensor where the specified range of dimensions is merged into a single dimension. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.
§Arguments
start_dim - The starting dimension to flatten (inclusive)
end_dim - The ending dimension to flatten (inclusive)
Both dimensions must be valid indices for this tensor, and start_dim <= end_dim.
§Returns
A Result<TensorView> containing:
Ok(TensorView): A new view with the specified dimensions flattened
Err: If the dimension indices are invalid or out of bounds
§Examples
use slsl::Tensor;
// 3D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8], [2, 2, 2])?;
// Flatten dimensions 0 and 1
let flattened = tensor.flatten(0, 1)?;
assert_eq!(flattened.dims(), [4, 2]);
// Flatten dimensions 1 and 2
let flattened = tensor.flatten(1, 2)?;
assert_eq!(flattened.dims(), [2, 4]);
// Flatten all dimensions
let flattened = tensor.flatten_all()?;
assert_eq!(flattened.dims(), [8]);
§Notes
- This operation is memory-efficient as it returns a view rather than copying data
- The flattened dimension size is the product of all dimensions in the range
- Strides are recalculated to maintain correct memory access patterns
- The function follows PyTorch’s flatten behavior
- For invalid dimension ranges, the function will return an error
§See Also
Self::flatten_all: Flatten all dimensions into a single dimension
TensorView: The view type returned by this function
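The shape arithmetic is simple: the merged dimension is the product of the sizes in the flattened range. A minimal sketch (`flatten_shape` is a hypothetical helper, not slsl API):

```rust
/// Illustrative sketch: the shape produced by flattening dims
/// `start..=end` is the product of those sizes, with the rest untouched.
fn flatten_shape(shape: &[usize], start: usize, end: usize) -> Vec<usize> {
    let mut out = shape[..start].to_vec();
    // Merged dimension: product of all sizes in the inclusive range.
    out.push(shape[start..=end].iter().product());
    out.extend_from_slice(&shape[end + 1..]);
    out
}
```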
Sourcepub fn flatten_all(&self) -> Result<TensorView<'_>>
pub fn flatten_all(&self) -> Result<TensorView<'_>>
Flattens all dimensions of the tensor into a single dimension.
This is a convenience function that flattens the entire tensor into a 1D tensor.
It’s equivalent to calling flatten(0, self.rank() - 1).
§Returns
A Result<TensorView> containing:
Ok(TensorView): A new 1D view with all dimensions flattened
Err: If there’s an error during the flatten operation
§Examples
use slsl::Tensor;
// 3D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4, 5, 6, 7, 8], [2, 2, 2])?;
// Flatten all dimensions
let flattened = tensor.flatten_all()?;
assert_eq!(flattened.dims(), [8]);
assert_eq!(flattened.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8]);
// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let flattened = tensor_2d.flatten_all()?;
assert_eq!(flattened.dims(), [4]);
§Notes
- This operation is memory-efficient as it returns a view rather than copying data
- The resulting tensor will always have exactly one dimension
- This is equivalent to self.flatten(0, self.rank() - 1)
§See Also
Self::flatten: Flatten a specific range of dimensions
TensorView: The view type returned by this function
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn normalize<T: TensorElement + Float + Debug>(
&self,
min: T,
max: T,
) -> Result<Tensor>
pub fn normalize<T: TensorElement + Float + Debug>( &self, min: T, max: T, ) -> Result<Tensor>
Normalizes the tensor to the range [0, 1] using min-max normalization.
The formula used is: x_normalized = (x - min) / (max - min)
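The formula can be sketched directly over a flat f32 buffer. This is an illustrative standalone helper (`min_max_normalize` is an assumption for the sketch), mirroring the error conditions listed below:

```rust
/// Illustrative sketch of min-max normalization:
/// x_normalized = (x - min) / (max - min)
fn min_max_normalize(xs: &[f32], min: f32, max: f32) -> Result<Vec<f32>, String> {
    if min >= max {
        // Covers both min == max (division by zero) and min > max.
        return Err("min must be strictly less than max".to_string());
    }
    let range = max - min;
    Ok(xs.iter().map(|x| (x - min) / range).collect())
}
```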
§Arguments
min - The minimum value in the original range
max - The maximum value in the original range
§Errors
Returns an error if:
- The dtype of min and max does not match the tensor’s dtype.
- min equals max (division by zero).
- min is greater than max.
§Examples
use slsl::Tensor;
// Normalize a 3D image tensor from [0, 255] to [0, 1]
let image_data = vec![
0.0, 128.0, 255.0, // Pixel 1, Channel R, G, B
64.0, 192.0, 32.0, // Pixel 2, Channel R, G, B
];
let tensor = Tensor::from_vec(image_data, [2, 3]).unwrap();
let normalized = tensor.normalize(0., 255.).unwrap();
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn permute<D: Dims>(self, dims: D) -> Result<TensorBase<S>>
pub fn permute<D: Dims>(self, dims: D) -> Result<TensorBase<S>>
Permute tensor dimensions according to the given order
Sourcepub fn flip_dims(self) -> Result<TensorBase<S>>
pub fn flip_dims(self) -> Result<TensorBase<S>>
Flip tensor dimensions, reversing the order of dimensions. For example: shape [1, 2, 3, 4] becomes [4, 3, 2, 1].
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn split_at<D: Dim>(
&self,
dim: D,
index: usize,
) -> Result<(TensorView<'_>, TensorView<'_>)>
pub fn split_at<D: Dim>( &self, dim: D, index: usize, ) -> Result<(TensorView<'_>, TensorView<'_>)>
Split the tensor along dim at index, returning (left, right).
Rules:
- index must be > 0 and < size_of(dim). If index == 0 or index == size, this returns an error.
- Does not allocate; returns two views.
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn squeeze<D: Dims>(&self, dims: D) -> Result<TensorView<'_>>
pub fn squeeze<D: Dims>(&self, dims: D) -> Result<TensorView<'_>>
Returns a new tensor with dimensions of size one removed from the specified positions.
This function creates a new view of the tensor with dimensions of size 1 removed from the specified positions. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.
§Arguments
dims - The dimensions to squeeze. Must be valid dimension indices for this tensor.
- Can be a single dimension index, a slice of indices, or a range
- Only dimensions of size 1 will be removed
- Dimensions of size greater than 1 will be ignored
§Returns
A Result<TensorView> containing:
Ok(TensorView): A new view with the specified dimensions removed
Err: If any dimension index is out of bounds
§Examples
use slsl::Tensor;
// 3D tensor with some dimensions of size 1
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [1, 4, 1])?;
// Squeeze specific dimensions
let squeezed = tensor.squeeze(0)?; // Remove dimension 0
assert_eq!(squeezed.dims(), [4, 1]);
let squeezed = tensor.squeeze([0, 2])?; // Remove dimensions 0 and 2
assert_eq!(squeezed.dims(), [4]);
// Squeeze all dimensions of size 1
let squeezed = tensor.squeeze_all()?;
assert_eq!(squeezed.dims(), [4]);
// 2D tensor with no dimensions of size 1
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let squeezed = tensor_2d.squeeze(0)?; // No effect
assert_eq!(squeezed.dims(), [2, 2]);
§Notes
- This operation is memory-efficient as it returns a view rather than copying data
- Only dimensions of size 1 are removed; larger dimensions are preserved
- If all dimensions are removed, a scalar tensor (shape [1]) is returned
- The function follows PyTorch’s squeeze behavior
- For out-of-bounds dimensions, the function will return an error
§See Also
Self::unsqueeze: Add dimensions of size 1
Self::squeeze_all: Remove all dimensions of size 1
TensorView: The view type returned by this function
Sourcepub fn squeeze_all(&self) -> Result<TensorView<'_>>
pub fn squeeze_all(&self) -> Result<TensorView<'_>>
Returns a new tensor with all dimensions of size one removed.
This is a convenience function that removes all dimensions of size 1 from the tensor.
It’s equivalent to calling squeeze with all dimension indices.
§Returns
A Result<TensorView> containing:
Ok(TensorView): A new view with all size-1 dimensions removed
Err: If there’s an error during the squeeze operation
§Examples
use slsl::Tensor;
// Tensor with multiple dimensions of size 1
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [1, 4, 1, 1])?;
// Remove all dimensions of size 1
let squeezed = tensor.squeeze_all()?;
assert_eq!(squeezed.dims(), [4]);
// Tensor with no dimensions of size 1
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let squeezed = tensor_2d.squeeze_all()?;
assert_eq!(squeezed.dims(), [2, 2]); // No change
§Notes
- This operation is memory-efficient as it returns a view rather than copying data
- If all dimensions are of size 1, a scalar tensor (shape [1]) is returned
- This is equivalent to self.squeeze(0..self.rank())
§See Also
Self::squeeze: Remove specific dimensions of size 1
Self::unsqueeze: Add dimensions of size 1
Sourcepub fn unsqueeze<D: Dim>(&self, dim: D) -> Result<TensorView<'_>>
pub fn unsqueeze<D: Dim>(&self, dim: D) -> Result<TensorView<'_>>
Returns a new tensor with a dimension of size one inserted at the specified position.
This function creates a new view of the tensor with an additional dimension of size 1 inserted at the specified position. The returned tensor shares the same underlying data as the original tensor, making this a zero-copy operation.
§Arguments
dim - The position at which to insert the new dimension. Must be in the range [-rank-1, rank].
- For a 1D tensor [4], valid values are [-2, 1]
- For a 2D tensor [2, 2], valid values are [-3, 2]
- Negative indices count from the end: -1 means the last position, -2 means the second-to-last, etc.
§Returns
A Result<TensorView> containing:
Ok(TensorView): A new view with the inserted dimension
Err: If the dimension index is out of bounds
§Examples
use slsl::Tensor;
// 1D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3, 4], [4])?;
// Insert at beginning (dimension 0)
let unsqueezed = tensor.unsqueeze(0)?;
assert_eq!(unsqueezed.dims(), [1, 4]);
// Insert at end (dimension 1)
let unsqueezed = tensor.unsqueeze(1)?;
assert_eq!(unsqueezed.dims(), [4, 1]);
// Using negative indices
let unsqueezed = tensor.unsqueeze(-1)?; // Same as unsqueeze(1)
assert_eq!(unsqueezed.dims(), [4, 1]);
let unsqueezed = tensor.unsqueeze(-2)?; // Same as unsqueeze(0)
assert_eq!(unsqueezed.dims(), [1, 4]);
// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let unsqueezed = tensor_2d.unsqueeze(1)?;
assert_eq!(unsqueezed.dims(), [2, 1, 2]);
§Notes
- This operation is memory-efficient as it returns a view rather than copying data
- The stride for the new dimension is set to 0 since its size is 1
- The function follows PyTorch’s unsqueeze behavior for dimension indexing
- For out-of-bounds dimensions, the function will return an error rather than silently inserting at the end, ensuring user intent is clear
§See Also
Self::squeeze: Remove dimensions of size 1
TensorView: The view type returned by this function
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn stack<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>
pub fn stack<D: Dim>(tensors: &[&Self], dim: D) -> Result<Tensor>
Stacks a sequence of tensors along a new dimension.
This function creates a new tensor by stacking the input tensors along a new dimension. All input tensors must have the same shape and data type. The resulting tensor will have one more dimension than the input tensors.
§Arguments
tensors - A slice of tensor references to stack. Must not be empty.
dim - The dimension along which to stack. Can be in the range [0, rank], where rank is the rank of the input tensors. A value of rank will insert the new dimension at the end.
§Returns
A Result<Tensor> containing:
Ok(Tensor): A new tensor with the stacked data
Err: If tensors have different shapes/dtypes, or if the dimension is invalid
§Examples
use slsl::Tensor;
// Stack 1D tensors
let tensor1 = Tensor::from_vec(vec![1, 2, 3], [3])?;
let tensor2 = Tensor::from_vec(vec![4, 5, 6], [3])?;
let tensor3 = Tensor::from_vec(vec![7, 8, 9], [3])?;
let tensors = vec![&tensor1, &tensor2, &tensor3];
// Stack along dimension 0 (beginning)
let stacked = Tensor::stack(&tensors, 0)?;
assert_eq!(stacked.dims(), [3, 3]);
assert_eq!(stacked.to_flat_vec::<i32>()?, vec![1, 2, 3, 4, 5, 6, 7, 8, 9]);
// Stack along dimension 1 (end)
let stacked = Tensor::stack(&tensors, 1)?;
assert_eq!(stacked.dims(), [3, 3]);
assert_eq!(stacked.to_flat_vec::<i32>()?, vec![1, 4, 7, 2, 5, 8, 3, 6, 9]);
// Stack 2D tensors
let tensor_2d1 = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
let tensor_2d2 = Tensor::from_vec(vec![5, 6, 7, 8], [2, 2])?;
let tensors_2d = vec![&tensor_2d1, &tensor_2d2];
let stacked_2d = Tensor::stack(&tensors_2d, 0)?;
assert_eq!(stacked_2d.dims(), [2, 2, 2]);
§Notes
- All input tensors must have identical shapes and data types
- The new dimension size equals the number of input tensors
- The function follows PyTorch’s stack behavior
- For out-of-bounds dimensions, the new dimension is inserted at the end
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn standardize<T: TensorElement + Float>(
&self,
mean: &[T],
std: &[T],
dim: impl Dim,
) -> Result<Tensor>
pub fn standardize<T: TensorElement + Float>( &self, mean: &[T], std: &[T], dim: impl Dim, ) -> Result<Tensor>
Standardizes the tensor by subtracting the mean and dividing by the standard deviation along a specified dimension.
§Arguments
mean - The mean(s) to subtract. Length must match the size of the channel dimension.
std - The standard deviation(s) to divide by. Length must match the size of the channel dimension.
dim - The dimension along which to apply standardization (e.g., channel dimension).
§Errors
Returns an error if:
- mean or std are empty.
- mean and std have different lengths.
- The dtype of mean and std does not match the tensor’s dtype.
- Any value in std is zero.
- dim is out of bounds for the tensor’s rank.
- Length of mean/std doesn’t match the size of the specified channel dimension.
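The per-channel arithmetic can be sketched for the HWC case, where the channel index cycles fastest. This is an illustrative helper (`standardize_hwc` is an assumption for the sketch, not slsl API):

```rust
/// Illustrative sketch: per-channel standardization of a flat HWC buffer
/// (channel is the last, fastest-varying dimension).
fn standardize_hwc(
    data: &[f32],
    channels: usize,
    mean: &[f32],
    std: &[f32],
) -> Result<Vec<f32>, String> {
    if mean.len() != channels || std.len() != channels {
        return Err("mean/std length must match the channel dimension".to_string());
    }
    if std.iter().any(|&s| s == 0.0) {
        return Err("std must be non-zero".to_string());
    }
    // In HWC layout, element i belongs to channel i % channels.
    Ok(data
        .iter()
        .enumerate()
        .map(|(i, &x)| (x - mean[i % channels]) / std[i % channels])
        .collect())
}
```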
§Examples
use slsl::Tensor;
// HWC format: Height x Width x Channels (dim = 2)
let hwc_data = vec![
10.0, 20.0, 30.0, // Pixel (0,0), Channels R, G, B
40.0, 50.0, 60.0, // Pixel (0,1), Channels R, G, B
70.0, 80.0, 90.0, // Pixel (1,0), Channels R, G, B
100.0, 110.0, 120.0, // Pixel (1,1), Channels R, G, B
];
let hwc_tensor = Tensor::from_vec(hwc_data, [2, 2, 3]).unwrap();
let mean_rgb = [65.0, 75.0, 85.0];
let std_rgb = [30.0, 30.0, 30.0];
let standardized_hwc = hwc_tensor.standardize(&mean_rgb, &std_rgb, 2).unwrap();
// CHW format: Channels x Height x Width (dim = 0)
let chw_data = vec![
10.0, 40.0, 70.0, 100.0, // Channel R: all pixels
20.0, 50.0, 80.0, 110.0, // Channel G: all pixels
30.0, 60.0, 90.0, 120.0, // Channel B: all pixels
];
let chw_tensor = Tensor::from_vec(chw_data, [3, 2, 2]).unwrap();
let standardized_chw = chw_tensor.standardize(&mean_rgb, &std_rgb, 0).unwrap();
This function’s behavior is similar to PyTorch’s torchvision.transforms.functional.normalize.
For more details, refer to the PyTorch documentation.
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn tile<D: Into<Shape>>(&self, repeats: D) -> Result<Tensor>
pub fn tile<D: Into<Shape>>(&self, repeats: D) -> Result<Tensor>
Constructs a tensor by repeating the elements of self according to the specified pattern.
This function creates a new tensor where each element of the input tensor is repeated
according to the repeats argument. The repeats argument specifies the number of
repetitions in each dimension.
§Arguments
repeats - An array specifying the number of repetitions for each dimension. Can be converted into Shape (e.g., [2, 3], vec![2, 3], etc.)
§Behavior
- Fewer dimensions in repeats: If repeats has fewer dimensions than self, ones are prepended to repeats until all dimensions are specified.
  - Example: self shape (8, 6, 4, 2), repeats [2, 2] → treated as [1, 1, 2, 2]
- More dimensions in repeats: If self has fewer dimensions than repeats, self is treated as if it were unsqueezed at dimension zero until it has as many dimensions as repeats specifies.
  - Example: self shape (4, 2), repeats [3, 3, 2, 2] → self treated as (1, 1, 4, 2)
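The alignment rule above amounts to left-padding the shorter of the two lists with ones and multiplying element-wise. A minimal sketch (`tile_shape` is a hypothetical helper, not slsl API):

```rust
/// Illustrative sketch: align `shape` and `repeats` to the same rank by
/// left-padding the shorter one with ones, then multiply element-wise
/// to get the tiled output shape.
fn tile_shape(shape: &[usize], repeats: &[usize]) -> Vec<usize> {
    let rank = shape.len().max(repeats.len());
    let pad = |v: &[usize]| {
        let mut p = vec![1; rank - v.len()];
        p.extend_from_slice(v);
        p
    };
    pad(shape).iter().zip(pad(repeats)).map(|(s, r)| s * r).collect()
}
```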
§Returns
A Result<Tensor> containing:
Ok(Tensor): A new tensor with the repeated data
Err: If there’s an error during the tiling operation
§Examples
use slsl::Tensor;
// 1D tensor
let tensor = Tensor::from_vec(vec![1, 2, 3], [3])?;
// Repeat 2 times
let tiled = tensor.tile([2])?;
assert_eq!(tiled.dims(), [6]);
assert_eq!(tiled.to_flat_vec::<i32>()?, vec![1, 2, 3, 1, 2, 3]);
// 2D tensor
let tensor_2d = Tensor::from_vec(vec![1, 2, 3, 4], [2, 2])?;
// Repeat 2 times in each dimension
let tiled = tensor_2d.tile([2, 2])?;
assert_eq!(tiled.dims(), [4, 4]);
// Repeat with different counts
let tiled = tensor_2d.tile([3, 1])?;
assert_eq!(tiled.dims(), [6, 2]);
§Notes
- This operation creates a new tensor with copied data (not a view)
- The resulting tensor size is the element-wise product of self.shape and repeats
- The function follows PyTorch’s tile behavior
- All supported data types are handled automatically
§See Also
- Note: the repeat function is not yet implemented in this library
- PyTorch tile: https://pytorch.org/docs/stable/generated/torch.tile.html
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
pub fn min_max_argmin_argmax<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>
pub fn min_max_argmin_argmax_keepdim<D: Dim + Clone>( &self, dim: D, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>
pub fn min_max_argmin_argmax_impl<D: Dim + Clone>( &self, dim: D, keepdim: bool, ) -> Result<(Tensor, Tensor, Tensor, Tensor)>
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn can_reduce_over_last_dims(&self, dim_indices: &[usize]) -> bool
pub fn can_reduce_over_last_dims(&self, dim_indices: &[usize]) -> bool
Check if we can reduce over dimensions efficiently using contiguous memory access
Source§impl<'a> TensorBase<&'a Storage>
Implementation for TensorView - reuse Tensor logic
impl<'a> TensorBase<&'a Storage>
Implementation for TensorView - reuse Tensor logic
Sourcepub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'a>
pub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'a>
Creates a slice view of this tensor view.
This method provides zero-copy slicing capabilities for tensor views.
§Parameters
specs- The slice specification (indices, ranges, tuples, etc.)
§Returns
A new TensorView representing the sliced portion.
§Examples
use slsl::{s, Tensor};
let tensor = Tensor::zeros::<f32>([4, 6])?;
let view = tensor.view();
// Single index
let row = view.slice(1);
assert_eq!(row.shape().as_slice(), &[6]);
// Range slice using s! macro
let rows = view.slice(s![1..3]);
assert_eq!(rows.shape().as_slice(), &[2, 6]);
Source§impl TensorBase<Storage>
impl TensorBase<Storage>
Sourcepub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'_>
pub fn slice<S: IntoSliceElem + Copy>(&self, specs: S) -> TensorView<'_>
Creates a slice view of this tensor with optimized performance.
This method provides efficient tensor slicing with fast paths for common patterns. Creates a zero-copy view without memory allocation.
§Parameters
specs- The slice specification (index, range with s! macro, tuple, etc.)
§Returns
A TensorView representing the sliced portion of the tensor.
§Examples
use slsl::{s, Tensor};
let tensor = Tensor::zeros::<f32>([4, 6])?;
// Single index
let row = tensor.slice(1);
assert_eq!(row.shape().as_slice(), &[6]);
// Range slice using s! macro
let rows = tensor.slice(s![1..3]);
assert_eq!(rows.shape().as_slice(), &[2, 6]);
pub fn compute_slice( shape: &Shape, strides: &Stride, slice: SliceSpecs, ) -> (Shape, Stride, usize)
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Sourcepub fn softmax<D: Dim>(&self, dim: D) -> Result<Tensor>
pub fn softmax<D: Dim>(&self, dim: D) -> Result<Tensor>
Computes the softmax function along the specified dimension
The softmax function is defined as: softmax(x_i) = exp(x_i) / sum(exp(x_j)) for all j
This implementation uses the numerically stable version: softmax(x_i) = exp(x_i - max(x)) / sum(exp(x_j - max(x)))
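The stable formula can be sketched over a 1-D slice. This is an illustrative standalone helper (`softmax_stable` is an assumption for the sketch), not the library's dimension-wise implementation:

```rust
/// Illustrative sketch of numerically stable softmax:
/// subtract the max before exponentiating so exp() never overflows.
fn softmax_stable(xs: &[f32]) -> Vec<f32> {
    let max = xs.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = xs.iter().map(|x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}
```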
§Arguments
dim- The dimension along which to compute the softmax
§Returns
A new tensor with the same shape and dtype as the input, containing the softmax values
§Examples
use slsl::Tensor;
let x = Tensor::from_vec(vec![1.0f32, 2.0, 3.0], [3]).unwrap();
let result = x.softmax(-1).unwrap();
Source§impl<S: StorageTrait> TensorBase<S>
impl<S: StorageTrait> TensorBase<S>
Trait Implementations§
Source§impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Add<&TensorBase<S2>> for TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Add<TensorBase<S2>> for TensorBase<S1>
Source§impl<S: Clone + StorageTrait> Clone for TensorBase<S>
impl<S: Clone + StorageTrait> Clone for TensorBase<S>
Source§fn clone(&self) -> TensorBase<S>
fn clone(&self) -> TensorBase<S>
1.0.0 · Source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source. Read more
Source§impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Div<&TensorBase<S2>> for TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Div<TensorBase<S2>> for TensorBase<S1>
Source§impl<'a, S: StorageTrait> IntoIterator for &'a TensorBase<S>
impl<'a, S: StorageTrait> IntoIterator for &'a TensorBase<S>
Source§impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Mul<&TensorBase<S2>> for TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Mul<TensorBase<S2>> for TensorBase<S1>
Source§impl<S: StorageTrait> Neg for &TensorBase<S>
impl<S: StorageTrait> Neg for &TensorBase<S>
Source§impl<S: StorageTrait> Neg for TensorBase<S>
impl<S: StorageTrait> Neg for TensorBase<S>
Source§impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Sub<&TensorBase<S2>> for TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for &TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for &TensorBase<S1>
Source§impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for TensorBase<S1>
impl<S1: StorageTrait, S2: StorageTrait> Sub<TensorBase<S2>> for TensorBase<S1>
impl<S: StorageTrait + Send> Send for TensorBase<S>
impl<S: StorageTrait + Sync> Sync for TensorBase<S>
Auto Trait Implementations§
impl<S> Freeze for TensorBase<S> where S: Freeze,
impl<S> RefUnwindSafe for TensorBase<S> where S: RefUnwindSafe,
impl<S> Unpin for TensorBase<S> where S: Unpin,
impl<S> UnwindSafe for TensorBase<S> where S: UnwindSafe,
Blanket Implementations§
Source§impl<T> BorrowMut<T> for T where T: ?Sized,
impl<T> BorrowMut<T> for T where T: ?Sized,
Source§fn borrow_mut(&mut self) -> &mut T
fn borrow_mut(&mut self) -> &mut T
Source§impl<T> CloneToUninit for T where T: Clone,
impl<T> CloneToUninit for T where T: Clone,
Source§impl<T> IntoEither for T
impl<T> IntoEither for T
Source§fn into_either(self, into_left: bool) -> Either<Self, Self>
fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more
Source§fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true.
Converts self into a Right variant of Either<Self, Self> otherwise. Read more