Struct Shape
pub struct Shape {
    pub dims: Vec<usize>,
    pub size: usize,
    pub strides: Vec<usize>,
    pub layout: MemoryLayout,
}

Represents the shape/dimensions of a tensor with stride tracking

This struct holds the dimensional information for a tensor, including the size of each dimension, memory strides, and layout information for efficient view operations and transformations. The shape system enables zero-copy tensor views and optimized memory access patterns.

§Key Features

  • Dimension Management: Multi-dimensional shape representation
  • Stride Calculation: Efficient memory access pattern computation
  • Layout Tracking: Contiguous, strided, and view layout types
  • Broadcasting: NumPy-compatible broadcasting rules
  • Memory Safety: Bounds checking and validation

§Performance Characteristics

  • Zero-Cost Layout: Layout information computed once and cached
  • Efficient Strides: Row-major stride calculation for optimal access
  • Memory Access: O(rank) offset calculation for multi-dimensional indices
  • View Efficiency: Zero-copy view creation with minimal overhead

§Examples

use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.size, 24);
assert!(shape.is_contiguous());
assert_eq!(shape.strides(), &[12, 4, 1]);

Fields§

§dims: Vec<usize>

The dimensions of the tensor (e.g., [2, 3, 4] for a 2x3x4 tensor)

§size: usize

Total number of elements in the tensor

§strides: Vec<usize>

Memory strides for each dimension (elements between consecutive indices). For a contiguous tensor with shape [2, 3, 4], the strides would be [12, 4, 1].

§layout: MemoryLayout

Memory layout type for optimization decisions

Implementations§

impl Shape

pub fn new(dims: Vec<usize>) -> Self

Creates a new contiguous shape from a vector of dimensions

Computes the total size and contiguous strides for the given dimensions. The resulting shape uses row-major memory layout optimized for cache efficiency and SIMD operations.

§Arguments
  • dims - Vector of dimension sizes defining the tensor shape
§Returns

A new Shape with calculated size, contiguous strides, and contiguous layout

§Performance
  • Time Complexity: O(rank) for stride calculation
  • Memory: Single allocation for dimensions and strides
  • Optimization: Row-major layout for cache efficiency
§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.size, 24);
assert!(shape.is_contiguous());
assert_eq!(shape.strides(), &[12, 4, 1]);
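The row-major stride calculation this constructor performs can be sketched in standalone Rust. This is an illustration of the algorithm only, not train_station's actual implementation; `contiguous_strides` is a hypothetical helper name:

```rust
/// Row-major (C-order) strides: the last dimension has stride 1, and each
/// earlier dimension's stride is the product of all later dimension sizes.
fn contiguous_strides(dims: &[usize]) -> Vec<usize> {
    let mut strides = vec![0; dims.len()];
    let mut acc = 1;
    // Walk the dimensions from right to left, accumulating the element count.
    for i in (0..dims.len()).rev() {
        strides[i] = acc;
        acc *= dims[i];
    }
    strides
}

fn main() {
    let dims = [2usize, 3, 4];
    let strides = contiguous_strides(&dims);
    let size: usize = dims.iter().product();
    // Matches the documented example: shape [2, 3, 4] -> strides [12, 4, 1], size 24.
    assert_eq!(strides, vec![12, 4, 1]);
    assert_eq!(size, 24);
    println!("strides = {:?}, size = {}", strides, size);
}
```

Note that the loop runs once per dimension, which is the O(rank) cost stated above.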

pub fn with_strides(dims: Vec<usize>, strides: Vec<usize>) -> Self

Creates a new shape with custom strides

Creates a shape with user-defined strides for non-contiguous memory layouts. Automatically detects if the strides represent a contiguous layout and sets the appropriate layout type.

§Arguments
  • dims - Vector of dimension sizes defining the tensor shape
  • strides - Vector of memory strides for each dimension
§Returns

A new Shape with the given dimensions and strides, with layout type automatically determined

§Panics

Panics if dimensions and strides have different lengths

§Performance
  • Layout Detection: O(rank) comparison with contiguous strides
  • Memory: Single allocation for shape data
  • Optimization: Automatic layout type detection
§Examples
use train_station::tensor::Shape;

let shape = Shape::with_strides(vec![2, 3], vec![6, 2]);
assert_eq!(shape.size, 6);
assert!(!shape.is_contiguous());
assert_eq!(shape.strides(), &[6, 2]);
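The automatic layout detection described above amounts to comparing the supplied strides against the row-major strides the dimensions alone would produce. A minimal sketch, assuming this comparison is how detection works (not the crate's actual code):

```rust
/// A stride layout is contiguous iff it equals the row-major strides
/// computed from the dimensions alone.
fn strides_are_contiguous(dims: &[usize], strides: &[usize]) -> bool {
    assert_eq!(dims.len(), strides.len(), "dims and strides must have equal length");
    let mut acc = 1;
    for i in (0..dims.len()).rev() {
        if strides[i] != acc {
            return false;
        }
        acc *= dims[i];
    }
    true
}

fn main() {
    // Shape [2, 3] is contiguous with strides [3, 1] ...
    assert!(strides_are_contiguous(&[2, 3], &[3, 1]));
    // ... but not with the custom strides [6, 2] from the example above.
    assert!(!strides_are_contiguous(&[2, 3], &[6, 2]));
    println!("layout checks passed");
}
```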

pub fn as_view(dims: Vec<usize>, strides: Vec<usize>) -> Self

Creates a view shape (non-contiguous reference to existing tensor)

Creates a shape representing a view of existing tensor data with custom dimensions and strides. View shapes enable zero-copy tensor transformations by sharing memory with the original tensor.

§Arguments
  • dims - Vector of dimension sizes for the view
  • strides - Vector of memory strides for the view
§Returns

A new Shape marked as a view with the given dimensions and strides

§Panics

Panics if dimensions and strides have different lengths

§Performance
  • Zero-Copy: No data copying, only metadata creation
  • Memory Efficient: Shares memory with original tensor
  • View Optimization: Enables view-specific operation optimizations
§Examples
use train_station::tensor::Shape;

let view_shape = Shape::as_view(vec![2, 2], vec![4, 1]);
assert!(view_shape.is_view());
assert!(!view_shape.is_contiguous());

pub fn rank(&self) -> usize

Returns the number of dimensions (rank) of the tensor

§Returns

The number of dimensions in the tensor shape

§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.rank(), 3);

pub fn is_contiguous(&self) -> bool

Checks if the tensor has contiguous memory layout

§Returns

true if the tensor data is stored contiguously in memory

§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert!(shape.is_contiguous());

pub fn is_view(&self) -> bool

Checks if the tensor is a view of another tensor

§Returns

true if this tensor is a view (non-contiguous reference)

§Examples
use train_station::tensor::Shape;

let view_shape = Shape::as_view(vec![2, 2], vec![4, 1]);
assert!(view_shape.is_view());

pub fn stride(&self, dim: usize) -> usize

Gets the memory stride for a specific dimension

§Arguments
  • dim - The dimension index
§Returns

The memory stride for the given dimension

§Panics

Panics if dim is out of bounds

§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.stride(0), 12);
assert_eq!(shape.stride(1), 4);
assert_eq!(shape.stride(2), 1);

pub fn strides(&self) -> &[usize]

Gets all memory strides

§Returns

Reference to the stride vector

§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.strides(), &[12, 4, 1]);

pub fn layout(&self) -> &MemoryLayout

Gets the memory layout type

§Returns

Reference to the memory layout

§Implementation Details

This method returns the memory layout type which can be used for optimization decisions in tensor operations.


pub fn offset(&self, indices: &[usize]) -> usize

Calculates the linear memory offset for given indices

Computes the linear memory offset for multi-dimensional tensor indices using the stored stride information. This enables efficient direct memory access for tensor operations.

§Arguments
  • indices - Vector of indices for each dimension
§Returns

Linear memory offset for the given multi-dimensional indices

§Panics

Panics if indices length doesn’t match tensor rank

§Performance
  • Time Complexity: O(rank) for offset calculation
  • Memory: No allocation, uses existing stride data
  • Optimization: Efficient dot product of indices and strides
§Examples
use train_station::tensor::Shape;

let shape = Shape::new(vec![2, 3, 4]);
let offset = shape.offset(&[1, 2, 3]);
assert_eq!(offset, 12 + 8 + 3); // 1*12 + 2*4 + 3*1
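The "dot product of indices and strides" described above can be shown as a standalone sketch (an illustration of the computation, not the crate's code; `offset` here is a free function rather than the method):

```rust
/// Linear memory offset as the dot product of indices and strides.
fn offset(indices: &[usize], strides: &[usize]) -> usize {
    assert_eq!(indices.len(), strides.len(), "index rank must match tensor rank");
    indices.iter().zip(strides).map(|(i, s)| i * s).sum()
}

fn main() {
    // Shape [2, 3, 4] has row-major strides [12, 4, 1], so index [1, 2, 3]
    // lands at 1*12 + 2*4 + 3*1 = 23, matching the documented example.
    assert_eq!(offset(&[1, 2, 3], &[12, 4, 1]), 23);
    println!("offset = {}", offset(&[1, 2, 3], &[12, 4, 1]));
}
```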

pub fn is_broadcastable_with(&self, other: &Shape) -> bool

Checks if this shape is broadcastable with another shape

Determines if two shapes can be broadcast together according to NumPy broadcasting rules. Broadcasting enables element-wise operations between tensors with different shapes by expanding singleton dimensions.

§Arguments
  • other - The other shape to check broadcasting compatibility
§Returns

true if the shapes are broadcastable according to NumPy broadcasting rules

§Performance
  • Time Complexity: O(max(rank1, rank2)) for broadcasting check
  • Memory: No allocation, uses existing dimension data
  • Optimization: Right-aligned dimension comparison
§Broadcasting Rules
  • Dimensions are compared from right to left
  • Dimensions are compatible if they are equal, or one is 1
  • Missing dimensions are treated as 1
§Examples
use train_station::tensor::Shape;

let shape1 = Shape::new(vec![2, 3, 4]);
let shape2 = Shape::new(vec![1, 3, 4]);
assert!(shape1.is_broadcastable_with(&shape2));

let shape3 = Shape::new(vec![4]);
assert!(shape1.is_broadcastable_with(&shape3));
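The broadcasting rules listed above can be sketched as a right-aligned, pairwise comparison of dimension sizes (a standalone illustration of the NumPy rule, not the crate's implementation):

```rust
/// NumPy-style broadcast check: compare dimensions from right to left; a pair
/// is compatible if the sizes are equal or either is 1. Because `zip` stops at
/// the shorter shape, missing leading dimensions are implicitly treated as 1.
fn broadcastable(a: &[usize], b: &[usize]) -> bool {
    a.iter()
        .rev()
        .zip(b.iter().rev())
        .all(|(&x, &y)| x == y || x == 1 || y == 1)
}

fn main() {
    // Mirrors the documented examples.
    assert!(broadcastable(&[2, 3, 4], &[1, 3, 4]));
    assert!(broadcastable(&[2, 3, 4], &[4])); // missing dims treated as 1
    assert!(!broadcastable(&[2, 3, 4], &[3, 3])); // 4 vs 3 mismatch
    println!("broadcasting checks passed");
}
```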

Trait Implementations§

impl Clone for Shape

fn clone(&self) -> Shape

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for Shape

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl FromFieldValue for Shape

fn from_field_value(value: FieldValue, field_name: &str) -> SerializationResult<Self>

Converts a FieldValue to a Shape during deserialization.

§Arguments
  • value - FieldValue containing the shape object
  • field_name - Name of the field, used for error reporting
§Returns

A Shape instance, or an error if the value is invalid

impl PartialEq for Shape

fn eq(&self, other: &Shape) -> bool

Tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

Tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl ToFieldValue for Shape

fn to_field_value(&self) -> FieldValue

Converts a Shape to a FieldValue for serialization.

§Returns

An object containing all shape metadata

impl Eq for Shape

impl StructuralPartialEq for Shape

Auto Trait Implementations§

impl Freeze for Shape

impl RefUnwindSafe for Shape

impl Send for Shape

impl Sync for Shape

impl Unpin for Shape

impl UnwindSafe for Shape

Blanket Implementations§

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> CloneToUninit for T
where T: Clone,

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬 This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.