pub struct Shape {
    pub dims: Vec<usize>,
    pub size: usize,
    pub strides: Vec<usize>,
    pub layout: MemoryLayout,
}
Represents the shape/dimensions of a tensor with stride tracking
This struct holds the dimensional information for a tensor, including the size of each dimension, memory strides, and layout information for efficient view operations and transformations. The shape system enables zero-copy tensor views and optimized memory access patterns.
§Key Features
- Dimension Management: Multi-dimensional shape representation
- Stride Calculation: Efficient memory access pattern computation
- Layout Tracking: Contiguous, strided, and view layout types
- Broadcasting: NumPy-compatible broadcasting rules
- Memory Safety: Bounds checking and validation
§Performance Characteristics
- Zero-Cost Layout: Layout information computed once and cached
- Efficient Strides: Row-major stride calculation for optimal access
- Memory Access: O(1) offset calculation for multi-dimensional indices
- View Efficiency: Zero-copy view creation with minimal overhead
§Examples
use train_station::tensor::Shape;
let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.size, 24);
assert!(shape.is_contiguous());
assert_eq!(shape.strides(), &[12, 4, 1]);
§Fields
§dims: Vec<usize>
The dimensions of the tensor (e.g., [2, 3, 4] for a 2x3x4 tensor)
§size: usize
Total number of elements in the tensor
§strides: Vec<usize>
Memory strides for each dimension (elements between consecutive indices). For a contiguous tensor with shape [2, 3, 4], the strides are [12, 4, 1].
§layout: MemoryLayout
Memory layout type for optimization decisions
§Implementations
impl Shape
pub fn new(dims: Vec<usize>) -> Self
Creates a new contiguous shape from a vector of dimensions
Computes the total size and contiguous strides for the given dimensions. The resulting shape uses row-major memory layout optimized for cache efficiency and SIMD operations.
§Arguments
dims - Vector of dimension sizes defining the tensor shape
§Returns
A new Shape with calculated size, contiguous strides, and contiguous layout
§Performance
- Time Complexity: O(rank) for stride calculation
- Memory: Single allocation for dimensions and strides
- Optimization: Row-major layout for cache efficiency
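The row-major stride rule used by `new` can be sketched in plain Rust. This is an illustrative standalone helper (the name `contiguous_strides` is not part of the train_station API): the last dimension gets stride 1, and each earlier stride is the product of all later dimension sizes.

```rust
/// Row-major (C-order) strides: the last dimension has stride 1,
/// and each earlier stride is the product of all later dimension sizes.
fn contiguous_strides(dims: &[usize]) -> Vec<usize> {
    let mut strides = vec![1usize; dims.len()];
    for i in (0..dims.len().saturating_sub(1)).rev() {
        strides[i] = strides[i + 1] * dims[i + 1];
    }
    strides
}

fn main() {
    // Matches the documented example: shape [2, 3, 4] -> strides [12, 4, 1]
    assert_eq!(contiguous_strides(&[2, 3, 4]), vec![12, 4, 1]);
    // Total size is the product of the dimensions: 2 * 3 * 4 = 24
    let size: usize = [2, 3, 4].iter().product();
    assert_eq!(size, 24);
}
```

A single backward pass over the dimensions yields the O(rank) cost noted above.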
§Examples
use train_station::tensor::Shape;
let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.size, 24);
assert!(shape.is_contiguous());
assert_eq!(shape.strides(), &[12, 4, 1]);
pub fn with_strides(dims: Vec<usize>, strides: Vec<usize>) -> Self
Creates a new shape with custom strides
Creates a shape with user-defined strides for non-contiguous memory layouts. Automatically detects if the strides represent a contiguous layout and sets the appropriate layout type.
§Arguments
dims - Vector of dimension sizes defining the tensor shape
strides - Vector of memory strides for each dimension
§Returns
A new Shape with the given dimensions and strides, with layout type automatically determined
§Panics
Panics if dimensions and strides have different lengths
§Performance
- Layout Detection: O(rank) comparison with contiguous strides
- Memory: Single allocation for shape data
- Optimization: Automatic layout type detection
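The layout-detection step can be sketched as a right-to-left comparison against the strides a contiguous tensor would have. This is an assumption-laden sketch (the helper `strides_are_contiguous` is hypothetical, and it does not special-case size-1 dimensions, which can be contiguous under several stride values):

```rust
/// Detects whether the given strides describe a contiguous row-major
/// layout by walking dimensions right-to-left and tracking the stride
/// a contiguous tensor would have at each position.
fn strides_are_contiguous(dims: &[usize], strides: &[usize]) -> bool {
    assert_eq!(dims.len(), strides.len(), "dims/strides length mismatch");
    let mut expected = 1usize;
    for (&dim, &stride) in dims.iter().zip(strides.iter()).rev() {
        if stride != expected {
            return false;
        }
        expected *= dim;
    }
    true
}

fn main() {
    // The documented example: strides [6, 2] for shape [2, 3] are not
    // contiguous (contiguous strides would be [3, 1]).
    assert!(!strides_are_contiguous(&[2, 3], &[6, 2]));
    assert!(strides_are_contiguous(&[2, 3, 4], &[12, 4, 1]));
}
```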
§Examples
use train_station::tensor::Shape;
let shape = Shape::with_strides(vec![2, 3], vec![6, 2]);
assert_eq!(shape.size, 6);
assert!(!shape.is_contiguous());
assert_eq!(shape.strides(), &[6, 2]);
pub fn as_view(dims: Vec<usize>, strides: Vec<usize>) -> Self
Creates a view shape (non-contiguous reference to existing tensor)
Creates a shape representing a view of existing tensor data with custom dimensions and strides. View shapes enable zero-copy tensor transformations by sharing memory with the original tensor.
§Arguments
dims - Vector of dimension sizes for the view
strides - Vector of memory strides for the view
§Returns
A new Shape marked as a view with the given dimensions and strides
§Panics
Panics if dimensions and strides have different lengths
§Performance
- Zero-Copy: No data copying, only metadata creation
- Memory Efficient: Shares memory with original tensor
- View Optimization: Enables view-specific operation optimizations
§Examples
use train_station::tensor::Shape;
let view_shape = Shape::as_view(vec![2, 2], vec![4, 1]);
assert!(view_shape.is_view());
assert!(!view_shape.is_contiguous());
pub fn is_contiguous(&self) -> bool
Returns true if the shape uses a contiguous row-major memory layout
pub fn stride(&self, dim: usize) -> usize
Gets the memory stride for a specific dimension
§Arguments
dim - The dimension index
§Returns
The memory stride for the given dimension
§Panics
Panics if dim is out of bounds
§Examples
use train_station::tensor::Shape;
let shape = Shape::new(vec![2, 3, 4]);
assert_eq!(shape.stride(0), 12);
assert_eq!(shape.stride(1), 4);
assert_eq!(shape.stride(2), 1);
pub fn offset(&self, indices: &[usize]) -> usize
Calculates the linear memory offset for given indices
Computes the linear memory offset for multi-dimensional tensor indices using the stored stride information. This enables efficient direct memory access for tensor operations.
§Arguments
indices - Slice of indices, one for each dimension
§Returns
Linear memory offset for the given multi-dimensional indices
§Panics
Panics if indices length doesn’t match tensor rank
§Performance
- Time Complexity: O(rank) for offset calculation
- Memory: No allocation, uses existing stride data
- Optimization: Efficient dot product of indices and strides
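The dot product mentioned above can be sketched as a standalone function (the name `linear_offset` is illustrative, not part of the crate):

```rust
/// Linear memory offset as the dot product of indices and strides.
fn linear_offset(indices: &[usize], strides: &[usize]) -> usize {
    assert_eq!(indices.len(), strides.len(), "index count must match rank");
    indices.iter().zip(strides.iter()).map(|(i, s)| i * s).sum()
}

fn main() {
    // Shape [2, 3, 4] has strides [12, 4, 1]; index [1, 2, 3] maps to
    // 1*12 + 2*4 + 3*1 = 23.
    assert_eq!(linear_offset(&[1, 2, 3], &[12, 4, 1]), 23);
}
```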
§Examples
use train_station::tensor::Shape;
let shape = Shape::new(vec![2, 3, 4]);
let offset = shape.offset(&[1, 2, 3]);
assert_eq!(offset, 12 + 8 + 3); // 1*12 + 2*4 + 3*1
pub fn is_broadcastable_with(&self, other: &Shape) -> bool
Checks if this shape is broadcastable with another shape
Determines if two shapes can be broadcast together according to NumPy broadcasting rules. Broadcasting enables element-wise operations between tensors with different shapes by expanding singleton dimensions.
§Arguments
other - The other shape to check for broadcasting compatibility
§Returns
true if the shapes are broadcastable according to NumPy broadcasting rules
§Performance
- Time Complexity: O(max(rank1, rank2)) for broadcasting check
- Memory: No allocation, uses existing dimension data
- Optimization: Right-aligned dimension comparison
§Broadcasting Rules
- Dimensions are compared from right to left
- Dimensions are compatible if they are equal, or one is 1
- Missing dimensions are treated as 1
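The three rules above can be sketched as a right-to-left walk over two dimension slices (the free function `broadcastable` is a hypothetical stand-in for the method, written over plain slices so it is self-contained):

```rust
/// NumPy-style broadcasting check: compare dimensions right-to-left;
/// two dimensions are compatible if they are equal or either is 1.
fn broadcastable(a: &[usize], b: &[usize]) -> bool {
    let mut ai = a.iter().rev();
    let mut bi = b.iter().rev();
    loop {
        match (ai.next(), bi.next()) {
            (Some(&x), Some(&y)) => {
                if x != y && x != 1 && y != 1 {
                    return false;
                }
            }
            // One (or both) shapes exhausted: the remaining dimensions
            // broadcast against implicit 1s, so the shapes are compatible.
            _ => return true,
        }
    }
}

fn main() {
    assert!(broadcastable(&[2, 3, 4], &[1, 3, 4]));
    assert!(broadcastable(&[2, 3, 4], &[4])); // missing dims treated as 1
    assert!(!broadcastable(&[2, 3], &[4])); // 3 vs 4, neither is 1
}
```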
§Examples
use train_station::tensor::Shape;
let shape1 = Shape::new(vec![2, 3, 4]);
let shape2 = Shape::new(vec![1, 3, 4]);
assert!(shape1.is_broadcastable_with(&shape2));
let shape3 = Shape::new(vec![4]);
assert!(shape1.is_broadcastable_with(&shape3));