pub enum DataLayout<const D: usize> {
    Linear([Dimension; D]),
    NonLinear,
    Other,
}

How the data in the tensor is laid out in memory.

Variants

Linear([Dimension; D])

The data is laid out in linear storage in memory, such that we could take a slice over the entire data specified by our view_shape.

The D length array specifies the dimensions in the view_shape in the order of most significant dimension (in memory) to least.

In general, an array of dimension names in the same order as the view shape indicates big endian order (the dimensions in the view shape run from most significant to least), and such data will be iterated through efficiently by TensorIterator. An array of dimension names in the reverse of the view shape order indicates little endian order (the dimensions in the view shape run from least significant to most).

In memory, the data will have some order such that if we repeatedly take one step through memory from the first value to the last, there will be a most significant dimension whose index always increases, through to a least significant dimension whose index varies most rapidly.

In most of Easy ML’s Tensors, the view_shape dimensions will already be in order from most significant to least (since Tensor stores its data in big endian order), so the list of dimension names will simply match the order of the view shape.

For example, a tensor with a view shape of [("batch", 2), ("row", 2), ("column", 3)] that stores its data in most significant to least order would be indexed like:

[
  (0,0,0), (0,0,1), (0,0,2),
  (0,1,0), (0,1,1), (0,1,2),
  (1,0,0), (1,0,1), (1,0,2),
  (1,1,0), (1,1,1), (1,1,2)
]
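
As a quick sketch (not part of Easy ML’s API), the flat memory offset behind each of those indexes can be computed with most significant to least (row-major) strides; the shape constants below simply mirror the example above:

// Sketch: flat memory offsets for a [("batch", 2), ("row", 2), ("column", 3)]
// view shape stored most significant to least.
fn offset(batch: usize, row: usize, column: usize) -> usize {
    let (rows, columns) = (2, 3);
    // "column" varies fastest, "batch" slowest.
    (batch * rows * columns) + (row * columns) + column
}

fn main() {
    assert_eq!(offset(0, 0, 1), 1); // one step in memory increments "column"
    assert_eq!(offset(0, 1, 0), 3); // "row" increments when "column" wraps
    assert_eq!(offset(1, 0, 0), 6); // "batch" increments last
}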

To take one step in memory, we would increment the right most dimension index (“column”), counting our way up through to the left most dimension index (“batch”). If we changed this tensor to [("column", 3), ("row", 2), ("batch", 2)], so that the view_shape ran from least significant dimension to most but the data remained in the same order, our tensor would still have a DataLayout of Linear(["batch", "row", "column"]), since the indexes in the transposed view_shape correspond to a memory layout that is completely reversed. Alternatively, you could say we reversed the view shape but the memory layout never changed:

[
  (0,0,0), (1,0,0), (2,0,0),
  (0,1,0), (1,1,0), (2,1,0),
  (0,0,1), (1,0,1), (2,0,1),
  (0,1,1), (1,1,1), (2,1,1)
]

To take one step in memory, we now need to increment the left most dimension index on the view shape (“column”), counting our way in reverse to the right most dimension index (“batch”).

That ["batch", "row", "column"] is also exactly the order you would need to reorder the view_shape dimensions into to get back to most significant to least. A TensorAccess could reorder the tensor by this array to restore most significant to least ordering on the view_shape and iterate through the data efficiently.
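
Easy ML’s TensorAccess can perform that reordering itself; as a standalone sketch of the idea, with dimension names as plain &'static str values as they are in Easy ML, the permutation back to most significant to least order can be recovered like this:

fn main() {
    // The transposed view_shape order from the example above.
    let view_shape = ["column", "row", "batch"];
    // The order reported by DataLayout::Linear for that tensor.
    let linear = ["batch", "row", "column"];

    // For each name in the linear (most significant to least) order,
    // find where it currently sits in the view_shape.
    let permutation: Vec<usize> = linear
        .iter()
        .map(|name| view_shape.iter().position(|d| d == name).unwrap())
        .collect();

    // Accessing the view_shape dimensions in this order restores most
    // significant to least ordering, so iteration steps through memory
    // one element at a time.
    assert_eq!(permutation, vec![2, 1, 0]);
}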

NonLinear

The data is not laid out in linear storage in memory.

Other

The data is not laid out in a way covered by the other variants, or we don’t know how it’s laid out.
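
As a rough, self-contained sketch of how a caller might branch on the three variants (the enum is redeclared locally here purely for illustration; real code would use Easy ML’s own DataLayout):

type Dimension = &'static str;

#[allow(dead_code)]
enum DataLayout<const D: usize> {
    Linear([Dimension; D]),
    NonLinear,
    Other,
}

// A Linear layout can be walked as one contiguous pass in the reported
// dimension order; anything else falls back to index-by-index access.
fn describe<const D: usize>(layout: &DataLayout<D>) -> String {
    match layout {
        DataLayout::Linear(order) => {
            format!("linear pass, most to least significant: {:?}", order)
        }
        DataLayout::NonLinear => String::from("non linear storage, index individually"),
        DataLayout::Other => String::from("unknown layout, index individually"),
    }
}

fn main() {
    let layout: DataLayout<3> = DataLayout::Linear(["batch", "row", "column"]);
    println!("{}", describe(&layout));
}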

Trait Implementations

impl<const D: usize> Clone for DataLayout<D>

fn clone(&self) -> DataLayout<D>

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl<const D: usize> Debug for DataLayout<D>

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl<const D: usize> PartialEq for DataLayout<D>

fn eq(&self, other: &DataLayout<D>) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl<const D: usize> Eq for DataLayout<D>

impl<const D: usize> StructuralPartialEq for DataLayout<D>

Auto Trait Implementations

impl<const D: usize> Freeze for DataLayout<D>

impl<const D: usize> RefUnwindSafe for DataLayout<D>

impl<const D: usize> Send for DataLayout<D>

impl<const D: usize> Sync for DataLayout<D>

impl<const D: usize> Unpin for DataLayout<D>

impl<const D: usize> UnwindSafe for DataLayout<D>

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T> ToOwned for T
where T: Clone,

type Owned = T

The resulting type after obtaining ownership.

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning.

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.