Enum easy_ml::tensors::views::DataLayout

pub enum DataLayout<const D: usize> {
    Linear([Dimension; D]),
    NonLinear,
    Other,
}
How the data in the tensor is laid out in memory.
Variants
Linear([Dimension; D])
The data is laid out in linear storage in memory, such that we could take a slice over the entire data specified by our view_shape.

The D length array specifies the dimensions in the view_shape in the order of most significant dimension (in memory) to least.
In general, an array of dimension names in the same order as the view shape is big endian order (the order of dimensions in the view shape is most significant to least) and will be iterated through efficiently by TensorIterator, whereas an array of dimension names in the reverse of the view shape is little endian order (the order of dimensions in the view shape is least significant to most).
In memory, the data will have some order such that if we repeatedly take one step through memory from the first value to the last, there will be a most significant dimension whose index increases most slowly, through to a least significant dimension whose index varies most rapidly.
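As a sketch of what "most significant to least" means concretely (using only the standard library, not easy_ml's API): the memory offset of an index is the sum of each dimension's index multiplied by the product of the lengths of every less significant dimension to its right.

fn flat_offset<const D: usize>(shape: [usize; D], indexes: [usize; D]) -> usize {
    let mut offset = 0;
    let mut stride = 1;
    // Walk from the least significant (most rapidly varying, rightmost)
    // dimension to the most significant (leftmost) one.
    for d in (0..D).rev() {
        offset += indexes[d] * stride;
        stride *= shape[d];
    }
    offset
}

fn main() {
    // With a shape of [2, 2, 3], one step in memory increments the least
    // significant dimension's index first.
    assert_eq!(flat_offset([2, 2, 3], [0, 0, 1]), 1);
    assert_eq!(flat_offset([2, 2, 3], [0, 1, 0]), 3);
    assert_eq!(flat_offset([2, 2, 3], [1, 0, 0]), 6);
}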
In most of Easy ML’s Tensors, the view_shape dimensions would already be in the order of most significant dimension to least (since Tensor stores its data in big endian order), so the list of dimension names will just match the order of the view shape.
For example, a tensor with a view shape of [("batch", 2), ("row", 2), ("column", 3)] that stores its data in most significant to least order would be indexed like:
[
(0,0,0), (0,0,1), (0,0,2),
(0,1,0), (0,1,1), (0,1,2),
(1,0,0), (1,0,1), (1,0,2),
(1,1,0), (1,1,1), (1,1,2)
]
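A hedged sketch of querying this from easy_ml itself, assuming the Tensor::from constructor and the TensorRef::data_layout method (check the current crate docs for exact paths and signatures):

use easy_ml::tensors::Tensor;
use easy_ml::tensors::views::{DataLayout, TensorRef};

fn main() {
    // Tensor stores its data in most significant to least order, so the
    // Linear array should match the view shape order.
    let tensor = Tensor::from(
        [("batch", 2), ("row", 2), ("column", 3)],
        (0..12).collect::<Vec<i32>>(),
    );
    assert_eq!(
        tensor.data_layout(),
        DataLayout::Linear(["batch", "row", "column"])
    );
}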
To take one step in memory, we would increment the right most dimension index (“column”), counting our way up through to the left most dimension index (“batch”). If we changed this tensor to [("column", 3), ("row", 2), ("batch", 2)] so that the view_shape was swapped to least significant dimension to most but the data remained in the same order, our tensor would still have a DataLayout of Linear(["batch", "row", "column"]), since the indexes in the transposed view_shape correspond to an actual memory layout that’s completely reversed. Alternatively, you could say we reversed the view shape but the memory layout never changed:
[
(0,0,0), (1,0,0), (2,0,0),
(0,1,0), (1,1,0), (2,1,0),
(0,0,1), (1,0,1), (2,0,1),
(0,1,1), (1,1,1), (2,1,1)
]
To take one step in memory, we now need to increment the left most dimension index on the view shape (“column”), counting our way in reverse through to the right most dimension index (“batch”).
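The same sketch idea for the reversed case (again std only, not easy_ml's API): with the view shape swapped to least significant to most, strides grow from left to right instead of right to left.

fn flat_offset_reversed<const D: usize>(shape: [usize; D], indexes: [usize; D]) -> usize {
    let mut offset = 0;
    let mut stride = 1;
    // The leftmost view dimension is now the least significant in memory.
    for d in 0..D {
        offset += indexes[d] * stride;
        stride *= shape[d];
    }
    offset
}

fn main() {
    // View shape [("column", 3), ("row", 2), ("batch", 2)]: one step in
    // memory increments the leftmost ("column") index, matching the
    // listing above.
    assert_eq!(flat_offset_reversed([3, 2, 2], [1, 0, 0]), 1);
    assert_eq!(flat_offset_reversed([3, 2, 2], [0, 1, 0]), 3);
    assert_eq!(flat_offset_reversed([3, 2, 2], [0, 0, 1]), 6);
}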
That ["batch", "row", "column"] is also exactly the order you would need to swap your dimensions on the view_shape to get back to most significant to least. A TensorAccess could reorder the tensor by this array order to get back to most significant to least ordering on the view_shape in order to iterate through the data efficiently.
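A minimal sketch of that reordering step (std only; a TensorAccess would do the equivalent internally): permute the view shape by the names reported in Linear to recover most significant to least order.

fn reorder<const D: usize>(
    view_shape: [(&'static str, usize); D],
    layout: [&'static str; D],
) -> [(&'static str, usize); D] {
    // Pick each dimension out of the view shape in the layout's order.
    layout.map(|name| {
        *view_shape
            .iter()
            .find(|(n, _)| *n == name)
            .expect("layout names must match the view shape")
    })
}

fn main() {
    let transposed = [("column", 3), ("row", 2), ("batch", 2)];
    let layout = ["batch", "row", "column"]; // as in DataLayout::Linear above
    assert_eq!(
        reorder(transposed, layout),
        [("batch", 2), ("row", 2), ("column", 3)]
    );
}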
NonLinear
The data is not laid out in linear storage in memory.
Other
The data is not laid out in a linear or non linear way, or we don’t know how it’s laid out.
Trait Implementations
impl<const D: usize> Clone for DataLayout<D>

fn clone(&self) -> DataLayout<D>

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.

impl<const D: usize> Debug for DataLayout<D>
impl<const D: usize> PartialEq for DataLayout<D>

fn eq(&self, other: &DataLayout<D>) -> bool
This method tests for self and other values to be equal, and is used by ==.