pub unsafe trait TensorRef<T, const D: usize> {
    // Required methods
    fn get_reference(&self, indexes: [usize; D]) -> Option<&T>;
    fn view_shape(&self) -> [(Dimension, usize); D];
    unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T;
    fn data_layout(&self) -> DataLayout<D>;
}

A shared/immutable reference to a tensor (or a portion of it) of some type and number of dimensions.

§Indexing

A TensorRef has a shape of type [(Dimension, usize); D]. This defines the valid indexes along each dimension name and length pair, from 0 inclusive to the length exclusive. If the shape was [("r", 2), ("c", 2)] the valid indexes would be [0,0], [0,1], [1,0] and [1,1]. Although the dimension names in each pair are used by many high level APIs, TensorRef uses the order of the dimensions, so the indexes ([usize; D]) these trait methods are called with must be in the same order as the shape. In general, a shape of [("a", a), ("b", b), ("c", c)...] is indexed from [0,0,0...] through to [a - 1, b - 1, c - 1...], regardless of how the data is actually laid out in memory.
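For illustration, the index ordering described above can be sketched as a small stand-alone function. This is not part of the easy-ml API; `Dimension` is assumed here to be a `&'static str` name, matching how shapes like [("r", 2), ("c", 2)] are written above.

```rust
// Stand-in for easy-ml's Dimension type (a dimension name).
type Dimension = &'static str;

/// Returns every valid index [0,0,..] through [a - 1, b - 1, ..] for a shape,
/// with the last (least significant) dimension varying fastest.
fn valid_indexes<const D: usize>(shape: [(Dimension, usize); D]) -> Vec<[usize; D]> {
    let total: usize = shape.iter().map(|&(_, len)| len).product();
    let mut indexes = Vec::with_capacity(total);
    let mut current = [0usize; D];
    for _ in 0..total {
        indexes.push(current);
        // Increment like an odometer, rightmost dimension first.
        for d in (0..D).rev() {
            current[d] += 1;
            if current[d] < shape[d].1 {
                break;
            }
            current[d] = 0;
        }
    }
    indexes
}
```

For [("r", 2), ("c", 2)] this yields [0,0], [0,1], [1,0] and [1,1], the same order listed above.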

§Safety

In order to support returning references without bounds checking in a useful way, the implementing type is required to uphold several invariants that cannot be checked by the compiler.

1 - Any valid index as described in Indexing will yield a safe reference when calling get_reference_unchecked (and get_reference_unchecked_mut on TensorMut).

2 - The view shape that defines which indexes are valid may not be changed by a shared reference to the TensorRef implementation. i.e. the tensor may not be resized while a mutable reference is held to it, except by that reference.

3 - All dimension names in the view_shape must be unique.

4 - All dimension lengths in the view_shape must be non-zero.

5 - data_layout must return values correctly as documented on DataLayout.

Essentially, interior mutability causes problems, since code looping through the range of valid indexes in a TensorRef needs to be able to rely on that range of valid indexes not changing. This is trivially the case by default since a Tensor does not have any form of interior mutability, and therefore an iterator holding a shared reference to a Tensor prevents that tensor being resized. However, a type wrongly implementing TensorRef could introduce interior mutability by putting the Tensor in an Arc<Mutex<>> which would allow another thread to resize a tensor while an iterator was looping through previously valid indexes on a different thread. This is the same contract as NoInteriorMutability used in the matrix APIs.

Note that it is okay to be able to resize any TensorRef implementation if that always requires an exclusive reference to the TensorRef/Tensor, since the exclusivity prevents the above scenario.
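To make the invariants concrete, here is a toy implementation against a local stand-in for the trait. The type, trait name, and dimension names ("row", "column") are hypothetical, and the stand-in omits data_layout; it only sketches why an owning type with no interior mutability can uphold the contract.

```rust
type Dimension = &'static str;

// Local stand-in mirroring part of TensorRef's shape and access methods.
unsafe trait TensorRefLike<T, const D: usize> {
    fn get_reference(&self, indexes: [usize; D]) -> Option<&T>;
    fn view_shape(&self) -> [(Dimension, usize); D];
    unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T;
}

/// A row major 2-D tensor with fixed dimension names "row" and "column".
struct Matrix2D<T> {
    data: Vec<T>,   // length is always rows * columns
    rows: usize,    // non-zero, never changed after construction
    columns: usize, // non-zero, never changed after construction
}

// SAFETY: the dimension names are unique, the lengths are non-zero, the shape
// cannot change through a shared reference (no interior mutability), and data
// always holds rows * columns elements, so every valid index is in bounds.
unsafe impl<T> TensorRefLike<T, 2> for Matrix2D<T> {
    fn get_reference(&self, [r, c]: [usize; 2]) -> Option<&T> {
        if r < self.rows && c < self.columns {
            self.data.get(r * self.columns + c)
        } else {
            None
        }
    }

    fn view_shape(&self) -> [(Dimension, usize); 2] {
        [("row", self.rows), ("column", self.columns)]
    }

    unsafe fn get_reference_unchecked(&self, [r, c]: [usize; 2]) -> &T {
        // Caller promises r < rows and c < columns.
        unsafe { self.data.get_unchecked(r * self.columns + c) }
    }
}
```

Because Matrix2D owns its Vec directly, a shared reference to it freezes the shape, which is exactly what iterators relying on invariant 2 need.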

§Required Methods


fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

Gets a reference to the value at the index if the index is in range. Otherwise returns None.


fn view_shape(&self) -> [(Dimension, usize); D]

The shape this tensor has. See dimensions for an overview. The product of the lengths in the pairs defines how many elements are in the tensor (or the portion of it that is visible).


unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

Gets a reference to the value at the index without doing any bounds checking. For a safe alternative see get_reference.

§Safety

Calling this method with an out-of-bounds index is undefined behavior even if the resulting reference is not used. Valid indexes are defined as in TensorRef.
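The bounds check a safe caller must perform before using the unchecked accessor can be written down directly: an index is valid exactly when every component is less than the corresponding dimension length. This helper is illustrative, not part of the API.

```rust
type Dimension = &'static str;

/// True exactly when `indexes` is a valid index into `shape`, i.e. each
/// component is within 0..length of the dimension at the same position.
fn index_is_valid<const D: usize>(shape: [(Dimension, usize); D], indexes: [usize; D]) -> bool {
    indexes.iter().zip(shape.iter()).all(|(&i, &(_, len))| i < len)
}
```

Only when this check (or an equivalent guarantee) holds may get_reference_unchecked be called.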


fn data_layout(&self) -> DataLayout<D>

The way the data in this tensor is laid out in memory. In particular, Linear has several requirements on what is returned that must be upheld by implementations of this trait.

For a Tensor this would return DataLayout::Linear in the same order as the view_shape, since the data is in a single line and the view_shape is ordered from most significant dimension to least. Many views, however, create shapes that do not correspond to linear memory, either by combining non array data with tensors, or by hiding dimensions. Similarly, a row major Tensor might be transposed to a column major TensorView, so the view shape could be reversed compared to the order of significance of the dimensions in memory.

In general, an array of dimension names matching the view_shape order is in big endian order (most significant first) and will be iterated through efficiently by TensorIterator, and an array of dimension names in reverse of the view_shape order is in little endian order (least significant first).

The implementation of this trait must ensure that if it returns DataLayout::Linear the set of dimension names are returned in the order of most significant to least and match the set of the dimension names returned by view_shape.
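The connection between most-significant-first ordering and efficient iteration can be sketched with row major strides: each dimension's stride is the product of the lengths of all less significant dimensions, so incrementing the least significant index advances the memory offset by exactly one. These helpers are illustrative, not part of the API.

```rust
type Dimension = &'static str;

/// Row major strides for a shape ordered most significant to least.
fn strides<const D: usize>(shape: [(Dimension, usize); D]) -> [usize; D] {
    let mut strides = [1usize; D];
    // Work right to left: each stride is the next stride times the next length.
    for d in (0..D.saturating_sub(1)).rev() {
        strides[d] = strides[d + 1] * shape[d + 1].1;
    }
    strides
}

/// Maps an index to its offset in linear memory.
fn linear_offset<const D: usize>(shape: [(Dimension, usize); D], indexes: [usize; D]) -> usize {
    let strides = strides(shape);
    indexes.iter().zip(strides.iter()).map(|(&i, &s)| i * s).sum()
}
```

For a shape of [("x", 2), ("y", 3), ("z", 4)] the strides are [12, 4, 1]: stepping the last index walks memory sequentially, which is why an iterator visiting indexes in view_shape order touches linear data in order.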

§Trait Implementations

impl<T, const D: usize> TensorRef<T, D> for Box<dyn TensorRef<T, D>>

A box of a dynamic TensorRef also implements TensorRef.

fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

Gets a reference to the value at the index if the index is in range. Otherwise returns None.

fn view_shape(&self) -> [(Dimension, usize); D]

The shape this tensor has. See dimensions for an overview. The product of the lengths in the pairs defines how many elements are in the tensor (or the portion of it that is visible).

unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

Gets a reference to the value at the index without doing any bounds checking. For a safe alternative see get_reference.

fn data_layout(&self) -> DataLayout<D>

The way the data in this tensor is laid out in memory. In particular, Linear has several requirements on what is returned that must be upheld by implementations of this trait.

§Implementations on Foreign Types


impl<'source, T, S, const D: usize> TensorRef<T, D> for &'source S
where S: TensorRef<T, D>,

If some type implements TensorRef, then a reference to it implements TensorRef as well.

fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

fn view_shape(&self) -> [(Dimension, usize); D]

unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

fn data_layout(&self) -> DataLayout<D>
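The delegation pattern behind this impl can be sketched with a local stand-in trait: a shared reference forwards every method to the type it points at, so &S satisfies the same read-only contract as S. The trait and types here are hypothetical simplifications, not the real easy-ml items.

```rust
type Dimension = &'static str;

// Minimal stand-in exposing just the shape method of the contract.
trait Shaped<const D: usize> {
    fn view_shape(&self) -> [(Dimension, usize); D];
}

// If S implements the trait, so does &S, by forwarding to the pointee.
impl<'a, S, const D: usize> Shaped<D> for &'a S
where
    S: Shaped<D>,
{
    fn view_shape(&self) -> [(Dimension, usize); D] {
        // *self is &S, which is exactly the receiver S's method expects.
        S::view_shape(*self)
    }
}

// A toy source type to demonstrate the forwarding.
struct Scalarish;

impl Shaped<1> for Scalarish {
    fn view_shape(&self) -> [(Dimension, usize); 1] {
        [("x", 1)]
    }
}
```

This is why view-building APIs can accept either an owned source or a borrow of one: both satisfy the same bound.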


impl<'source, T, S, const D: usize> TensorRef<T, D> for &'source mut S
where S: TensorRef<T, D>,

If some type implements TensorRef, then an exclusive reference to it implements TensorRef as well.

fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

fn view_shape(&self) -> [(Dimension, usize); D]

unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

fn data_layout(&self) -> DataLayout<D>


impl<T, S, const D: usize> TensorRef<T, D> for Box<S>
where S: TensorRef<T, D>,

A box of a TensorRef also implements TensorRef.

fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

fn view_shape(&self) -> [(Dimension, usize); D]

unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

fn data_layout(&self) -> DataLayout<D>


impl<T, const D: usize> TensorRef<T, D> for Box<dyn TensorMut<T, D>>

A box of a dynamic TensorMut also implements TensorRef.

fn get_reference(&self, indexes: [usize; D]) -> Option<&T>

fn view_shape(&self) -> [(Dimension, usize); D]

unsafe fn get_reference_unchecked(&self, indexes: [usize; D]) -> &T

fn data_layout(&self) -> DataLayout<D>


§Implementors


impl<'a, T, S, const D: usize> TensorRef<(T, usize), D> for RecordTensor<'a, T, S, D>
where T: Primitive, S: TensorRef<(T, Index), D>,

RecordTensor implements TensorRef when the source does, returning references to the tuples of T and Index.

impl<T, S1, S2> TensorRef<T, 1> for TensorStack<T, (S1, S2), 0>
where S1: TensorRef<T, 0>, S2: TensorRef<T, 0>,

impl<T, S1, S2> TensorRef<T, 2> for TensorStack<T, (S1, S2), 1>
where S1: TensorRef<T, 1>, S2: TensorRef<T, 1>,

impl<T, S1, S2> TensorRef<T, 3> for TensorStack<T, (S1, S2), 2>
where S1: TensorRef<T, 2>, S2: TensorRef<T, 2>,

impl<T, S1, S2> TensorRef<T, 4> for TensorStack<T, (S1, S2), 3>
where S1: TensorRef<T, 3>, S2: TensorRef<T, 3>,

impl<T, S1, S2> TensorRef<T, 5> for TensorStack<T, (S1, S2), 4>
where S1: TensorRef<T, 4>, S2: TensorRef<T, 4>,

impl<T, S1, S2> TensorRef<T, 6> for TensorStack<T, (S1, S2), 5>
where S1: TensorRef<T, 5>, S2: TensorRef<T, 5>,

impl<T, S1, S2, S3> TensorRef<T, 1> for TensorStack<T, (S1, S2, S3), 0>
where S1: TensorRef<T, 0>, S2: TensorRef<T, 0>, S3: TensorRef<T, 0>,

impl<T, S1, S2, S3> TensorRef<T, 2> for TensorStack<T, (S1, S2, S3), 1>
where S1: TensorRef<T, 1>, S2: TensorRef<T, 1>, S3: TensorRef<T, 1>,

impl<T, S1, S2, S3> TensorRef<T, 3> for TensorStack<T, (S1, S2, S3), 2>
where S1: TensorRef<T, 2>, S2: TensorRef<T, 2>, S3: TensorRef<T, 2>,

impl<T, S1, S2, S3> TensorRef<T, 4> for TensorStack<T, (S1, S2, S3), 3>
where S1: TensorRef<T, 3>, S2: TensorRef<T, 3>, S3: TensorRef<T, 3>,

impl<T, S1, S2, S3> TensorRef<T, 5> for TensorStack<T, (S1, S2, S3), 4>
where S1: TensorRef<T, 4>, S2: TensorRef<T, 4>, S3: TensorRef<T, 4>,

impl<T, S1, S2, S3> TensorRef<T, 6> for TensorStack<T, (S1, S2, S3), 5>
where S1: TensorRef<T, 5>, S2: TensorRef<T, 5>, S3: TensorRef<T, 5>,

impl<T, S1, S2, S3, S4> TensorRef<T, 1> for TensorStack<T, (S1, S2, S3, S4), 0>
where S1: TensorRef<T, 0>, S2: TensorRef<T, 0>, S3: TensorRef<T, 0>, S4: TensorRef<T, 0>,

impl<T, S1, S2, S3, S4> TensorRef<T, 2> for TensorStack<T, (S1, S2, S3, S4), 1>
where S1: TensorRef<T, 1>, S2: TensorRef<T, 1>, S3: TensorRef<T, 1>, S4: TensorRef<T, 1>,

impl<T, S1, S2, S3, S4> TensorRef<T, 3> for TensorStack<T, (S1, S2, S3, S4), 2>
where S1: TensorRef<T, 2>, S2: TensorRef<T, 2>, S3: TensorRef<T, 2>, S4: TensorRef<T, 2>,

impl<T, S1, S2, S3, S4> TensorRef<T, 4> for TensorStack<T, (S1, S2, S3, S4), 3>
where S1: TensorRef<T, 3>, S2: TensorRef<T, 3>, S3: TensorRef<T, 3>, S4: TensorRef<T, 3>,

impl<T, S1, S2, S3, S4> TensorRef<T, 5> for TensorStack<T, (S1, S2, S3, S4), 4>
where S1: TensorRef<T, 4>, S2: TensorRef<T, 4>, S3: TensorRef<T, 4>, S4: TensorRef<T, 4>,

impl<T, S1, S2, S3, S4> TensorRef<T, 6> for TensorStack<T, (S1, S2, S3, S4), 5>
where S1: TensorRef<T, 5>, S2: TensorRef<T, 5>, S3: TensorRef<T, 5>, S4: TensorRef<T, 5>,

impl<T, S1, S2, S3, S4, const D: usize> TensorRef<T, D> for TensorChain<T, (S1, S2, S3, S4), D>
where S1: TensorRef<T, D>, S2: TensorRef<T, D>, S3: TensorRef<T, D>, S4: TensorRef<T, D>,

impl<T, S1, S2, S3, const D: usize> TensorRef<T, D> for TensorChain<T, (S1, S2, S3), D>
where S1: TensorRef<T, D>, S2: TensorRef<T, D>, S3: TensorRef<T, D>,

impl<T, S1, S2, const D: usize> TensorRef<T, D> for TensorChain<T, (S1, S2), D>
where S1: TensorRef<T, D>, S2: TensorRef<T, D>,

impl<T, S> TensorRef<T, 1> for TensorExpansion<T, S, 0, 1>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 2> for TensorExpansion<T, S, 0, 2>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 3> for TensorExpansion<T, S, 0, 3>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 4> for TensorExpansion<T, S, 0, 4>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 5> for TensorExpansion<T, S, 0, 5>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 0, 6>
where S: TensorRef<T, 0>,

impl<T, S> TensorRef<T, 2> for TensorExpansion<T, S, 1, 1>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 3> for TensorExpansion<T, S, 1, 2>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 4> for TensorExpansion<T, S, 1, 3>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 5> for TensorExpansion<T, S, 1, 4>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 1, 5>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 3> for TensorExpansion<T, S, 2, 1>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 4> for TensorExpansion<T, S, 2, 2>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 5> for TensorExpansion<T, S, 2, 3>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 2, 4>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 4> for TensorExpansion<T, S, 3, 1>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 5> for TensorExpansion<T, S, 3, 2>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 3, 3>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 5> for TensorExpansion<T, S, 4, 1>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 4, 2>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 6> for TensorExpansion<T, S, 5, 1>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 1, 1>
where S: TensorRef<T, 1>,

impl<T, S> TensorRef<T, 1> for TensorIndex<T, S, 2, 1>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 2, 2>
where S: TensorRef<T, 2>,

impl<T, S> TensorRef<T, 2> for TensorIndex<T, S, 3, 1>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 1> for TensorIndex<T, S, 3, 2>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 3, 3>
where S: TensorRef<T, 3>,

impl<T, S> TensorRef<T, 3> for TensorIndex<T, S, 4, 1>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 2> for TensorIndex<T, S, 4, 2>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 1> for TensorIndex<T, S, 4, 3>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 4, 4>
where S: TensorRef<T, 4>,

impl<T, S> TensorRef<T, 4> for TensorIndex<T, S, 5, 1>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 3> for TensorIndex<T, S, 5, 2>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 2> for TensorIndex<T, S, 5, 3>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 1> for TensorIndex<T, S, 5, 4>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 5, 5>
where S: TensorRef<T, 5>,

impl<T, S> TensorRef<T, 5> for TensorIndex<T, S, 6, 1>
where S: TensorRef<T, 6>,

impl<T, S> TensorRef<T, 4> for TensorIndex<T, S, 6, 2>
where S: TensorRef<T, 6>,

impl<T, S> TensorRef<T, 3> for TensorIndex<T, S, 6, 3>
where S: TensorRef<T, 6>,

impl<T, S> TensorRef<T, 2> for TensorIndex<T, S, 6, 4>
where S: TensorRef<T, 6>,

impl<T, S> TensorRef<T, 1> for TensorIndex<T, S, 6, 5>
where S: TensorRef<T, 6>,

impl<T, S> TensorRef<T, 0> for TensorIndex<T, S, 6, 6>
where S: TensorRef<T, 6>,

impl<T, S, N> TensorRef<T, 2> for TensorRefMatrix<T, S, N>


impl<T, S, const D: usize> TensorRef<T, D> for TensorAccess<T, S, D>
where S: TensorRef<T, D>,

A TensorAccess implements TensorRef, with the dimension order and indexing matching that of the TensorAccess shape.


impl<T, S, const D: usize> TensorRef<T, D> for TensorTranspose<T, S, D>
where S: TensorRef<T, D>,

A TensorTranspose implements TensorRef, with the dimension order and indexing matching that of the TensorTranspose shape.


impl<T, S, const D: usize> TensorRef<T, D> for TensorMask<T, S, D>
where S: TensorRef<T, D>,

A TensorMask implements TensorRef, with the dimension lengths reduced by the mask that the TensorMask was created with.


impl<T, S, const D: usize> TensorRef<T, D> for TensorRange<T, S, D>
where S: TensorRef<T, D>,

A TensorRange implements TensorRef, with the dimension lengths reduced to the range that the TensorRange was created with.


impl<T, S, const D: usize> TensorRef<T, D> for TensorRename<T, S, D>
where S: TensorRef<T, D>,

A TensorRename implements TensorRef, with the dimension names the TensorRename was created with overriding the dimension names in the original source.


impl<T, S, const D: usize, const N: usize> TensorRef<T, D> for TensorChain<T, [S; N], D>
where S: TensorRef<T, D>,

impl<T, S, const N: usize> TensorRef<T, 1> for TensorStack<T, [S; N], 0>
where S: TensorRef<T, 0>,

impl<T, S, const N: usize> TensorRef<T, 2> for TensorStack<T, [S; N], 1>
where S: TensorRef<T, 1>,

impl<T, S, const N: usize> TensorRef<T, 3> for TensorStack<T, [S; N], 2>
where S: TensorRef<T, 2>,

impl<T, S, const N: usize> TensorRef<T, 4> for TensorStack<T, [S; N], 3>
where S: TensorRef<T, 3>,

impl<T, S, const N: usize> TensorRef<T, 5> for TensorStack<T, [S; N], 4>
where S: TensorRef<T, 4>,

impl<T, S, const N: usize> TensorRef<T, 6> for TensorStack<T, [S; N], 5>
where S: TensorRef<T, 5>,


impl<T, const D: usize> TensorRef<T, D> for Tensor<T, D>

A Tensor implements TensorRef.
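Because Tensor and every view type above implement the trait, generic code can be written once against the bound. The sketch below uses a local stand-in trait and a hypothetical FakeTensor; with easy-ml the bound would be S: TensorRef<T, D> and the source a real Tensor or view.

```rust
type Dimension = &'static str;

// Minimal stand-in exposing the shape method of the contract.
trait Shaped<const D: usize> {
    fn view_shape(&self) -> [(Dimension, usize); D];
}

/// Works for any shaped source: a tensor, a view, a reference, or a box,
/// since the element count is the product of the dimension lengths.
fn element_count<const D: usize>(source: &impl Shaped<D>) -> usize {
    source.view_shape().iter().map(|&(_, len)| len).product()
}

// A toy 2-D source to demonstrate the generic function.
struct FakeTensor;

impl Shaped<2> for FakeTensor {
    fn view_shape(&self) -> [(Dimension, usize); 2] {
        [("rows", 2), ("columns", 3)]
    }
}
```

Writing algorithms against the trait rather than a concrete tensor type is what lets the view types in the list above (masks, ranges, renames, transposes, stacks, chains) all plug into the same code.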