Function reshape

pub fn reshape<'a, I, R, T, B, D>(
    tensor: &'a TensorBase<Storage<R, T, B>, D>,
    shape: I,
) -> TensorBase<Storage<DataCow<'a, <B as DeviceRawAPI<T>>::Raw>, T, B>, Vec<usize>>

Reshapes the given tensor to the specified shape.

§Row/Column Major Notice

This function behaves differently depending on the device’s default order (RowMajor or ColMajor). See the elaborated examples below.

§Parameters

  • tensor: &TensorAny<R, T, B, D>

    • The input tensor to be reshaped.
  • shape: TryInto AxesIndex<isize>

    • The desired shape of the output tensor.
    • Can be a single integer, or a list/tuple of integers.
    • At most one dimension may be -1, meaning that its length is inferred from the total number of elements and the remaining dimensions.

§Returns

  • TensorCow<'a, T, B, IxD>

    • The reshaped tensor.

    • This function will try to avoid data cloning if possible.

      • If layout-compatible, a view will be returned.
      • If layout-not-compatible, an owned tensor will be returned, cloning the data.
      • Clone-on-Write (Cow) semantics are used to represent either a view or an owned tensor.
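
A minimal sketch of this behavior, using the is_owned check that also appears in the elaborated examples below:

use rstsr::prelude::*;
let device = DeviceCpu::default();
let a = rt::arange((6, &device));
// layout-compatible: the result borrows `a`'s data (DataCow::Ref internally)
assert!(!a.reshape([2, 3]).is_owned());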

§Examples

In row-major order, to reshape a vector of shape (6,) to a matrix of shape (2, 3):

use rstsr::prelude::*;
let mut device = DeviceCpu::default();
device.set_default_order(RowMajor);

let a = rt::arange((6, &device));
let a_reshaped = a.reshape([2, 3]);
let a_expected = rt::tensor_from_nested!(
    [[0, 1, 2], [3, 4, 5]],
    &device);
assert!(rt::allclose(&a_reshaped, &a_expected, None));

You can also use a negative dimension, where -1 means “infer this dimension”:

// in this case, unspecified axes length is inferred as 6 / 3 = 2
let a_reshaped = a.reshape([3, -1]);
let a_expected = rt::tensor_from_nested!(
    [[0, 1], [2, 3], [4, 5]],
    &device);
assert!(rt::allclose(&a_reshaped, &a_expected, None));

§Ownership Semantics between reshape, into_shape and change_shape

into_shape and change_shape take ownership of the input tensor; they are important variants of reshape. The table below summarizes the ownership semantics.

| Function | Input Ownership | Output Ownership | Cloning Condition |
| --- | --- | --- | --- |
| reshape | Borrowed (&TensorAny) | View (TensorCow with DataCow::Ref) | not cloned (layout-compatible) |
| reshape | Borrowed (&TensorAny) | Owned (TensorCow with DataCow::Owned) | cloned (layout-not-compatible) |
| into_shape | Owned (Tensor) | Owned (Tensor) | not cloned (layout-compatible, input tensor owns data, input tensor is compact) |
| into_shape | Owned (Tensor) | Owned (Tensor) | cloned (otherwise) |
| into_shape | Otherwise (TensorAny) | Owned (Tensor) | cloned (always) |
| change_shape | Owned (Tensor) | Owned (TensorCow with DataCow::Owned) | not cloned (layout-compatible, input tensor owns data, input tensor is compact) |
| change_shape | Owned (Tensor) | Owned (TensorCow with DataCow::Owned) | cloned (otherwise) |
| change_shape | Otherwise (TensorAny) | View (TensorCow with DataCow::Ref) | not cloned (layout-compatible) |
| change_shape | Otherwise (TensorAny) | Owned (TensorCow with DataCow::Owned) | cloned (layout-not-compatible) |
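
The following sketch contrasts the three variants, based on the table above (method names are those documented in this crate):

use rstsr::prelude::*;
let device = DeviceCpu::default();

// `reshape` borrows: the input remains usable afterwards
let a = rt::arange((6, &device));
let b = a.reshape([2, 3]); // TensorCow borrowing `a`
println!("{:?}", a); // `a` is still alive here

// `into_shape` consumes its input and always returns an owned Tensor
let c = rt::arange((6, &device)).into_shape([2, 3]);

// `change_shape` consumes its input and returns a TensorCow;
// here DataCow::Owned, since the input owned its data
let d = rt::arange((6, &device)).change_shape([2, 3]);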

§Tips on common compilation errors

You may encounter an ownership problem when you try to bind a reshaped tensor like this:

let a = rt::arange((6, &device)).reshape([2, 3]);

The compiler may give an error like:

704 |    let a = rt::arange((6, &device)).reshape([2, 3]);
    |            ^^^^^^^^^^^^^^^^^^^^^^^^                - temporary value is freed at the end of this statement
    |            |
    |            creates a temporary value which is freed while still in use
705 |    println!("a: {:?}", a);
    |                        - borrow later used here
    |
help: consider using a `let` binding to create a longer lived value
    |
704 ~    let binding = rt::arange((6, &device));
705 ~    let a = binding.reshape([2, 3]);
    |

The compiler’s suggestion is correct. However, there is a simpler fix: use the into_shape variant, which takes ownership:

let a = rt::arange((6, &device)).into_shape([2, 3]);

§Notes of accordance

§To Python Array API Standard

This function corresponds to Python Array API Standard: reshape.

However, please note that this function does not implement the optional keyword copy from the standard. The copy keyword in the standard specifies whether to return a copy of the array data when the requested shape is not compatible with the original layout.

This function implements the copy = None behavior of the standard: it returns a view if possible, and an owned tensor (cloning the data) if necessary.

To achieve similar functionality to the optional keyword copy (see also the sketch after this list),

  • For the copy = True case, you are recommended to

    • use into_shape, which always returns an owned tensor, cloning the data if necessary. Note that the necessity of cloning depends on the layout, so RSTSR may still not explicitly perform cloning.
    • use to_contig, which always returns a contiguous owned tensor, cloning the data if necessary. Note that this function may still not explicitly perform cloning if the tensor is already contiguous.
    • use the associated method to_owned, which always performs cloning.
  • For the copy = False case, you are recommended to

    • use the utility function layout_reshapeable to check whether the layout is compatible with the new shape.
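
A sketch of the copy = True emulation with the methods named above (this assumes to_owned converts the reshaped TensorCow into an owned tensor by cloning):

use rstsr::prelude::*;
let device = DeviceCpu::default();
let a = rt::arange((6, &device));

// `copy = True` analogue: always end up with an owned result
let b = a.reshape([2, 3]).to_owned(); // always clones
let c = rt::arange((6, &device)).into_shape([2, 3]); // owned; clones only if the layout requires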

§To NumPy

This function corresponds to NumPy: reshape.

However, please note that this function does not implement the optional keyword order from the NumPy version. The order keyword in NumPy specifies the iteration order used to read elements from the tensor to be reshaped.

This function uses the device’s default order to determine the layout of the reshaped tensor. You can check the device’s current default order with device.default_order. Also see the elaborated examples below.

To change the device’s default order, use device.set_default_order, as shown in the examples below.

§Elaborated examples

§Difference between RowMajor and ColMajor

A tensor can be uniquely iterated (flattened into a 1-dimensional vector) in either row-major or column-major order.

By definition, the reshape operation does not change this iterated sequence. In other words, the following code always holds true:

// note iteration order of associated method `iter` depends on `device.default_order()`

// let b = a.reshape(... SOME SHAPE ...);
let a_vec = a.iter().collect::<Vec<_>>();
let b_vec = b.iter().collect::<Vec<_>>();
assert_eq!(a_vec, b_vec); // iterated sequence is the same

For example, in row-major order, reshape a matrix of (2, 3) to (3, 2):

// set to row-major order
device.set_default_order(RowMajor);
// a: [[0, 1, 2], [3, 4, 5]]
// b: [[0, 1], [2, 3], [4, 5]]
// iterated sequence: [0, 1, 2, 3, 4, 5]
let a = rt::tensor_from_nested!([[0, 1, 2], [3, 4, 5]], &device);
let b = a.reshape([3, 2]);
let b_expected = rt::tensor_from_nested!([[0, 1], [2, 3], [4, 5]], &device);
assert!(rt::allclose(&b, &b_expected, None));
let a_vec = a.iter().cloned().collect::<Vec<_>>();
let b_vec = b.iter().cloned().collect::<Vec<_>>();
assert_eq!(a_vec, b_vec); // iterated sequence is the same
assert_eq!(a_vec, vec![0, 1, 2, 3, 4, 5]);

In column-major order, reshaping the same matrix of (2, 3) to (3, 2) yields a different result:

// set to column-major order
device.set_default_order(ColMajor);
// a: [[0, 1, 2], [3, 4, 5]]
// b: [[0, 4], [3, 2], [1, 5]]
// iterated sequence: [0, 3, 1, 4, 2, 5]
let a = rt::tensor_from_nested!([[0, 1, 2], [3, 4, 5]], &device);
let b = a.reshape([3, 2]);
let b_expected = rt::tensor_from_nested!([[0, 4], [3, 2], [1, 5]], &device);
assert!(rt::allclose(&b, &b_expected, None));
let a_vec = a.iter().cloned().collect::<Vec<_>>();
let b_vec = b.iter().cloned().collect::<Vec<_>>();
assert_eq!(a_vec, b_vec); // iterated sequence is the same
assert_eq!(a_vec, vec![0, 3, 1, 4, 2, 5]);

§Occasions of data cloning

The following discussion assumes the tensor’s device is in row-major order; a similar discussion applies to column-major order.

If the tensor to be reshaped is C-contiguous on a row-major device (or F-contiguous on a column-major device), the reshape operation can always be performed without any data cloning.
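
For example, a freshly generated tensor on a row-major device is C-contiguous, so reshaping it returns a view:

use rstsr::prelude::*;
let mut device = DeviceCpu::default();
device.set_default_order(RowMajor);

let t = rt::arange((24, &device)).into_shape([4, 6]);
assert!(t.c_contig()); // C-contiguous on a row-major device
assert!(!t.reshape([2, 12]).is_owned()); // no cloning: a view is returned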

Otherwise, whether data cloning is necessary depends on the layout. For example, consider a tensor of shape (4, 6, 9) with non-contiguous strides:

// shape: (4, 6, 9), stride: (72, 9, 1), not c-contiguous
// contiguous situation: (4, [6, 9]), or say the last two dimensions are contiguous
let a = rt::arange((288, &device)).into_shape([4, 8, 9]).into_slice((.., 0..6, ..));
assert_eq!(a.shape(), &[4, 6, 9]);
assert_eq!(a.stride(), &[72, 9, 1]);
assert!(!a.c_contig());

The following cases do not require data cloning (a view is returned; DataCow::Ref internally):

// split a single dimension into multiple dimensions
assert!(!a.reshape([2, 2, 6, 9]).is_owned()); // (4, 6, 9) -> ([2, 2], 6, 9)
assert!(!a.reshape([4, 3, 2, 9]).is_owned()); // (4, 6, 9) -> (4, [3, 2], 9)
assert!(!a.reshape([4, 2, 3, 3, 3]).is_owned()); // (4, 6, 9) -> (4, [2, 3], [3, 3])

// merge contiguous dimensions into a single dimension
assert!(!a.reshape([4, 54]).is_owned()); // (4, 6, 9) -> (4, 6 * 9)

// merge contiguous dimensions and then split
assert!(!a.reshape([4, 3, 6, 3]).is_owned()); // (4, [6, 9]) -> (4, [3, 6, 3])

However, the following cases require data cloning (an owned tensor is returned; DataCow::Owned internally). Merging dimension 0 with dimension 1 cannot be expressed by strides alone, since dimension 0 has stride 72 while dimensions 1 and 2 together span only 6 × 9 = 54 elements:

assert!(a.reshape([24, 9]).is_owned()); // (4, 6, 9) -> (4 * 6, 9)
assert!(a.reshape(-1).is_owned()); // (4, 6, 9) -> (4 * 6 * 9)
assert!(a.reshape([12, 2, 9]).is_owned()); // (4, 6, 9) -> (4 * [3, 2], 9)

§See also

§Similar function from other crates/libraries

  • Python Array API Standard: reshape
  • NumPy: numpy.reshape

§Variants of this function

  • into_shape
  • change_shape