pub fn into_shape<'a, I, R, T, B, D>(
    tensor: TensorBase<Storage<R, T, B>, D>,
    shape: I,
) -> TensorBase<Storage<DataOwned<<B as DeviceRawAPI<T>>::Raw>, T, B>, Vec<usize>>
where
    I: TryInto<AxesIndex<isize>, Error = Error>,
    R: DataAPI<Data = <B as DeviceRawAPI<T>>::Raw> + DataIntoCowAPI<'a>,
    D: DimAPI,
    T: Clone,
    B: DeviceAPI<T>
        + DeviceRawAPI<MaybeUninit<T>>
        + DeviceCreationAnyAPI<T>
        + OpAssignArbitaryAPI<T, Vec<usize>, D>
        + OpAssignAPI<T, Vec<usize>>,
    <B as DeviceRawAPI<T>>::Raw: Clone + 'a,
Reshapes the given tensor to the specified shape.
§Row/Column Major Notice
This function behaves differently depending on the device's default order (RowMajor or ColMajor).
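As a plain-Rust illustration (not rstsr API) of why the default order matters: the same six-element buffer, read as shape [2, 3], yields different layouts under the two conventions.
let buf = [0, 1, 2, 3, 4, 5];
let (rows, cols) = (2usize, 3usize);
// row-major: offset = i * cols + j
let rm: Vec<Vec<i32>> = (0..rows)
    .map(|i| (0..cols).map(|j| buf[i * cols + j]).collect())
    .collect();
// column-major: offset = i + j * rows
let cm: Vec<Vec<i32>> = (0..rows)
    .map(|i| (0..cols).map(|j| buf[i + j * rows]).collect())
    .collect();
assert_eq!(rm, vec![vec![0, 1, 2], vec![3, 4, 5]]); // rows are consecutive in the buffer
assert_eq!(cm, vec![vec![0, 2, 4], vec![1, 3, 5]]); // columns are consecutive in the buffer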
§Parameters
- tensor: TensorAny<R, T, B, D> - The input tensor to be reshaped.
  - Ownership of the input tensor is taken.
- shape: TryInto<AxesIndex<isize>> - The target shape of the output tensor.
  - Can be a single integer, or a list/tuple of integers (both forms are sketched after this list).
  - A single entry may be negative (conventionally -1), in which case that dimension is inferred from the tensor's total size, following the usual reshape convention.
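A brief sketch of the two accepted shape forms (hedged: the single-integer form is inferred from the parameter description above; variable names are illustrative):
use rstsr::prelude::*;
let a = rt::arange(6).into_shape([2, 3]); // shape given as a list of integers
let b = rt::arange(6).into_shape(6); // shape given as a single integer (1-D result)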
§Returns
The reshaped tensor, always as an owned tensor.
This function will try to avoid data cloning if possible, but only under strict conditions:
- the layout is compatible after reshaping;
- the input tensor owns the underlying data (i.e., it is not a view);
- the input tensor is compact in memory (i.e., the underlying data contains no redundant elements; the size of the tensor exactly matches the length of the underlying data).
This function differs from change_shape and reshape in that it takes ownership of the input tensor and always returns an owned tensor (see the sketch below).
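A minimal sketch of the ownership contract (variable names are illustrative):
use rstsr::prelude::*;
let a = rt::arange(6);
let b = a.into_shape([2, 3]); // `a` is moved here; `b` is always an owned tensor
// `a` cannot be used past this point: into_shape consumed it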
§Examples
use rstsr::prelude::*;
let a = rt::arange(6).into_shape([2, 3]);
§Elaborated examples
Here are some showcases that demonstrate when data cloning happens and when it does not. All examples are row-major, and device refers to an already-constructed device handle.
The first case is a tensor that is not fully contiguous (it contains negative strides), but is compact (the size of the tensor equals the length of the underlying data). In this case, if the new shape is compatible, no data cloning happens:
// shape: (4, 6, 9), stride: (-54, 9, 1), not c-contiguous
// contiguous situation: (4, [6, 9]); the first dimension is reversed
let a = rt::arange((216, &device)).into_shape([4, 6, 9]).into_flip(0);
let a_ptr = a.raw().as_ptr();
let b = a.into_shape([4, 54]);
let b_ptr = b.raw().as_ptr();
assert_eq!(a_ptr, b_ptr); // contiguous dims merged, no data clone happened
However, if the new shape is not compatible, data cloning will happen:
// shape: (4, 6, 9), stride: (-54, 9, 1), not c-contiguous
// contiguous situation: (4, [6, 9]); the first dimension is reversed
let a = rt::arange((216, &device)).into_shape([4, 6, 9]).into_flip(0);
let a_ptr = a.raw().as_ptr();
let b = a.into_shape([24, 9]);
let b_ptr = b.raw().as_ptr();
assert_ne!(a_ptr, b_ptr); // layout not compatible, data clone happened
Another case is a tensor that is not compact (the size of the tensor is less than the length of the underlying data). In this case, even if the new shape is compatible, data cloning will happen:
// shape: (4, 6, 9), stride: (72, 9, 1), not c-contiguous
// contiguous situation: (4, [6, 9]); i.e., the last two dimensions are contiguous
let a = rt::arange((288, &device)).into_shape([4, 8, 9]).into_slice((.., 0..6, ..));
let a_ptr = a.raw().as_ptr();
let b = a.into_shape([4, 54]);
let b_ptr = b.raw().as_ptr();
assert_ne!(a_ptr, b_ptr); // layout-compatible, but input tensor is not compact (216 < 288)
§See also
Refer to reshape for more details and examples.