Function broadcast_arrays
pub fn broadcast_arrays<R, T, B>(
    tensors: Vec<TensorAny<R, T, B, IxD>>,
) -> Vec<TensorAny<R, T, B, IxD>>
where
    R: DataAPI<Data = B::Raw>,
    B: DeviceAPI<T>,

Broadcasts any number of arrays against each other.

Row/Column Major Notice

This function behaves differently depending on the device's default order (RowMajor or ColMajor).
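As a rough, self-contained sketch of the shape rule (illustrative only, not RSTSR's actual implementation): RowMajor aligns shapes at their trailing axes, NumPy-style, while ColMajor aligns them at their leading axes.

```rust
/// Broadcast two shapes, aligning at the trailing axes when `row_major`
/// is true (NumPy-style) and at the leading axes otherwise.
/// Missing axes count as size 1; returns None for incompatible shapes.
fn broadcast_shape(a: &[usize], b: &[usize], row_major: bool) -> Option<Vec<usize>> {
    let n = a.len().max(b.len());
    let mut out = vec![0; n];
    for i in 0..n {
        let (da, db) = if row_major {
            // trailing-axis alignment: pad missing leading axes with 1
            (
                if i < n - a.len() { 1 } else { a[i - (n - a.len())] },
                if i < n - b.len() { 1 } else { b[i - (n - b.len())] },
            )
        } else {
            // leading-axis alignment: pad missing trailing axes with 1
            (
                if i < a.len() { a[i] } else { 1 },
                if i < b.len() { b[i] } else { 1 },
            )
        };
        out[i] = match (da, db) {
            (x, y) if x == y => x,
            (1, y) => y,
            (x, 1) => x,
            _ => return None, // incompatible axis sizes
        };
    }
    Some(out)
}

fn main() {
    // RowMajor: [3] and [2, 1] broadcast to [2, 3]
    assert_eq!(broadcast_shape(&[3], &[2, 1], true), Some(vec![2, 3]));
    // ColMajor: [3] and [2, 1] are incompatible (3 vs 2 at axis 0)...
    assert_eq!(broadcast_shape(&[3], &[2, 1], false), None);
    // ...but [1, 3] and [2, 1] broadcast to [2, 3]
    assert_eq!(broadcast_shape(&[1, 3], &[2, 1], false), Some(vec![2, 3]));
}
```

This is why the shapes that broadcast successfully differ between the two default orders in the examples below.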

§Parameters

  • tensors: Vec<TensorAny<R, T, B, IxD>>

    • The tensors to be broadcast against each other.
    • All tensors must be on the same device and share the same ownership type.
    • This function takes ownership of the input tensors. To obtain broadcasted views instead, first create a new vector of views and pass that.
    • Only dynamic-shape tensors (IxD) are accepted.
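The ownership point can be illustrated with plain Rust move semantics; the `Tensor` and `TensorView` types below are hypothetical stand-ins, not the RSTSR API:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Tensor(Vec<i64>); // stand-in for an owned tensor (hypothetical)

#[derive(Debug, PartialEq)]
struct TensorView<'a>(&'a [i64]); // stand-in for a borrowed view (hypothetical)

// Consumes its inputs, like `broadcast_arrays` does.
fn consume_all(ts: Vec<Tensor>) -> Vec<Tensor> {
    ts
}

fn main() {
    let a = Tensor(vec![1, 2, 3]);
    let b = Tensor(vec![4, 5]);

    // Passing owned tensors moves them into the function; without the
    // clones here, `a` and `b` would be unusable afterwards.
    let moved = consume_all(vec![a.clone(), b.clone()]);
    assert_eq!(moved.len(), 2);

    // To keep the originals, build a vector of cheap views first and
    // pass that instead; `a` and `b` stay accessible.
    let views = vec![TensorView(&a.0), TensorView(&b.0)];
    assert_eq!(views.len(), 2);
    assert_eq!(a.0[0], 1); // still usable
}
```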

§Returns

  • Vec<TensorAny<R, T, B, IxD>>

    • A vector of broadcasted tensors, all sharing the same shape after broadcasting.
    • Ownership of the underlying data moves from the input tensors to the output tensors.
    • The resulting tensors are typically not contiguous (broadcasted axes have zero strides). Writing values to broadcasted tensors is dangerous, although RSTSR will generally not panic on such writes. Call to_contig afterwards if owned contiguous tensors are required.
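The zero-stride behavior can be sketched in a self-contained way (illustrative only, not RSTSR's internal code). Under RowMajor trailing-axis alignment, an axis of size 1 that expands takes stride 0, so every position along that axis aliases the same element; that is why writing through a broadcasted tensor is dangerous:

```rust
/// Given an original shape/strides and a target broadcast shape
/// (trailing-axis alignment), compute the broadcast strides:
/// axes of size 1 that expand get stride 0, so elements are shared.
/// Returns None if the shapes are incompatible.
fn broadcast_strides(
    shape: &[usize],
    strides: &[isize],
    target: &[usize],
) -> Option<Vec<isize>> {
    let pad = target.len().checked_sub(shape.len())?;
    // new leading axes get stride 0 (every index aliases the same data)
    let mut out = vec![0isize; target.len()];
    for i in 0..shape.len() {
        let t = target[pad + i];
        if shape[i] == t {
            out[pad + i] = strides[i]; // axis unchanged: keep stride
        } else if shape[i] == 1 {
            out[pad + i] = 0; // broadcasted axis: stride 0, not contiguous
        } else {
            return None;
        }
    }
    Some(out)
}

fn main() {
    // shape [2, 1] with contiguous strides [1, 1], broadcast to [2, 3]:
    // the size-1 axis gets stride 0
    assert_eq!(broadcast_strides(&[2, 1], &[1, 1], &[2, 3]), Some(vec![1, 0]));
    // shape [3] with stride [1], broadcast to [2, 3]:
    // the new leading axis gets stride 0
    assert_eq!(broadcast_strides(&[3], &[1], &[2, 3]), Some(vec![0, 1]));
}
```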

§Examples

The following example demonstrates how to use broadcast_arrays to broadcast two tensors:

use rstsr::prelude::*;
let mut device = DeviceCpu::default();
device.set_default_order(RowMajor);

let a = rt::asarray((vec![1, 2, 3], &device)).into_shape([3]);
let b = rt::asarray((vec![4, 5], &device)).into_shape([2, 1]);

let result = rt::broadcast_arrays(vec![a, b]);
let expected_a = rt::tensor_from_nested!(
    [[1, 2, 3],
     [1, 2, 3]],
    &device);
let expected_b = rt::tensor_from_nested!(
    [[4, 4, 4],
     [5, 5, 5]],
    &device);
assert!(rt::allclose!(&result[0], &expected_a));
assert!(rt::allclose!(&result[1], &expected_b));

Note that the above code only works with the RowMajor default order.

With the ColMajor default order, broadcasting fails: the broadcasting rules are applied differently, so the shapes become incompatible. Make the following changes for the ColMajor case to work:

let mut device = DeviceCpu::default();
device.set_default_order(ColMajor);
// Note shape of `a` changed from [3] to [1, 3]
let a = rt::asarray((vec![1, 2, 3], &device)).into_shape([1, 3]);
let b = rt::asarray((vec![4, 5], &device)).into_shape([2, 1]);

§Panics

  • The shapes are incompatible for broadcasting.
  • The tensors are on different devices.

§See also

§Similar function from other crates/libraries

  • to_broadcast: Broadcasts a single array to a specified shape.

§Variants of this function