Struct slipstream::vector::Vector
#[repr(C)]
pub struct Vector<A, B, const S: usize>
where
    A: Align,
    B: Repr,
{ /* private fields */ }
A vector type.
Vector types are mostly well-aligned, fixed-size arrays. Unlike plain arrays, they have the usual numeric operators and several helpful methods implemented on them. They perform the operations „per lane“ independently, which allows the CPU to parallelize the computations.
The types have convenient aliases ‒ for example u32x4 is an alias for Vector<Align16, u32, 4> and corresponds to [u32; 4] (but aligned to 16 bytes).
While these can be used like arrays (indexing, copying between slices, etc.), it is better to perform operations on whole vectors at once.
The usual comparison operators (<=, etc.) don't exist; instead, there are „per lane“ comparison operators that return mask vectors ‒ vectors of boolean-like values. These can either be examined manually, or fed into other operations on vectors, like blend or gather_load_masked.
Examples
let a = i32x4::new([1, -2, 3, -4]);
let b = -a; // [-1, 2, -3, 4]
let positive = a.ge(i32x4::splat(1)); // Lane-wise a >= 1
// Will take from b where positive is true, from a otherwise
let abs = b.blend(a, positive);
assert_eq!(abs, i32x4::new([1, 2, 3, 4]));
Implementations
impl<A, B, const S: usize> Vector<A, B, S>
where
    A: Align,
    B: Repr,
pub unsafe fn new_unchecked(input: *const B) -> Self
Loads the vector without doing bounds checks.
Safety
The pointed-to memory must be valid for Self::LANES consecutive cells ‒ e.g. it must contain a full array of the base type.
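Examples
A sketch; the array here guarantees the required number of consecutive, initialized elements:
let data = [1u32, 2, 3, 4];
// SAFETY: `data` holds at least u32x4::LANES consecutive u32 values.
let v = unsafe { u32x4::new_unchecked(data.as_ptr()) };
assert_eq!(v, u32x4::new([1, 2, 3, 4]));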
pub fn new<I>(input: I) -> Self
where
    I: AsRef<[B]>,
Loads the vector from a correctly sized slice.
This loads the vector from a correctly sized slice or anything that can be converted to one ‒ specifically, fixed-size arrays and other vectors work.
Example
let vec = (0..10).collect::<Vec<_>>();
let v1 = u32x4::new(&vec[0..4]);
let v2 = u32x4::new(v1);
let v3 = u32x4::new([2, 3, 4, 5]);
assert_eq!(v1 + v2 + v3, u32x4::new([2, 5, 8, 11]));
Panics
If the provided slice is of incompatible size.
pub fn splat(value: B) -> Self
Produces a vector with all lanes set to the same value.
let v = f32x4::splat(1.2);
assert_eq!(v, f32x4::new([1.2, 1.2, 1.2, 1.2]));
pub fn gather_load<I, Idx>(input: I, idx: Idx) -> Self
where
    I: AsRef<[B]>,
    Idx: AsRef<[usize]>,
Loads the vector from a slice by indexing it.
Unlike new, this can load the vector from discontinuous parts of the slice, out of order, or load multiple lanes from the same location. This flexibility comes at the cost of lower performance (in particular, I've never seen this get auto-vectorized even though a gather instruction exists), therefore prefer new where possible.
Examples
let input = (2..100).collect::<Vec<_>>();
let vec = u32x4::gather_load(&input, [3, 3, 1, 32]);
assert_eq!(vec, u32x4::new([5, 5, 3, 34]));
It is possible to use another vector as the indices:
let indices = usizex4::new([1, 2, 3, 4]) * usizex4::splat(2);
let input = (0..10).collect::<Vec<_>>();
let vec = u32x4::gather_load(&input, indices);
assert_eq!(vec, u32x4::new([2, 4, 6, 8]));
It is possible to use another vector as the input, which allows narrowing it down or shuffling it.
let a = u32x4::new([1, 2, 3, 4]);
let b = u32x4::gather_load(a, [2, 0, 1, 3]);
assert_eq!(b, u32x4::new([3, 1, 2, 4]));
let c = u32x2::gather_load(a, [2, 2]);
assert_eq!(c, u32x2::new([3, 3]));
Panics
- If the idx slice doesn't have the same length as the vector.
- If any of the indices is out of bounds of the input.
pub fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self
where
    I: AsRef<[B]>,
    Idx: AsRef<[usize]>,
    M: AsRef<[MB]>,
    MB: Mask,
Loads enabled lanes from a slice by indexing it.
This is similar to gather_load. However, the loading of lanes is enabled by a mask; if the corresponding lane of the mask is not set, the value is taken from self. In other words, if the mask is all-true, it is semantically equivalent to gather_load, except with possibly worse performance.
Examples
let input = (0..100).collect::<Vec<_>>();
let v = u32x4::default().gather_load_masked(
&input,
[1, 4, 2, 2],
[m32::TRUE, m32::FALSE, m32::FALSE, m32::TRUE]
);
assert_eq!(v, u32x4::new([1, 0, 0, 2]));
let left = u32x2::new([1, 2]);
let right = u32x2::new([3, 4]);
let idx = usizex4::new([0, 1, 0, 1]);
let mask = m32x4::new([m32::TRUE, m32::TRUE, m32::FALSE, m32::FALSE]);
let v = u32x4::default()
.gather_load_masked(left, idx, mask)
.gather_load_masked(right, idx, !mask);
assert_eq!(v, u32x4::new([1, 2, 3, 4]));
Panics
- If the mask or the idx parameter is of different length than the vector.
- If any of the active indices are out of bounds of input.
pub fn store<O: AsMut<[B]>>(self, output: O)
Stores the content into a contiguous slice of the correct length.
This is less general than scatter_store, which allows storing to different parts of the slice.
The counterpart of this is new.
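Examples
A sketch:
let v = u32x4::new([1, 2, 3, 4]);
let mut out = [0u32; 4];
v.store(&mut out);
assert_eq!(out, [1, 2, 3, 4]);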
Panics
If the length doesn’t match.
pub fn scatter_store<O, Idx>(self, output: O, idx: Idx)
where
    O: AsMut<[B]>,
    Idx: AsRef<[usize]>,
Stores the vector into a slice by indexing it.
This is the inverse of gather_load. It takes the lanes of the vector and stores them into the slice at the given indices.
If you want to store into a contiguous slice, it is potentially faster to do it using the copy_from_slice method or store:
let mut data = vec![0; 6];
let v = u32x4::new([1, 2, 3, 4]);
data[0..4].copy_from_slice(&v[..]);
assert_eq!(&data[..], &[1, 2, 3, 4, 0, 0]);
v.store(&mut data[..4]);
assert_eq!(&data[..], &[1, 2, 3, 4, 0, 0]);
Examples
let mut data = vec![0; 6];
let v = u32x4::new([1, 2, 3, 4]);
v.scatter_store(&mut data, [2, 5, 0, 1]);
assert_eq!(&data[..], &[3, 4, 1, 0, 0, 2]);
Warning
If multiple lanes are to be stored into the same slice element, it is not specified which of them will end up being stored. Doing so is not UB, and it will always be one of them; however, which one may change between versions or even between compilation targets.
This is to allow for potentially different behaviour on different platforms.
Panics
- If the idx has a different length than the vector.
- If any of the indices are out of bounds of output.
pub fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M)
where
    O: AsMut<[B]>,
    Idx: AsRef<[usize]>,
    M: AsRef<[MB]>,
    MB: Mask,
A masked version of scatter_store.
This acts in the same way as scatter_store, except that lanes disabled by the mask are not stored anywhere.
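Examples
A sketch; only the lanes enabled by the mask are written:
let mut data = vec![0u32; 4];
let v = u32x4::new([1, 2, 3, 4]);
let mask = m32x4::new([m32::TRUE, m32::FALSE, m32::TRUE, m32::FALSE]);
v.scatter_store_masked(&mut data, [0, 1, 2, 3], mask);
assert_eq!(&data[..], &[1, 0, 3, 0]);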
Panics
- If the idx or mask has a different length than the vector.
- If any of the active indices are out of bounds of output.
pub fn blend<M, MB>(self, other: Self, mask: M) -> Self
where
    M: AsRef<[MB]>,
    MB: Mask,
Blends self and other using the mask.
Takes enabled lanes from other and keeps disabled lanes from self.
Examples
let odd = u32x4::new([1, 3, 5, 7]);
let even = u32x4::new([2, 4, 6, 8]);
let mask = m32x4::new([m32::TRUE, m32::FALSE, m32::TRUE, m32::FALSE]);
assert_eq!(odd.blend(even, mask), u32x4::new([2, 3, 6, 7]));
pub fn maximum(self, other: Self) -> Self
where
    B: PartialOrd,
A lane-wise maximum.
Examples
let a = u32x4::new([1, 4, 2, 5]);
let b = u32x4::new([2, 3, 2, 6]);
assert_eq!(a.maximum(b), u32x4::new([2, 4, 2, 6]));
pub fn minimum(self, other: Self) -> Self
where
    B: PartialOrd,
A lane-wise minimum.
Examples
let a = u32x4::new([1, 4, 2, 5]);
let b = u32x4::new([2, 3, 2, 6]);
assert_eq!(a.minimum(b), u32x4::new([1, 3, 2, 5]));
pub fn horizontal_sum(self) -> B
where
    B: Add<Output = B>,
Sums the lanes together.
The additions are done in a tree manner: (a[0] + a[1]) + (a[2] + a[3]).
Note that this is potentially a slow operation. Prefer to do as many operations as possible on whole vectors and perform the horizontal operation only at the very end.
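Examples
A sketch:
let v = u32x4::new([1, 2, 3, 4]);
assert_eq!(v.horizontal_sum(), 10);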
pub fn horizontal_product(self) -> B
where
    B: Mul<Output = B>,
Multiplies all the lanes of the vector.
The multiplications are done in a tree manner: (a[0] * a[1]) * (a[2] * a[3]).
Note that this is potentially a slow operation. Prefer to do as many operations as possible on whole vectors and perform the horizontal operation only at the very end.
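Examples
A sketch:
let v = u32x4::new([1, 2, 3, 4]);
assert_eq!(v.horizontal_product(), 24);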
pub fn lt(self, other: Self) -> <Self as Masked>::Mask
where
    B: PartialOrd,
Lane-wise <.
pub fn gt(self, other: Self) -> <Self as Masked>::Mask
where
    B: PartialOrd,
Lane-wise >.
pub fn le(self, other: Self) -> <Self as Masked>::Mask
where
    B: PartialOrd,
Lane-wise <=.
pub fn ge(self, other: Self) -> <Self as Masked>::Mask
where
    B: PartialOrd,
Lane-wise >=.
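Examples
A sketch; the mask type for u32x4 is m32x4, matching the masks used in the examples above:
let a = u32x4::new([1, 2, 3, 4]);
let b = u32x4::splat(3);
assert_eq!(a.lt(b), m32x4::new([m32::TRUE, m32::TRUE, m32::FALSE, m32::FALSE]));
assert_eq!(a.ge(b), m32x4::new([m32::FALSE, m32::FALSE, m32::TRUE, m32::TRUE]));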
impl<A, B, const S: usize> Vector<A, B, S>
where
    A: Align,
    B: Repr + Float,
pub fn mul_add(self, a: Self, b: Self) -> Self
Fused multiply-add. Computes (self * a) + b with only one rounding error, yielding a more accurate result than an unfused multiply-add.
Using mul_add can be more performant than an unfused multiply-add if the target architecture has a dedicated fma CPU instruction.
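Examples
A sketch (the values chosen are exactly representable, so the assert holds):
let a = f32x4::splat(2.0);
let b = f32x4::new([1.0, 2.0, 3.0, 4.0]);
let c = f32x4::splat(10.0);
assert_eq!(a.mul_add(b, c), f32x4::new([12.0, 14.0, 16.0, 18.0]));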
Methods from Deref<Target = [B; S]>
1.57.0 · pub fn as_slice(&self) -> &[T]
Returns a slice containing the entire array. Equivalent to &s[..].
1.57.0 · pub fn as_mut_slice(&mut self) -> &mut [T]
Returns a mutable slice containing the entire array. Equivalent to &mut s[..].
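Since the vector dereferences to an array, these can be called directly on a vector. A sketch:
let v = u32x4::new([1, 2, 3, 4]);
assert_eq!(v.as_slice(), &[1, 2, 3, 4]);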
pub fn each_ref(&self) -> [&T; N]
🔬 This is a nightly-only experimental API. (array_methods)
Borrows each element and returns an array of references with the same size as self.
Example
#![feature(array_methods)]
let floats = [3.1, 2.7, -1.0];
let float_refs: [&f64; 3] = floats.each_ref();
assert_eq!(float_refs, [&3.1, &2.7, &-1.0]);
This method is particularly useful if combined with other methods, like map. This way, you can avoid moving the original array if its elements are not Copy.
#![feature(array_methods)]
let strings = ["Ferris".to_string(), "♥".to_string(), "Rust".to_string()];
let is_ascii = strings.each_ref().map(|s| s.is_ascii());
assert_eq!(is_ascii, [true, false, true]);
// We can still access the original array: it has not been moved.
assert_eq!(strings.len(), 3);
pub fn each_mut(&mut self) -> [&mut T; N]
🔬 This is a nightly-only experimental API. (array_methods)
Borrows each element mutably and returns an array of mutable references with the same size as self.
Example
#![feature(array_methods)]
let mut floats = [3.1, 2.7, -1.0];
let float_refs: [&mut f64; 3] = floats.each_mut();
*float_refs[0] = 0.0;
assert_eq!(float_refs, [&mut 0.0, &mut 2.7, &mut -1.0]);
assert_eq!(floats, [0.0, 2.7, -1.0]);
pub fn split_array_ref<const M: usize>(&self) -> (&[T; M], &[T])
🔬 This is a nightly-only experimental API. (split_array)
Divides one array reference into two at an index.
The first will contain all indices from [0, M) (excluding the index M itself) and the second will contain all indices from [M, N) (excluding the index N itself).
Panics
Panics if M > N.
Examples
#![feature(split_array)]
let v = [1, 2, 3, 4, 5, 6];
{
let (left, right) = v.split_array_ref::<0>();
assert_eq!(left, &[]);
assert_eq!(right, &[1, 2, 3, 4, 5, 6]);
}
{
let (left, right) = v.split_array_ref::<2>();
assert_eq!(left, &[1, 2]);
assert_eq!(right, &[3, 4, 5, 6]);
}
{
let (left, right) = v.split_array_ref::<6>();
assert_eq!(left, &[1, 2, 3, 4, 5, 6]);
assert_eq!(right, &[]);
}
pub fn split_array_mut<const M: usize>(&mut self) -> (&mut [T; M], &mut [T])
🔬 This is a nightly-only experimental API. (split_array)
Divides one mutable array reference into two at an index.
The first will contain all indices from [0, M) (excluding the index M itself) and the second will contain all indices from [M, N) (excluding the index N itself).
Panics
Panics if M > N.
Examples
#![feature(split_array)]
let mut v = [1, 0, 3, 0, 5, 6];
let (left, right) = v.split_array_mut::<2>();
assert_eq!(left, &mut [1, 0][..]);
assert_eq!(right, &mut [3, 0, 5, 6]);
left[1] = 2;
right[1] = 4;
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
pub fn rsplit_array_ref<const M: usize>(&self) -> (&[T], &[T; M])
🔬 This is a nightly-only experimental API. (split_array)
Divides one array reference into two at an index from the end.
The first will contain all indices from [0, N - M) (excluding the index N - M itself) and the second will contain all indices from [N - M, N) (excluding the index N itself).
Panics
Panics if M > N.
Examples
#![feature(split_array)]
let v = [1, 2, 3, 4, 5, 6];
{
let (left, right) = v.rsplit_array_ref::<0>();
assert_eq!(left, &[1, 2, 3, 4, 5, 6]);
assert_eq!(right, &[]);
}
{
let (left, right) = v.rsplit_array_ref::<2>();
assert_eq!(left, &[1, 2, 3, 4]);
assert_eq!(right, &[5, 6]);
}
{
let (left, right) = v.rsplit_array_ref::<6>();
assert_eq!(left, &[]);
assert_eq!(right, &[1, 2, 3, 4, 5, 6]);
}
pub fn rsplit_array_mut<const M: usize>(&mut self) -> (&mut [T], &mut [T; M])
🔬 This is a nightly-only experimental API. (split_array)
Divides one mutable array reference into two at an index from the end.
The first will contain all indices from [0, N - M) (excluding the index N - M itself) and the second will contain all indices from [N - M, N) (excluding the index N itself).
Panics
Panics if M > N.
Examples
#![feature(split_array)]
let mut v = [1, 0, 3, 0, 5, 6];
let (left, right) = v.rsplit_array_mut::<4>();
assert_eq!(left, &mut [1, 0]);
assert_eq!(right, &mut [3, 0, 5, 6][..]);
left[1] = 2;
right[1] = 4;
assert_eq!(v, [1, 2, 3, 4, 5, 6]);
Trait Implementations
impl<A: Align, B: Add<Output = B> + Repr, const S: usize> Add<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: AddAssign + Repr, const S: usize> AddAssign<B> for Vector<A, B, S>
fn add_assign(&mut self, rhs: B)
Performs the += operation.

impl<A: Align, B: AddAssign + Repr, const S: usize> AddAssign<Vector<A, B, S>> for Vector<A, B, S>
fn add_assign(&mut self, rhs: Self)
Performs the += operation.

impl<A: Align, B: BitAnd<Output = B> + Repr, const S: usize> BitAnd<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: BitAndAssign + Repr, const S: usize> BitAndAssign<B> for Vector<A, B, S>
fn bitand_assign(&mut self, rhs: B)
Performs the &= operation.

impl<A: Align, B: BitAndAssign + Repr, const S: usize> BitAndAssign<Vector<A, B, S>> for Vector<A, B, S>
fn bitand_assign(&mut self, rhs: Self)
Performs the &= operation.

impl<A: Align, B: BitOr<Output = B> + Repr, const S: usize> BitOr<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: BitOrAssign + Repr, const S: usize> BitOrAssign<B> for Vector<A, B, S>
fn bitor_assign(&mut self, rhs: B)
Performs the |= operation.

impl<A: Align, B: BitOrAssign + Repr, const S: usize> BitOrAssign<Vector<A, B, S>> for Vector<A, B, S>
fn bitor_assign(&mut self, rhs: Self)
Performs the |= operation.

impl<A: Align, B: BitXor<Output = B> + Repr, const S: usize> BitXor<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: BitXorAssign + Repr, const S: usize> BitXorAssign<B> for Vector<A, B, S>
fn bitxor_assign(&mut self, rhs: B)
Performs the ^= operation.

impl<A: Align, B: BitXorAssign + Repr, const S: usize> BitXorAssign<Vector<A, B, S>> for Vector<A, B, S>
fn bitxor_assign(&mut self, rhs: Self)
Performs the ^= operation.

impl<A: Align, B: Div<Output = B> + Repr, const S: usize> Div<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: DivAssign + Repr, const S: usize> DivAssign<B> for Vector<A, B, S>
fn div_assign(&mut self, rhs: B)
Performs the /= operation.

impl<A: Align, B: DivAssign + Repr, const S: usize> DivAssign<Vector<A, B, S>> for Vector<A, B, S>
fn div_assign(&mut self, rhs: Self)
Performs the /= operation.

impl<I, A, B, const S: usize> Index<I> for Vector<A, B, S>
where
    A: Align,
    B: Repr,
    [B; S]: Index<I>,

impl<I, A, B, const S: usize> IndexMut<I> for Vector<A, B, S>
where
    A: Align,
    B: Repr,
    [B; S]: IndexMut<I>,

impl<A: Align, B: Mul<Output = B> + Repr, const S: usize> Mul<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: MulAssign + Repr, const S: usize> MulAssign<B> for Vector<A, B, S>
fn mul_assign(&mut self, rhs: B)
Performs the *= operation.

impl<A: Align, B: MulAssign + Repr, const S: usize> MulAssign<Vector<A, B, S>> for Vector<A, B, S>
fn mul_assign(&mut self, rhs: Self)
Performs the *= operation.

impl<A: Align, B: PartialEq + Repr, const S: usize> PartialEq<[B; S]> for Vector<A, B, S>

impl<A: Align, B: PartialEq + Repr, const S: usize> PartialEq<Vector<A, B, S>> for [B; S]

impl<A: Align, B: PartialEq + Repr, const S: usize> PartialEq<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: MulAssign + Repr, const S: usize> Product<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: Rem<Output = B> + Repr, const S: usize> Rem<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: RemAssign + Repr, const S: usize> RemAssign<B> for Vector<A, B, S>
fn rem_assign(&mut self, rhs: B)
Performs the %= operation.

impl<A: Align, B: RemAssign + Repr, const S: usize> RemAssign<Vector<A, B, S>> for Vector<A, B, S>
fn rem_assign(&mut self, rhs: Self)
Performs the %= operation.

impl<A: Align, B: Shl<Output = B> + Repr, const S: usize> Shl<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: ShlAssign + Repr, const S: usize> ShlAssign<B> for Vector<A, B, S>
fn shl_assign(&mut self, rhs: B)
Performs the <<= operation.

impl<A: Align, B: ShlAssign + Repr, const S: usize> ShlAssign<Vector<A, B, S>> for Vector<A, B, S>
fn shl_assign(&mut self, rhs: Self)
Performs the <<= operation.

impl<A: Align, B: Shr<Output = B> + Repr, const S: usize> Shr<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: ShrAssign + Repr, const S: usize> ShrAssign<B> for Vector<A, B, S>
fn shr_assign(&mut self, rhs: B)
Performs the >>= operation.

impl<A: Align, B: ShrAssign + Repr, const S: usize> ShrAssign<Vector<A, B, S>> for Vector<A, B, S>
fn shr_assign(&mut self, rhs: Self)
Performs the >>= operation.

impl<A: Align, B: Sub<Output = B> + Repr, const S: usize> Sub<Vector<A, B, S>> for Vector<A, B, S>

impl<A: Align, B: SubAssign + Repr, const S: usize> SubAssign<B> for Vector<A, B, S>
fn sub_assign(&mut self, rhs: B)
Performs the -= operation.

impl<A: Align, B: SubAssign + Repr, const S: usize> SubAssign<Vector<A, B, S>> for Vector<A, B, S>
fn sub_assign(&mut self, rhs: Self)
Performs the -= operation.
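All these operators act lane-wise; the Assign implementations taking a plain B presumably apply the scalar to every lane. A short sketch:
let mut v = u32x4::new([1, 2, 3, 4]);
v += u32x4::splat(10); // lane-wise vector addition
assert_eq!(v, u32x4::new([11, 12, 13, 14]));
v *= 2; // the scalar is applied to every lane
assert_eq!(v, u32x4::new([22, 24, 26, 28]));
assert_eq!(v[0], 22); // indexing through the Index implementation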