Trait slipstream::Vector
A trait with common methods of the vector types.
The vector types (like u32x4) don't have inherent methods of their own. They implement several traits (mostly arithmetic, bit operations, dereferencing to slices and indexing), and the further methods shared by all vector types live on this trait.
It can also be used to describe multiple vector types at once ‒ for example, Vector<Base = u32> describes all the vectors that have u32 as their base type, be it u32x4 or u32x16.
Examples
let a = i32x4::new([1, -2, 3, -4]);
let b = -a; // [-1, 2, -3, 4]
let positive = a.ge(i32x4::splat(1)); // Lane-wise a >= 1
// Will take from a where positive is true, from b otherwise
let abs = b.blend(a, positive);
assert_eq!(abs, i32x4::new([1, 2, 3, 4]));
Associated Types
type Base: Repr
Type of one lane of the vector. It's the u32 for u32x4.
type Lanes: ArrayLength<Self::Base>
type Mask: AsRef<[<Self::Base as Repr>::Mask]>
The mask type for this vector.
Masks are vector types with boolean-like base types. They are produced by lane-wise comparisons like eq and used to enable subsets of lanes for certain operations, like blend and gather_load_masked.
This associated type describes the native mask for the given vector ‒ for u32x4 it would be m32x4. This is the type the comparisons produce. While the selection methods accept any mask type with the right number of lanes, using this type as their input is expected to yield the best performance.
Associated Constants
const LANES: usize
Number of lanes of the vector.
This is similar to Lanes, but as a constant instead of a type.
Required methods
unsafe fn new_unchecked(input: *const Self::Base) -> Self
Load the vector without doing bounds checks.
Safety
The pointed-to memory must be valid for Self::LANES consecutive elements ‒ e.g. it must contain a full array of the base type.
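This contract can be illustrated with a plain-Rust sketch (not the crate's implementation; `load_unchecked` and `LANES` here are hypothetical scalar stand-ins): the pointer must be readable for a full array of base-type elements.

```rust
// Hypothetical scalar stand-in for `new_unchecked`: the caller must
// guarantee that `input` points at LANES consecutive, initialized elements.
const LANES: usize = 4;

unsafe fn load_unchecked(input: *const u32) -> [u32; LANES] {
    // Reads a full array of the base type from the pointer.
    std::ptr::read(input as *const [u32; LANES])
}

fn main() {
    let data = [1u32, 2, 3, 4, 5, 6];
    // OK: four consecutive elements exist starting at index 2.
    let v = unsafe { load_unchecked(data[2..].as_ptr()) };
    assert_eq!(v, [3, 4, 5, 6]);
}
```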
fn splat(value: Self::Base) -> Self
Produces a vector of all lanes set to the same value.
let v = f32x4::splat(1.2);
assert_eq!(v, f32x4::new([1.2, 1.2, 1.2, 1.2]));
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where
    I: AsRef<[Self::Base]>,
    Idx: AsRef<[usize]>,
Loads the vector from a slice by indexing it.
Unlike new, this can load the vector from discontinuous parts of the slice, out of order, or load multiple lanes from the same location. This flexibility comes at the cost of lower performance (in particular, I've never seen this get auto-vectorized even though a gather instruction exists), therefore prefer new where possible.
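The lane-wise semantics can be sketched with a scalar loop (`gather_load_model` is a hypothetical stand-in, not the crate API):

```rust
// Hypothetical scalar model of `gather_load`: output lane i is input[idx[i]].
// Indices may repeat or appear out of order.
fn gather_load_model(input: &[u32], idx: &[usize]) -> Vec<u32> {
    idx.iter().map(|&i| input[i]).collect()
}

fn main() {
    let input: Vec<u32> = (2..100).collect();
    // Mirrors u32x4::gather_load(&input, [3, 3, 1, 32]).
    assert_eq!(gather_load_model(&input, &[3, 3, 1, 32]), vec![5, 5, 3, 34]);
}
```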
Examples
let input = (2..100).collect::<Vec<_>>();
let vec = u32x4::gather_load(&input, [3, 3, 1, 32]);
assert_eq!(vec, u32x4::new([5, 5, 3, 34]));
It is possible to use another vector as the indices:
let indices = usizex4::new([1, 2, 3, 4]) * usizex4::splat(2);
let input = (0..10).collect::<Vec<_>>();
let vec = u32x4::gather_load(&input, indices);
assert_eq!(vec, u32x4::new([2, 4, 6, 8]));
It is possible to use another vector as the input, allowing it to be narrowed down or shuffled.
let a = u32x4::new([1, 2, 3, 4]);
let b = u32x4::gather_load(a, [2, 0, 1, 3]);
assert_eq!(b, u32x4::new([3, 1, 2, 4]));
let c = u32x2::gather_load(a, [2, 2]);
assert_eq!(c, u32x2::new([3, 3]));
Panics
- If the idx slice doesn't have the same length as the vector.
- If any of the indices is out of bounds of input.
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where
    I: AsRef<[Self::Base]>,
    Idx: AsRef<[usize]>,
    M: AsRef<[MB]>,
    MB: Mask,
Loads enabled lanes from a slice by indexing it.
This is similar to gather_load, but the loading of each lane is enabled by a mask. If the corresponding lane of the mask is not set, the value is taken from self. In other words, if the mask is all-true, it is semantically equivalent to gather_load, except with possibly worse performance.
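The select-or-keep behaviour can be sketched as a scalar reference model (`gather_load_masked_model` is hypothetical, not the crate API):

```rust
// Hypothetical scalar model of `gather_load_masked`: an enabled lane loads
// input[idx[i]], a disabled lane keeps the value already in `current` (self).
fn gather_load_masked_model(
    current: &[u32],
    input: &[u32],
    idx: &[usize],
    mask: &[bool],
) -> Vec<u32> {
    current
        .iter()
        .zip(idx)
        .zip(mask)
        .map(|((&cur, &i), &m)| if m { input[i] } else { cur })
        .collect()
}

fn main() {
    let input: Vec<u32> = (0..100).collect();
    // Mirrors the first example below.
    let v = gather_load_masked_model(&[0; 4], &input, &[1, 4, 2, 2], &[true, false, false, true]);
    assert_eq!(v, vec![1, 0, 0, 2]);
}
```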
Examples
let input = (0..100).collect::<Vec<_>>();
let v = u32x4::default().gather_load_masked(
    &input,
    [1, 4, 2, 2],
    [m32::TRUE, m32::FALSE, m32::FALSE, m32::TRUE]
);
assert_eq!(v, u32x4::new([1, 0, 0, 2]));
let left = u32x2::new([1, 2]);
let right = u32x2::new([3, 4]);
let idx = usizex4::new([0, 1, 0, 1]);
let mask = m32x4::new([m32::TRUE, m32::TRUE, m32::FALSE, m32::FALSE]);
let v = u32x4::default()
    .gather_load_masked(left, idx, mask)
    .gather_load_masked(right, idx, !mask);
assert_eq!(v, u32x4::new([1, 2, 3, 4]));
Panics
- If the mask or the idx parameter is of a different length than the vector.
- If any of the active indices are out of bounds of input.
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where
    O: AsMut<[Self::Base]>,
    Idx: AsRef<[usize]>,
Stores the vector into a slice by indexing it.
This is the inverse of gather_load. It takes the lanes of the vector and stores them into the slice at the given indices.
If you want to store into a contiguous slice, it is potentially faster to use the copy_from_slice method:
let mut data = vec![0; 6];
let v = u32x4::new([1, 2, 3, 4]);
data[0..4].copy_from_slice(&v);
assert_eq!(&data[..], &[1, 2, 3, 4, 0, 0]);
Examples
let mut data = vec![0; 6];
let v = u32x4::new([1, 2, 3, 4]);
v.scatter_store(&mut data, [2, 5, 0, 1]);
assert_eq!(&data[..], &[3, 4, 1, 0, 0, 2]);
Warning
If multiple lanes are to be stored into the same slice element, it is not specified which of them ends up being stored. Doing so is not UB, and exactly one of them will always be stored, but which one may change between versions or even between compilation targets. This allows for potentially different behaviour on different platforms.
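A scalar reference model makes the collision behaviour concrete (`scatter_store_model` is hypothetical, not the crate API; the naive loop happens to be last-write-wins, which the trait does not promise):

```rust
// Hypothetical scalar model of `scatter_store`. This naive loop is
// last-write-wins for duplicate indices; the real trait only guarantees
// that one of the colliding lanes ends up stored.
fn scatter_store_model(lanes: &[u32], output: &mut [u32], idx: &[usize]) {
    for (&lane, &i) in lanes.iter().zip(idx) {
        output[i] = lane;
    }
}

fn main() {
    let mut out = vec![0u32; 4];
    scatter_store_model(&[10, 20], &mut out, &[1, 1]);
    // In this particular model the later lane wins; slipstream leaves the
    // choice unspecified, so don't rely on either outcome.
    assert_eq!(out, vec![0, 20, 0, 0]);
}
```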
Panics
- If the idx has a different length than the vector.
- If any of the indices are out of bounds of output.
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where
    O: AsMut<[Self::Base]>,
    Idx: AsRef<[usize]>,
    M: AsRef<[MB]>,
    MB: Mask,
A masked version of scatter_store.
This acts in the same way as scatter_store, except that lanes disabled by the mask are not stored anywhere.
Panics
- If the idx or mask has a different length than the vector.
- If any of the active indices are out of bounds of output.
fn lt(self, other: Self) -> Self::Mask where
    Self::Base: PartialOrd,
Lane-wise <.
fn gt(self, other: Self) -> Self::Mask where
    Self::Base: PartialOrd,
Lane-wise >.
fn le(self, other: Self) -> Self::Mask where
    Self::Base: PartialOrd,
Lane-wise <=.
fn ge(self, other: Self) -> Self::Mask where
    Self::Base: PartialOrd,
Lane-wise >=.
fn eq(self, other: Self) -> Self::Mask where
    Self::Base: PartialEq,
Lane-wise ==.
fn blend<M, MB>(self, other: Self, mask: M) -> Self where
    M: AsRef<[MB]>,
    MB: Mask,
Blends self and other using mask.
Takes enabled lanes from other and keeps disabled lanes from self.
Examples
let odd = u32x4::new([1, 3, 5, 7]);
let even = u32x4::new([2, 4, 6, 8]);
let mask = m32x4::new([m32::TRUE, m32::FALSE, m32::TRUE, m32::FALSE]);
assert_eq!(odd.blend(even, mask), u32x4::new([2, 3, 6, 7]));
fn horizontal_sum(self) -> Self::Base where
    Self::Base: Add<Output = Self::Base>,
Sums the lanes together.
The additions are done in a tree manner: (a[0] + a[1]) + (a[2] + a[3]).
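The documented grouping can be written out directly (a plain-Rust sketch; `tree_sum` is hypothetical). For integers the grouping does not change the result, but for floating point the tree order may differ slightly from a strict left-to-right fold, which is why the order is documented.

```rust
// Sketch of the documented tree reduction for four lanes:
// (a[0] + a[1]) + (a[2] + a[3]).
fn tree_sum(a: [u32; 4]) -> u32 {
    (a[0] + a[1]) + (a[2] + a[3])
}

fn main() {
    assert_eq!(tree_sum([1, 2, 3, 4]), 10);
}
```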
fn horizontal_product(self) -> Self::Base where
    Self::Base: Mul<Output = Self::Base>,
Multiplies all the lanes of the vector.
The multiplications are done in a tree manner: (a[0] * a[1]) * (a[2] * a[3]).
Provided methods
fn new<I>(input: I) -> Self where
    I: AsRef<[Self::Base]>,
Loads the vector from a correctly sized slice.
This loads the vector from a correctly sized slice, or anything that can be converted to one ‒ specifically, fixed-size arrays and other vectors work.
Example
let vec = (0..10).collect::<Vec<_>>();
let v1 = u32x4::new(&vec[0..4]);
let v2 = u32x4::new(v1);
let v3 = u32x4::new([2, 3, 4, 5]);
assert_eq!(v1 + v2 + v3, u32x4::new([2, 5, 8, 11]));
Panics
If the provided slice is of incompatible size.
fn maximum(self, other: Self) -> Self where
    Self::Base: PartialOrd,
A lane-wise maximum.
Examples
let a = u32x4::new([1, 4, 2, 5]);
let b = u32x4::new([2, 3, 2, 6]);
assert_eq!(a.maximum(b), u32x4::new([2, 4, 2, 6]));
fn minimum(self, other: Self) -> Self where
    Self::Base: PartialOrd,
A lane-wise minimum.
Examples
let a = u32x4::new([1, 4, 2, 5]);
let b = u32x4::new([2, 3, 2, 6]);
assert_eq!(a.minimum(b), u32x4::new([1, 3, 2, 5]));
Implementors
impl<B, S> Vector for Packed1<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed1<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
impl<B, S> Vector for Packed2<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed2<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
impl<B, S> Vector for Packed4<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed4<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
impl<B, S> Vector for Packed8<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed8<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
impl<B, S> Vector for Packed16<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed16<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
impl<B, S> Vector for Packed32<B, S> where
    B: Repr + 'static,
    S: ArrayLength<B> + ArrayLength<B::Mask> + 'static,
    <S as ArrayLength<B>>::ArrayType: Copy,
    <S as ArrayLength<B::Mask>>::ArrayType: Copy,
type Base = B
type Lanes = S
type Mask = Packed32<B::Mask, S>
unsafe fn new_unchecked(input: *const B) -> Self
fn splat(value: B) -> Self
fn gather_load<I, Idx>(input: I, idx: Idx) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>
fn gather_load_masked<I, Idx, M, MB>(self, input: I, idx: Idx, mask: M) -> Self where I: AsRef<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn scatter_store<O, Idx>(self, output: O, idx: Idx) where O: AsMut<[B]>, Idx: AsRef<[usize]>
fn scatter_store_masked<O, Idx, M, MB>(self, output: O, idx: Idx, mask: M) where O: AsMut<[B]>, Idx: AsRef<[usize]>, M: AsRef<[MB]>, MB: Mask
fn blend<M, MB>(self, other: Self, mask: M) -> Self where M: AsRef<[MB]>, MB: Mask
fn horizontal_sum(self) -> B where B: Add<Output = B>
fn horizontal_product(self) -> B where B: Mul<Output = B>
fn eq(self, other: Self) -> Self::Mask where Self::Base: PartialEq
fn lt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn gt(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn le(self, other: Self) -> Self::Mask where Self::Base: PartialOrd
fn ge(self, other: Self) -> Self::Mask where Self::Base: PartialOrd