
Struct X64V3Token 

pub struct X64V3Token { /* private fields */ }

Proof that AVX2 + FMA + BMI1/2 + F16C + LZCNT are available (x86-64-v3 level).

x86-64-v3 implies all of v2 plus: AVX, AVX2, FMA, BMI1, BMI2, F16C, LZCNT, MOVBE. This is the Haswell (2013) / Zen 1 (2017) baseline.

This is the most commonly targeted level for high-performance SIMD code.
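The token exists precisely so that holding one proves the runtime check already passed. As a rough sketch of what that check involves — using only std's feature detection, with `x86_64_v3_available` as a hypothetical free function standing in for what `summon()` must verify internally:

```rust
// Sketch: the runtime check an x86-64-v3 token encapsulates. The feature
// list mirrors the v3 requirements above (AVX/AVX2, FMA, BMI1/2, F16C,
// LZCNT, MOVBE); `is_x86_feature_detected!` is std's CPUID-based probe.
#[cfg(target_arch = "x86_64")]
fn x86_64_v3_available() -> bool {
    is_x86_feature_detected!("avx")
        && is_x86_feature_detected!("avx2")
        && is_x86_feature_detected!("fma")
        && is_x86_feature_detected!("bmi1")
        && is_x86_feature_detected!("bmi2")
        && is_x86_feature_detected!("f16c")
        && is_x86_feature_detected!("lzcnt")
        && is_x86_feature_detected!("movbe")
}

#[cfg(not(target_arch = "x86_64"))]
fn x86_64_v3_available() -> bool {
    false // the x86-64 microarchitecture levels are x86-64-only
}

fn main() {
    println!("x86-64-v3 available: {}", x86_64_v3_available());
}
```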

Implementations§

Source§

impl X64V3Token

Source

pub fn v1(self) -> X64V1Token

Extract an X64V1Token — guaranteed because x86-64-v3 implies x86-64-v1.

Zero-cost: compiles away entirely.

Source

pub fn v2(self) -> X64V2Token

Extract an X64V2Token — guaranteed because x86-64-v3 implies x86-64-v2.

Zero-cost: compiles away entirely.

Source§

impl X64V3Token

Source

pub fn dangerously_disable_token_process_wide(disabled: bool) -> Result<(), CompileTimeGuaranteedError>

Disable this token process-wide for testing and benchmarking.

When disabled, summon() will return None even if the CPU supports the required features.

Returns Err when all required features are compile-time enabled (e.g., via -Ctarget-cpu=native), since the compiler has already elided the runtime checks.

Cascading: disabling this token also disables its descendant tokens:

  • X64V3CryptoToken
  • X64V4Token
  • X64V4xToken
  • Avx512Fp16Token
Source

pub fn manually_disabled() -> Result<bool, CompileTimeGuaranteedError>

Check if this token has been manually disabled process-wide.

Returns Err when all required features are compile-time enabled.

Trait Implementations§

Source§

impl Clone for X64V3Token

Source§

fn clone(&self) -> X64V3Token

Returns a duplicate of the value.
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.
Source§

impl Debug for X64V3Token

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result<(), Error>

Formats the value using the given formatter.
Source§

impl F32x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256; 2]

Platform-native SIMD representation.
Source§

fn splat(v: f32) -> [__m256; 2]

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> [__m256; 2]

All lanes zero.
Source§

fn load(data: &[f32; 16]) -> [__m256; 2]

Load from an aligned array.
Source§

fn from_array(arr: [f32; 16]) -> [__m256; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256; 2], out: &mut [f32; 16])

Store to array.
Source§

fn to_array(repr: [__m256; 2]) -> [f32; 16]

Convert to array.
Source§

fn add(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise addition.
Source§

fn sub(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise subtraction.
Source§

fn mul(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise multiplication.
Source§

fn div(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise division.
Source§

fn neg(a: [__m256; 2]) -> [__m256; 2]

Lane-wise negation.
Source§

fn min(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise maximum.
Source§

fn sqrt(a: [__m256; 2]) -> [__m256; 2]

Square root.
Source§

fn abs(a: [__m256; 2]) -> [__m256; 2]

Absolute value.
Source§

fn floor(a: [__m256; 2]) -> [__m256; 2]

Round toward negative infinity.
Source§

fn ceil(a: [__m256; 2]) -> [__m256; 2]

Round toward positive infinity.
Source§

fn round(a: [__m256; 2]) -> [__m256; 2]

Round to nearest integer.
Source§

fn mul_add(a: [__m256; 2], b: [__m256; 2], c: [__m256; 2]) -> [__m256; 2]

Fused multiply-add: a * b + c.
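A fused multiply-add rounds once, where a separate multiply and add rounds twice. A one-lane illustration using std's `f32::mul_add` (which shares the single-rounding semantics, though it is not this crate's API):

```rust
fn main() {
    // Compute x² − 1 with x just above 1. The exact product 1 + 2⁻¹¹ + 2⁻²⁴
    // does not fit in an f32, so a separate mul then sub loses the 2⁻²⁴ term;
    // the fused form keeps it because the subtraction sees the full product.
    let x = 1.0f32 + 2.0f32.powi(-12);
    let fused = x.mul_add(x, -1.0); // one rounding (FMA semantics)
    let unfused = x * x - 1.0;      // two roundings
    assert!(fused != unfused);      // the low-order term survives only in FMA
    println!("fused = {fused:e}, unfused = {unfused:e}");
}
```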
Source§

fn mul_sub(a: [__m256; 2], b: [__m256; 2], c: [__m256; 2]) -> [__m256; 2]

Fused multiply-subtract: a * b - c.
Source§

fn reduce_add(a: [__m256; 2]) -> f32

Sum all 16 lanes.
Source§

fn reduce_min(a: [__m256; 2]) -> f32

Minimum across all 16 lanes.
Source§

fn reduce_max(a: [__m256; 2]) -> f32

Maximum across all 16 lanes.
Source§

fn rcp_approx(a: [__m256; 2]) -> [__m256; 2]

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: [__m256; 2]) -> [__m256; 2]

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn simd_eq(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256; 2], if_true: [__m256; 2], if_false: [__m256; 2]) -> [__m256; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
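Blend is pure bit selection, which is why the mask must be all-1s or all-0s per lane — exactly what the `simd_*` comparisons produce. A one-lane scalar model (`blend_lane` is an illustrative helper, not part of this crate):

```rust
// One-lane model of SIMD blend: (if_true & mask) | (if_false & !mask).
fn blend_lane(mask: u32, if_true: f32, if_false: f32) -> f32 {
    f32::from_bits((if_true.to_bits() & mask) | (if_false.to_bits() & !mask))
}

fn main() {
    assert_eq!(blend_lane(u32::MAX, 1.5, 9.0), 1.5); // all-1s mask → if_true
    assert_eq!(blend_lane(0, 1.5, 9.0), 9.0);        // all-0s mask → if_false
    // A partial mask mixes bits from both operands — garbage, not a select:
    println!("{}", blend_lane(0x0000_FFFF, 1.5, 9.0));
}
```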
Source§

fn not(a: [__m256; 2]) -> [__m256; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256; 2], b: [__m256; 2]) -> [__m256; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
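The precise recip/rsqrt variants refine the ~12-bit hardware approximation with a Newton-Raphson step, roughly doubling the number of correct bits. A scalar sketch of the reciprocal refinement, where the starting value stands in for an `rcp_approx` result and `refine_recip` is an illustrative helper:

```rust
// One Newton-Raphson step for 1/a: x₁ = x₀·(2 − a·x₀).
fn refine_recip(a: f32, x0: f32) -> f32 {
    x0 * (2.0 - a * x0)
}

fn main() {
    let a = 3.0f32;
    let rough = 0.333f32; // stand-in for a ~12-bit rcp_approx output
    let refined = refine_recip(a, rough);
    let exact = 1.0 / a;
    // Convergence is quadratic: the error roughly squares each step.
    assert!((refined - exact).abs() < (rough - exact).abs() / 100.0);
    println!(
        "rough err {:e}, refined err {:e}",
        (rough - exact).abs(),
        (refined - exact).abs()
    );
}
```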
Source§

impl F32x16Convert for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_f32_to_i32(a: [__m256; 2]) -> [__m256i; 2]

Bitcast f32x16 to i32x16 (reinterpret bits, no conversion).
Source§

fn bitcast_i32_to_f32(a: [__m256i; 2]) -> [__m256; 2]

Bitcast i32x16 to f32x16 (reinterpret bits, no conversion).
Source§

fn convert_f32_to_i32(a: [__m256; 2]) -> [__m256i; 2]

Convert f32x16 to i32x16 with truncation toward zero.
Source§

fn convert_f32_to_i32_round(a: [__m256; 2]) -> [__m256i; 2]

Convert f32x16 to i32x16 with rounding to nearest.
Source§

fn convert_i32_to_f32(a: [__m256i; 2]) -> [__m256; 2]

Convert i32x16 to f32x16.
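The bitcast/convert distinction above mirrors std's `to_bits`/`from_bits` versus `as` casts: a bitcast reinterprets the same bytes, a conversion produces a new value. A scalar illustration (std only; the comment on rounding behavior is stated per the method docs above):

```rust
fn main() {
    // Bitcast: reinterpret the same 32 bits as an integer — no value change.
    assert_eq!(1.5f32.to_bits(), 0x3FC0_0000);
    // Conversion with truncation toward zero (as convert_f32_to_i32 does):
    assert_eq!(1.7f32 as i32, 1);
    assert_eq!(-1.7f32 as i32, -1);
    // Conversion with rounding to nearest (as convert_f32_to_i32_round does).
    // Note std's round() breaks ties away from zero, whereas x86's default
    // rounding mode breaks ties to even; they agree away from exact halves:
    assert_eq!(1.7f32.round() as i32, 2);
}
```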
Source§

impl F32x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128

Platform-native SIMD representation.
Source§

fn splat(v: f32) -> __m128

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m128

All lanes zero.
Source§

fn load(data: &[f32; 4]) -> __m128

Load from an aligned array.
Source§

fn from_array(arr: [f32; 4]) -> __m128

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128, out: &mut [f32; 4])

Store to array.
Source§

fn to_array(repr: __m128) -> [f32; 4]

Convert to array.
Source§

fn add(a: __m128, b: __m128) -> __m128

Lane-wise addition.
Source§

fn sub(a: __m128, b: __m128) -> __m128

Lane-wise subtraction.
Source§

fn mul(a: __m128, b: __m128) -> __m128

Lane-wise multiplication.
Source§

fn div(a: __m128, b: __m128) -> __m128

Lane-wise division.
Source§

fn neg(a: __m128) -> __m128

Lane-wise negation.
Source§

fn min(a: __m128, b: __m128) -> __m128

Lane-wise minimum.
Source§

fn max(a: __m128, b: __m128) -> __m128

Lane-wise maximum.
Source§

fn sqrt(a: __m128) -> __m128

Square root.
Source§

fn abs(a: __m128) -> __m128

Absolute value.
Source§

fn floor(a: __m128) -> __m128

Round toward negative infinity.
Source§

fn ceil(a: __m128) -> __m128

Round toward positive infinity.
Source§

fn round(a: __m128) -> __m128

Round to nearest integer.
Source§

fn mul_add(a: __m128, b: __m128, c: __m128) -> __m128

Fused multiply-add: a * b + c.
Source§

fn mul_sub(a: __m128, b: __m128, c: __m128) -> __m128

Fused multiply-subtract: a * b - c.
Source§

fn simd_eq(a: __m128, b: __m128) -> __m128

Lane-wise equality.
Source§

fn simd_ne(a: __m128, b: __m128) -> __m128

Lane-wise inequality.
Source§

fn simd_lt(a: __m128, b: __m128) -> __m128

Lane-wise less-than.
Source§

fn simd_le(a: __m128, b: __m128) -> __m128

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128, b: __m128) -> __m128

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128, b: __m128) -> __m128

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128, if_true: __m128, if_false: __m128) -> __m128

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128) -> f32

Sum all 4 lanes.
Source§

fn reduce_min(a: __m128) -> f32

Minimum across all 4 lanes.
Source§

fn reduce_max(a: __m128) -> f32

Maximum across all 4 lanes.
Source§

fn rcp_approx(a: __m128) -> __m128

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: __m128) -> __m128

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn not(a: __m128) -> __m128

Bitwise NOT.
Source§

fn bitand(a: __m128, b: __m128) -> __m128

Bitwise AND.
Source§

fn bitor(a: __m128, b: __m128) -> __m128

Bitwise OR.
Source§

fn bitxor(a: __m128, b: __m128) -> __m128

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
Source§

impl F32x4Convert for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_f32_to_i32(a: __m128) -> __m128i

Bitcast f32x4 to i32x4 (reinterpret bits, no conversion).
Source§

fn bitcast_i32_to_f32(a: __m128i) -> __m128

Bitcast i32x4 to f32x4 (reinterpret bits, no conversion).
Source§

fn convert_f32_to_i32(a: __m128) -> __m128i

Convert f32x4 to i32x4 with truncation toward zero.
Source§

fn convert_f32_to_i32_round(a: __m128) -> __m128i

Convert f32x4 to i32x4 with rounding to nearest.
Source§

fn convert_i32_to_f32(a: __m128i) -> __m128

Convert i32x4 to f32x4.
Source§

impl F32x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256

Platform-native SIMD representation.
Source§

fn splat(v: f32) -> __m256

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> __m256

All lanes zero.
Source§

fn load(data: &[f32; 8]) -> __m256

Load from an aligned array.
Source§

fn from_array(arr: [f32; 8]) -> __m256

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256, out: &mut [f32; 8])

Store to array.
Source§

fn to_array(repr: __m256) -> [f32; 8]

Convert to array.
Source§

fn add(a: __m256, b: __m256) -> __m256

Lane-wise addition.
Source§

fn sub(a: __m256, b: __m256) -> __m256

Lane-wise subtraction.
Source§

fn mul(a: __m256, b: __m256) -> __m256

Lane-wise multiplication.
Source§

fn div(a: __m256, b: __m256) -> __m256

Lane-wise division.
Source§

fn neg(a: __m256) -> __m256

Lane-wise negation.
Source§

fn min(a: __m256, b: __m256) -> __m256

Lane-wise minimum.
Source§

fn max(a: __m256, b: __m256) -> __m256

Lane-wise maximum.
Source§

fn sqrt(a: __m256) -> __m256

Square root.
Source§

fn abs(a: __m256) -> __m256

Absolute value.
Source§

fn floor(a: __m256) -> __m256

Round toward negative infinity.
Source§

fn ceil(a: __m256) -> __m256

Round toward positive infinity.
Source§

fn round(a: __m256) -> __m256

Round to nearest integer.
Source§

fn mul_add(a: __m256, b: __m256, c: __m256) -> __m256

Fused multiply-add: a * b + c.
Source§

fn mul_sub(a: __m256, b: __m256, c: __m256) -> __m256

Fused multiply-subtract: a * b - c.
Source§

fn simd_eq(a: __m256, b: __m256) -> __m256

Lane-wise equality.
Source§

fn simd_ne(a: __m256, b: __m256) -> __m256

Lane-wise inequality.
Source§

fn simd_lt(a: __m256, b: __m256) -> __m256

Lane-wise less-than.
Source§

fn simd_le(a: __m256, b: __m256) -> __m256

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256, b: __m256) -> __m256

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256, b: __m256) -> __m256

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256, if_true: __m256, if_false: __m256) -> __m256

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256) -> f32

Sum all 8 lanes.
Source§

fn reduce_min(a: __m256) -> f32

Minimum across all 8 lanes.
Source§

fn reduce_max(a: __m256) -> f32

Maximum across all 8 lanes.
Source§

fn rcp_approx(a: __m256) -> __m256

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: __m256) -> __m256

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn not(a: __m256) -> __m256

Bitwise NOT.
Source§

fn bitand(a: __m256, b: __m256) -> __m256

Bitwise AND.
Source§

fn bitor(a: __m256, b: __m256) -> __m256

Bitwise OR.
Source§

fn bitxor(a: __m256, b: __m256) -> __m256

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
Source§

impl F32x8Convert for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_f32_to_i32(a: __m256) -> __m256i

Bitcast f32x8 to i32x8 (reinterpret bits, no conversion).
Source§

fn bitcast_i32_to_f32(a: __m256i) -> __m256

Bitcast i32x8 to f32x8 (reinterpret bits, no conversion).
Source§

fn convert_f32_to_i32(a: __m256) -> __m256i

Convert f32x8 to i32x8 with truncation toward zero.
Source§

fn convert_f32_to_i32_round(a: __m256) -> __m256i

Convert f32x8 to i32x8 with rounding to nearest.
Source§

fn convert_i32_to_f32(a: __m256i) -> __m256

Convert i32x8 to f32x8.
Source§

impl F64x2Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128d

Platform-native SIMD representation.
Source§

fn splat(v: f64) -> __m128d

Broadcast scalar to both lanes.
Source§

fn zero() -> __m128d

All lanes zero.
Source§

fn load(data: &[f64; 2]) -> __m128d

Load from an aligned array.
Source§

fn from_array(arr: [f64; 2]) -> __m128d

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128d, out: &mut [f64; 2])

Store to array.
Source§

fn to_array(repr: __m128d) -> [f64; 2]

Convert to array.
Source§

fn add(a: __m128d, b: __m128d) -> __m128d

Lane-wise addition.
Source§

fn sub(a: __m128d, b: __m128d) -> __m128d

Lane-wise subtraction.
Source§

fn mul(a: __m128d, b: __m128d) -> __m128d

Lane-wise multiplication.
Source§

fn div(a: __m128d, b: __m128d) -> __m128d

Lane-wise division.
Source§

fn neg(a: __m128d) -> __m128d

Lane-wise negation.
Source§

fn min(a: __m128d, b: __m128d) -> __m128d

Lane-wise minimum.
Source§

fn max(a: __m128d, b: __m128d) -> __m128d

Lane-wise maximum.
Source§

fn sqrt(a: __m128d) -> __m128d

Square root.
Source§

fn abs(a: __m128d) -> __m128d

Absolute value.
Source§

fn floor(a: __m128d) -> __m128d

Round toward negative infinity.
Source§

fn ceil(a: __m128d) -> __m128d

Round toward positive infinity.
Source§

fn round(a: __m128d) -> __m128d

Round to nearest integer.
Source§

fn mul_add(a: __m128d, b: __m128d, c: __m128d) -> __m128d

Fused multiply-add: a * b + c.
Source§

fn mul_sub(a: __m128d, b: __m128d, c: __m128d) -> __m128d

Fused multiply-subtract: a * b - c.
Source§

fn simd_eq(a: __m128d, b: __m128d) -> __m128d

Lane-wise equality.
Source§

fn simd_ne(a: __m128d, b: __m128d) -> __m128d

Lane-wise inequality.
Source§

fn simd_lt(a: __m128d, b: __m128d) -> __m128d

Lane-wise less-than.
Source§

fn simd_le(a: __m128d, b: __m128d) -> __m128d

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128d, b: __m128d) -> __m128d

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128d, b: __m128d) -> __m128d

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128d, if_true: __m128d, if_false: __m128d) -> __m128d

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128d) -> f64

Sum both lanes.
Source§

fn reduce_min(a: __m128d) -> f64

Minimum across both lanes.
Source§

fn reduce_max(a: __m128d) -> f64

Maximum across both lanes.
Source§

fn not(a: __m128d) -> __m128d

Bitwise NOT.
Source§

fn bitand(a: __m128d, b: __m128d) -> __m128d

Bitwise AND.
Source§

fn bitor(a: __m128d, b: __m128d) -> __m128d

Bitwise OR.
Source§

fn bitxor(a: __m128d, b: __m128d) -> __m128d

Bitwise XOR.
Source§

fn rcp_approx(a: Self::Repr) -> Self::Repr

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: Self::Repr) -> Self::Repr

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
Source§

impl F64x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256d

Platform-native SIMD representation.
Source§

fn splat(v: f64) -> __m256d

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m256d

All lanes zero.
Source§

fn load(data: &[f64; 4]) -> __m256d

Load from an aligned array.
Source§

fn from_array(arr: [f64; 4]) -> __m256d

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256d, out: &mut [f64; 4])

Store to array.
Source§

fn to_array(repr: __m256d) -> [f64; 4]

Convert to array.
Source§

fn add(a: __m256d, b: __m256d) -> __m256d

Lane-wise addition.
Source§

fn sub(a: __m256d, b: __m256d) -> __m256d

Lane-wise subtraction.
Source§

fn mul(a: __m256d, b: __m256d) -> __m256d

Lane-wise multiplication.
Source§

fn div(a: __m256d, b: __m256d) -> __m256d

Lane-wise division.
Source§

fn neg(a: __m256d) -> __m256d

Lane-wise negation.
Source§

fn min(a: __m256d, b: __m256d) -> __m256d

Lane-wise minimum.
Source§

fn max(a: __m256d, b: __m256d) -> __m256d

Lane-wise maximum.
Source§

fn sqrt(a: __m256d) -> __m256d

Square root.
Source§

fn abs(a: __m256d) -> __m256d

Absolute value.
Source§

fn floor(a: __m256d) -> __m256d

Round toward negative infinity.
Source§

fn ceil(a: __m256d) -> __m256d

Round toward positive infinity.
Source§

fn round(a: __m256d) -> __m256d

Round to nearest integer.
Source§

fn mul_add(a: __m256d, b: __m256d, c: __m256d) -> __m256d

Fused multiply-add: a * b + c.
Source§

fn mul_sub(a: __m256d, b: __m256d, c: __m256d) -> __m256d

Fused multiply-subtract: a * b - c.
Source§

fn simd_eq(a: __m256d, b: __m256d) -> __m256d

Lane-wise equality.
Source§

fn simd_ne(a: __m256d, b: __m256d) -> __m256d

Lane-wise inequality.
Source§

fn simd_lt(a: __m256d, b: __m256d) -> __m256d

Lane-wise less-than.
Source§

fn simd_le(a: __m256d, b: __m256d) -> __m256d

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256d, b: __m256d) -> __m256d

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256d, b: __m256d) -> __m256d

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256d, if_true: __m256d, if_false: __m256d) -> __m256d

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256d) -> f64

Sum all 4 lanes.
Source§

fn reduce_min(a: __m256d) -> f64

Minimum across all 4 lanes.
Source§

fn reduce_max(a: __m256d) -> f64

Maximum across all 4 lanes.
Source§

fn not(a: __m256d) -> __m256d

Bitwise NOT.
Source§

fn bitand(a: __m256d, b: __m256d) -> __m256d

Bitwise AND.
Source§

fn bitor(a: __m256d, b: __m256d) -> __m256d

Bitwise OR.
Source§

fn bitxor(a: __m256d, b: __m256d) -> __m256d

Bitwise XOR.
Source§

fn rcp_approx(a: Self::Repr) -> Self::Repr

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: Self::Repr) -> Self::Repr

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
Source§

impl F64x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256d; 2]

Platform-native SIMD representation.
Source§

fn splat(v: f64) -> [__m256d; 2]

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> [__m256d; 2]

All lanes zero.
Source§

fn load(data: &[f64; 8]) -> [__m256d; 2]

Load from an aligned array.
Source§

fn from_array(arr: [f64; 8]) -> [__m256d; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256d; 2], out: &mut [f64; 8])

Store to array.
Source§

fn to_array(repr: [__m256d; 2]) -> [f64; 8]

Convert to array.
Source§

fn add(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise addition.
Source§

fn sub(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise subtraction.
Source§

fn mul(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise multiplication.
Source§

fn div(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise division.
Source§

fn neg(a: [__m256d; 2]) -> [__m256d; 2]

Lane-wise negation.
Source§

fn min(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise maximum.
Source§

fn sqrt(a: [__m256d; 2]) -> [__m256d; 2]

Square root.
Source§

fn abs(a: [__m256d; 2]) -> [__m256d; 2]

Absolute value.
Source§

fn floor(a: [__m256d; 2]) -> [__m256d; 2]

Round toward negative infinity.
Source§

fn ceil(a: [__m256d; 2]) -> [__m256d; 2]

Round toward positive infinity.
Source§

fn round(a: [__m256d; 2]) -> [__m256d; 2]

Round to nearest integer.
Source§

fn mul_add(a: [__m256d; 2], b: [__m256d; 2], c: [__m256d; 2]) -> [__m256d; 2]

Fused multiply-add: a * b + c.
Source§

fn mul_sub(a: [__m256d; 2], b: [__m256d; 2], c: [__m256d; 2]) -> [__m256d; 2]

Fused multiply-subtract: a * b - c.
Source§

fn reduce_add(a: [__m256d; 2]) -> f64

Sum all 8 lanes.
Source§

fn reduce_min(a: [__m256d; 2]) -> f64

Minimum across all 8 lanes.
Source§

fn reduce_max(a: [__m256d; 2]) -> f64

Maximum across all 8 lanes.
Source§

fn rcp_approx(a: [__m256d; 2]) -> [__m256d; 2]

Fast reciprocal approximation (~12-bit precision where available).
Source§

fn rsqrt_approx(a: [__m256d; 2]) -> [__m256d; 2]

Fast reciprocal square root approximation (~12-bit precision where available).
Source§

fn simd_eq(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256d; 2], if_true: [__m256d; 2], if_false: [__m256d; 2]) -> [__m256d; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256d; 2]) -> [__m256d; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256d; 2], b: [__m256d; 2]) -> [__m256d; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

fn recip(a: Self::Repr) -> Self::Repr

Precise reciprocal (Newton-Raphson from rcp_approx).
Source§

fn rsqrt(a: Self::Repr) -> Self::Repr

Precise reciprocal square root (Newton-Raphson from rsqrt_approx).
Source§

impl I16x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: i16) -> __m256i

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[i16; 16]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [i16; 16]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [i16; 16])

Store to array.
Source§

fn to_array(repr: __m256i) -> [i16; 16]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m256i, b: __m256i) -> __m256i

Lane-wise multiplication (low 16 bits of product).
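The wrapping and low-bits semantics above match `i16::wrapping_*` in std; a scalar illustration:

```rust
fn main() {
    // Lane-wise add/sub wrap modulo 2¹⁶ instead of panicking:
    assert_eq!(i16::MAX.wrapping_add(1), i16::MIN);
    assert_eq!(30_000i16.wrapping_add(10_000), -25_536);
    // Lane-wise mul keeps only the low 16 bits of the 32-bit product:
    let full = 1_000i32 * 1_000; // 1_000_000, too big for i16
    assert_eq!(1_000i16.wrapping_mul(1_000), full as i16);
    assert_eq!(full as i16, 16_960); // 1_000_000 mod 2¹⁶
}
```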
Source§

fn neg(a: __m256i) -> __m256i

Lane-wise negation.
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn abs(a: __m256i) -> __m256i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> i16

Sum all 16 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn shr_arithmetic_const<const N: i32>(a: __m256i) -> __m256i

Arithmetic shift right by constant (sign-extending).
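The two right-shift flavors differ only for negative lanes: the arithmetic shift copies the sign bit in, the logical shift fills with zeros. In scalar terms:

```rust
fn main() {
    let a: i16 = -32; // bit pattern 0xFFE0
    let arithmetic = a >> 2;                 // sign-extending → 0xFFF8
    let logical = ((a as u16) >> 2) as i16;  // zero-filling   → 0x3FF8
    assert_eq!(arithmetic, -8);
    assert_eq!(logical, 0x3FF8);
    // For non-negative values the two shifts agree:
    assert_eq!(40i16 >> 2, (40u16 >> 2) as i16);
}
```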
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each lane as a bitmask.
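bitmask packs one bit per lane — the lane's sign bit — into an integer, which is also what all_true and any_true reduce over. An 8-lane scalar model for brevity (`bitmask_i16x8` is an illustrative helper, not part of this crate):

```rust
// Scalar model of bitmask over 8 i16 lanes: bit i = sign bit of lane i.
fn bitmask_i16x8(lanes: [i16; 8]) -> u32 {
    lanes
        .iter()
        .enumerate()
        .fold(0u32, |acc, (i, &v)| acc | ((((v as u16) >> 15) as u32) << i))
}

fn main() {
    let m = bitmask_i16x8([-1, 0, -5, 7, 0, -1, 1, -2]);
    assert_eq!(m, 0b1010_0101); // lanes 0, 2, 5, 7 are negative
    // all_true / any_true fall out of the mask directly:
    assert!(m != 0);            // any_true: at least one sign bit set
    assert!(m != 0b1111_1111);  // not all_true: some sign bit clear
}
```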
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I16x16Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i16_to_u16(a: __m256i) -> __m256i

Bitcast i16x16 to u16x16 (reinterpret bits).
Source§

fn bitcast_u16_to_i16(a: __m256i) -> __m256i

Bitcast u16x16 to i16x16 (reinterpret bits).
Source§

impl I16x32Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: i16) -> [__m256i; 2]

Broadcast scalar to all 32 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[i16; 32]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [i16; 32]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [i16; 32])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [i16; 32]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition.
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction.
Source§

fn mul(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise multiplication (low 16 bits of product).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation.
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn abs(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise absolute value.
Source§

fn reduce_add(a: [__m256i; 2]) -> i16

Sum all 32 lanes.
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2]) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I16x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: i16) -> __m128i

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[i16; 8]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [i16; 8]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [i16; 8])

Store to array.
Source§

fn to_array(repr: __m128i) -> [i16; 8]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m128i, b: __m128i) -> __m128i

Lane-wise multiplication (low 16 bits of product).
Source§

fn neg(a: __m128i) -> __m128i

Lane-wise negation.
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn abs(a: __m128i) -> __m128i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> i16

Sum all 8 lanes (wrapping).
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn shr_arithmetic_const<const N: i32>(a: __m128i) -> __m128i

Arithmetic shift right by constant (sign-extending).
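The difference between the two right shifts can be shown with a scalar Rust sketch (semantics only, not the crate's implementation — Rust's `>>` on a signed type is arithmetic, and on an unsigned type logical):

```rust
fn main() {
    let x: i16 = -32; // bit pattern 0xFFE0

    // Arithmetic shift: sign bit is replicated into the vacated bits,
    // matching `shr_arithmetic_const`.
    assert_eq!(x >> 2, -8);

    // Logical shift: vacated bits are zero-filled,
    // matching `shr_logical_const`.
    assert_eq!(((x as u16) >> 2) as i16, 16376); // 0x3FF8
}
```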
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
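Per lane, clamping is equivalent to composing `max` with `min`. A scalar sketch of the semantics (assuming `lo <= hi`):

```rust
// Scalar analogue of lane-wise clamp: raise to lo, then cap at hi.
fn clamp_lane(a: i16, lo: i16, hi: i16) -> i16 {
    a.max(lo).min(hi)
}

fn main() {
    assert_eq!(clamp_lane(100, -5, 50), 50); // above range -> hi
    assert_eq!(clamp_lane(-10, -5, 50), -5); // below range -> lo
    assert_eq!(clamp_lane(7, -5, 50), 7);    // in range -> unchanged
}
```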
Source§

impl I16x8Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i16_to_u16(a: __m128i) -> __m128i

Bitcast i16x8 to u16x8 (reinterpret bits).
Source§

fn bitcast_u16_to_i16(a: __m128i) -> __m128i

Bitcast u16x8 to i16x8 (reinterpret bits).
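A bitcast reinterprets the underlying bytes without any numeric conversion, so round trips are lossless. The scalar analogue is Rust's `as` between same-width integers:

```rust
fn main() {
    // Same bits, different interpretation: 0xFFFF is -1 as i16
    // and 65535 as u16.
    assert_eq!(-1i16 as u16, u16::MAX);
    assert_eq!(i16::MIN as u16, 0x8000);

    // The round trip recovers the original value exactly.
    assert_eq!((-12345i16 as u16) as i16, -12345);
}
```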
Source§

impl I32x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: i32) -> [__m256i; 2]

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[i32; 16]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [i32; 16]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [i32; 16])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [i32; 16]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition.
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction.
Source§

fn mul(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise multiplication (low 32 bits of each 32x32 product).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation.
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn abs(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise absolute value.
Source§

fn reduce_add(a: [__m256i; 2]) -> i32

Sum all 16 lanes.
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2]) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I32x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: i32) -> __m128i

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[i32; 4]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [i32; 4]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [i32; 4])

Store to array.
Source§

fn to_array(repr: __m128i) -> [i32; 4]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition.
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction.
Source§

fn mul(a: __m128i, b: __m128i) -> __m128i

Lane-wise multiplication (low 32 bits of each 32x32 product).
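Keeping only the low 32 bits of the full product makes this a wrapping (modulo 2^32) multiply per lane. A scalar sketch:

```rust
fn main() {
    let a: i32 = 100_000;
    let b: i32 = 100_000;
    // The full product 10^10 does not fit in i32; only the low
    // 32 bits survive: 10^10 mod 2^32 = 1_410_065_408.
    assert_eq!(a.wrapping_mul(b), 1_410_065_408);
}
```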
Source§

fn neg(a: __m128i) -> __m128i

Lane-wise negation.
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn abs(a: __m128i) -> __m128i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
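Per lane, blending reduces to bitwise selection, given a mask whose lanes are all-0s or all-1s (as produced by the `simd_*` comparisons). A scalar sketch of one lane:

```rust
// Scalar analogue of `blend`: the mask lane must be all-0s or all-1s.
fn blend_lane(mask: i32, if_true: i32, if_false: i32) -> i32 {
    (mask & if_true) | (!mask & if_false)
}

fn main() {
    assert_eq!(blend_lane(-1, 10, 20), 10); // all-1s mask -> if_true
    assert_eq!(blend_lane(0, 10, 20), 20);  // all-0s mask -> if_false
}
```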
Source§

fn reduce_add(a: __m128i) -> i32

Sum all 4 lanes.
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: __m128i) -> __m128i

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each 32-bit lane as a bitmask.
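The bitmask packs one bit per lane, taken from the lane's sign bit, with lane 0 in the least significant position. A scalar sketch for 4 x i32 lanes (assumption: same bit ordering as the x86 movemask family):

```rust
fn bitmask_i32x4(lanes: [i32; 4]) -> u32 {
    lanes
        .iter()
        .enumerate()
        .map(|(i, &v)| ((v as u32) >> 31) << i)
        .sum()
}

fn main() {
    // Lanes 0 and 2 are negative, so bits 0 and 2 are set.
    assert_eq!(bitmask_i32x4([-1, 7, i32::MIN, 0]), 0b0101);
}
```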
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I32x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: i32) -> __m256i

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[i32; 8]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [i32; 8]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [i32; 8])

Store to array.
Source§

fn to_array(repr: __m256i) -> [i32; 8]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition.
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction.
Source§

fn mul(a: __m256i, b: __m256i) -> __m256i

Lane-wise multiplication (low 32 bits of each 32x32 product).
Source§

fn neg(a: __m256i) -> __m256i

Lane-wise negation.
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn abs(a: __m256i) -> __m256i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> i32

Sum all 8 lanes.
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: __m256i) -> __m256i

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each 32-bit lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I64x2Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: i64) -> __m128i

Broadcast scalar to all 2 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[i64; 2]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [i64; 2]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [i64; 2])

Store to array.
Source§

fn to_array(repr: __m128i) -> [i64; 2]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition.
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction.
Source§

fn neg(a: __m128i) -> __m128i

Lane-wise negation.
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn abs(a: __m128i) -> __m128i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> i64

Sum all 2 lanes.
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: __m128i) -> __m128i

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each 64-bit lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I64x2Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i64_to_f64(a: __m128i) -> __m128d

Bitcast i64x2 to f64x2 (reinterpret bits, no conversion).
Source§

fn bitcast_f64_to_i64(a: __m128d) -> __m128i

Bitcast f64x2 to i64x2 (reinterpret bits, no conversion).
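Because no conversion happens, a bitcast between i64 and f64 lanes is the vector form of `f64::from_bits`/`f64::to_bits`. A scalar sketch:

```rust
fn main() {
    // IEEE-754 bit pattern of 1.0f64.
    let bits: i64 = 0x3FF0_0000_0000_0000;
    assert_eq!(f64::from_bits(bits as u64), 1.0);
    assert_eq!(1.0f64.to_bits() as i64, bits);
}
```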
Source§

impl I64x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: i64) -> __m256i

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[i64; 4]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [i64; 4]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [i64; 4])

Store to array.
Source§

fn to_array(repr: __m256i) -> [i64; 4]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition.
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction.
Source§

fn neg(a: __m256i) -> __m256i

Lane-wise negation.
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn abs(a: __m256i) -> __m256i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> i64

Sum all 4 lanes.
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: __m256i) -> __m256i

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each 64-bit lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I64x4Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i64_to_f64(a: __m256i) -> __m256d

Bitcast i64x4 to f64x4 (reinterpret bits, no conversion).
Source§

fn bitcast_f64_to_i64(a: __m256d) -> __m256i

Bitcast f64x4 to i64x4 (reinterpret bits, no conversion).
Source§

impl I64x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: i64) -> [__m256i; 2]

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[i64; 8]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [i64; 8]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [i64; 8])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [i64; 8]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition.
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction.
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation.
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn abs(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise absolute value.
Source§

fn reduce_add(a: [__m256i; 2]) -> i64

Sum all 8 lanes.
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2]) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I8x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: i8) -> __m128i

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[i8; 16]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [i8; 16]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [i8; 16])

Store to array.
Source§

fn to_array(repr: __m128i) -> [i8; 16]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn neg(a: __m128i) -> __m128i

Lane-wise negation.
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn abs(a: __m128i) -> __m128i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> i8

Sum all 16 lanes (wrapping).
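With only 8 bits per lane, the horizontal sum wraps modulo 256. A scalar sketch of the wrapping reduction over 16 lanes:

```rust
fn main() {
    let lanes = [100i8; 16];
    let sum = lanes.iter().fold(0i8, |acc, &v| acc.wrapping_add(v));
    // 16 * 100 = 1600 = 6 * 256 + 64, so the wrapped i8 result is 64.
    assert_eq!(sum, 64);
}
```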
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn shr_arithmetic_const<const N: i32>(a: __m128i) -> __m128i

Arithmetic shift right by constant (sign-extending).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I8x16Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i8_to_u8(a: __m128i) -> __m128i

Bitcast i8x16 to u8x16 (reinterpret bits).
Source§

fn bitcast_u8_to_i8(a: __m128i) -> __m128i

Bitcast u8x16 to i8x16 (reinterpret bits).
Source§

impl I8x32Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: i8) -> __m256i

Broadcast scalar to all 32 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[i8; 32]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [i8; 32]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [i8; 32])

Store to array.
Source§

fn to_array(repr: __m256i) -> [i8; 32]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn neg(a: __m256i) -> __m256i

Lane-wise negation.
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn abs(a: __m256i) -> __m256i

Lane-wise absolute value.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> i8

Sum all 32 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn shr_arithmetic_const<const N: i32>(a: __m256i) -> __m256i

Arithmetic shift right by constant (sign-extending).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl I8x32Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_i8_to_u8(a: __m256i) -> __m256i

Bitcast i8x32 to u8x32 (reinterpret bits).
Source§

fn bitcast_u8_to_i8(a: __m256i) -> __m256i

Bitcast u8x32 to i8x32 (reinterpret bits).
Source§

impl I8x64Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: i8) -> [__m256i; 2]

Broadcast scalar to all 64 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[i8; 64]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [i8; 64]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [i8; 64])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [i8; 64]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition (wrapping).
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction (wrapping).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation.
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn abs(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise absolute value.
Source§

fn reduce_add(a: [__m256i; 2]) -> i8

Sum all 64 lanes (wrapping).
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2]) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl IntoConcreteToken for X64V3Token

Source§

fn as_x64v3(self) -> Option<X64V3Token>

Try to cast to X64V3Token.
Source§

fn as_x64v1(self) -> Option<X64V1Token>

Returns Some(self) if this is exactly X64V1Token, None otherwise. Read more
Source§

fn as_x64v2(self) -> Option<X64V2Token>

Try to cast to X64V2Token.
Source§

fn as_x64_crypto(self) -> Option<X64CryptoToken>

Try to cast to X64CryptoToken.
Source§

fn as_x64v3_crypto(self) -> Option<X64V3CryptoToken>

Try to cast to X64V3CryptoToken.
Source§

fn as_x64v4(self) -> Option<X64V4Token>

Try to cast to X64V4Token.
Source§

fn as_x64v4x(self) -> Option<X64V4xToken>

Try to cast to X64V4xToken.
Source§

fn as_avx512_fp16(self) -> Option<Avx512Fp16Token>

Try to cast to Avx512Fp16Token.
Source§

fn as_neon(self) -> Option<NeonToken>

Try to cast to NeonToken.
Source§

fn as_neon_aes(self) -> Option<NeonAesToken>

Try to cast to NeonAesToken.
Source§

fn as_neon_sha3(self) -> Option<NeonSha3Token>

Try to cast to NeonSha3Token.
Source§

fn as_neon_crc(self) -> Option<NeonCrcToken>

Try to cast to NeonCrcToken.
Source§

fn as_arm_v2(self) -> Option<Arm64V2Token>

Try to cast to Arm64V2Token.
Source§

fn as_arm_v3(self) -> Option<Arm64V3Token>

Try to cast to Arm64V3Token.
Source§

fn as_wasm128(self) -> Option<Wasm128Token>

Try to cast to Wasm128Token.
Source§

fn as_wasm128_relaxed(self) -> Option<Wasm128RelaxedToken>

Try to cast to Wasm128RelaxedToken.
Source§

fn as_scalar(self) -> Option<ScalarToken>

Try to cast to ScalarToken.
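The pattern behind IntoConcreteToken is that each `as_*` cast returns `Some` only for the matching concrete token, letting generic code branch to a specialized path. A minimal sketch using hypothetical stand-in types (`V3`, `Scalar`, `IntoConcrete` are illustrations, not the crate's actual tokens):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct V3;
#[derive(Clone, Copy, Debug, PartialEq)]
struct Scalar;

// Miniature of the IntoConcreteToken idea: default every cast to None,
// and override only the matching one on each concrete token.
trait IntoConcrete: Copy {
    fn as_v3(self) -> Option<V3> { None }
    fn as_scalar(self) -> Option<Scalar> { None }
}
impl IntoConcrete for V3 {
    fn as_v3(self) -> Option<V3> { Some(self) }
}
impl IntoConcrete for Scalar {
    fn as_scalar(self) -> Option<Scalar> { Some(self) }
}

fn kernel<T: IntoConcrete>(token: T) -> &'static str {
    if token.as_v3().is_some() { "avx2 path" } else { "scalar fallback" }
}

fn main() {
    assert_eq!(kernel(V3), "avx2 path");
    assert_eq!(kernel(Scalar), "scalar fallback");
}
```

The casts are resolved per concrete type, so in monomorphized code the `Option` checks typically fold away.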
Source§

impl SimdToken for X64V3Token

Source§

const NAME: &'static str = "x86-64-v3"

Human-readable name for diagnostics and error messages.
Source§

const TARGET_FEATURES: &'static str = "sse,sse2,sse3,ssse3,sse4.1,sse4.2,popcnt,cmpxchg16b,avx,avx2,fma,bmi1,bmi2,f16c,lzcnt,movbe"

Comma-delimited target features (e.g., "sse,sse2,avx2,fma,bmi1,bmi2,f16c,lzcnt"). Read more
Source§

const ENABLE_TARGET_FEATURES: &'static str = "-Ctarget-feature=+sse,+sse2,+sse3,+ssse3,+sse4.1,+sse4.2,+popcnt,+cmpxchg16b,+avx,+avx2,+fma,+bmi1,+bmi2,+f16c,+lzcnt,+movbe"

RUSTFLAGS to enable these features at compile time. Read more
Source§

const DISABLE_TARGET_FEATURES: &'static str = "-Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-popcnt,-cmpxchg16b,-avx,-avx2,-fma,-bmi1,-bmi2,-f16c,-lzcnt,-movbe"

RUSTFLAGS to disable these features at compile time. Read more
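As a build-configuration sketch, the `ENABLE_TARGET_FEATURES` value can be passed through `RUSTFLAGS` so the compiler assumes the v3 feature set and can elide runtime checks (the explicit feature list below is taken from the constant above; `-Ctarget-cpu` is the standard rustc alternative):

```sh
# Enable the exact v3 feature list at compile time:
RUSTFLAGS="-Ctarget-feature=+sse,+sse2,+sse3,+ssse3,+sse4.1,+sse4.2,+popcnt,+cmpxchg16b,+avx,+avx2,+fma,+bmi1,+bmi2,+f16c,+lzcnt,+movbe" cargo build --release

# Or target the named microarchitecture level directly:
RUSTFLAGS="-Ctarget-cpu=x86-64-v3" cargo build --release
```

Note that binaries built this way will fault on CPUs below the v3 baseline, since the feature checks are compiled out.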
Source§

fn compiled_with() -> Option<bool>

Check if this binary was compiled with the required target features enabled. Read more
Source§

fn summon() -> Option<X64V3Token>

Attempt to create a token with runtime feature detection. Read more
Source§

unsafe fn forge_token_dangerously() -> X64V3Token

👎Deprecated since 0.5.0:

Pass tokens through from summon() instead of forging

Create a token without any checks. Read more
Source§

fn name(&self) -> &'static str

Returns the human-readable name for this token. Read more
Source§

fn guaranteed() -> Option<bool>

👎Deprecated since 0.6.0:

Use compiled_with() instead

Deprecated alias for compiled_with().
Source§

fn attempt() -> Option<Self>

Attempt to create a token with runtime feature detection. Read more
Source§

impl SimdTypes for X64V3Token

Available on x86-64 only.
Source§

const F32_LANES: usize = 8

Number of f32 lanes in the F32 type
Source§

const F64_LANES: usize = 4

Number of f64 lanes in the F64 type
Source§

const I32_LANES: usize = 8

Number of i32 lanes in the I32 type
Source§

type F32 = f32x8<X64V3Token>

32-bit floating point vector (e.g., f32x8 for AVX2)
Source§

type F64 = f64x4<X64V3Token>

64-bit floating point vector (e.g., f64x4 for AVX2)
Source§

type I8 = i8x32<X64V3Token>

8-bit signed integer vector
Source§

type I16 = i16x16<X64V3Token>

16-bit signed integer vector
Source§

type I32 = i32x8<X64V3Token>

32-bit signed integer vector
Source§

type I64 = i64x4<X64V3Token>

64-bit signed integer vector
Source§

type U8 = u8x32<X64V3Token>

8-bit unsigned integer vector
Source§

type U16 = u16x16<X64V3Token>

16-bit unsigned integer vector
Source§

type U32 = u32x8<X64V3Token>

32-bit unsigned integer vector
Source§

type U64 = u64x4<X64V3Token>

64-bit unsigned integer vector
Source§

impl U16x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: u16) -> __m256i

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[u16; 16]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [u16; 16]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [u16; 16])

Store to array.
Source§

fn to_array(repr: __m256i) -> [u16; 16]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m256i, b: __m256i) -> __m256i

Lane-wise multiplication (low 16 bits of product).
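The wrapping and low-bits semantics above match scalar `u16` wrapping arithmetic; a sketch of a single lane:

```rust
fn main() {
    let (a, b): (u16, u16) = (60_000, 3);
    // add and sub wrap modulo 2^16, like the lane-wise ops above.
    assert_eq!(a.wrapping_add(a), 54_464); // 120_000 mod 65_536
    // mul keeps only the low 16 bits of the full 32-bit product.
    let full = (a as u32) * (b as u32);    // 180_000
    assert_eq!(a.wrapping_mul(b), (full & 0xFFFF) as u16);
    assert_eq!(a.wrapping_mul(b), 48_928); // 180_000 mod 65_536
}
```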
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
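A scalar model of the blend semantics, under the assumption that each mask lane is either all-1s or all-0s (the shape comparison ops produce). The bitwise select below is illustrative, not this crate's implementation:

```rust
// Scalar model of blend: a mask lane of all-1s (0xFFFF) selects
// from if_true, all-0s selects from if_false.
fn blend_scalar(mask: [u16; 4], if_true: [u16; 4], if_false: [u16; 4]) -> [u16; 4] {
    let mut out = [0u16; 4];
    for i in 0..4 {
        out[i] = (mask[i] & if_true[i]) | (!mask[i] & if_false[i]);
    }
    out
}

fn main() {
    let mask = [0xFFFF, 0, 0xFFFF, 0];
    assert_eq!(blend_scalar(mask, [1, 2, 3, 4], [9, 8, 7, 6]), [1, 8, 3, 6]);
}
```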
Source§

fn reduce_add(a: __m256i) -> u16

Sum all 16 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each lane as a bitmask.
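A scalar sketch of the bitmask extraction: bit i of the result is the high (sign) bit of lane i. This mirrors the documented semantics, not the underlying intrinsic:

```rust
// Bit i of the result is the high bit of lane i.
fn bitmask_scalar(lanes: &[u16]) -> u32 {
    lanes
        .iter()
        .enumerate()
        .map(|(i, &v)| (((v >> 15) & 1) as u32) << i)
        .sum()
}

fn main() {
    // Lanes 0 and 2 have their high bit set.
    assert_eq!(bitmask_scalar(&[0x8000, 0x7FFF, 0xFFFF, 0x0001]), 0b0101);
}
```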
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
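Clamping between lo and hi is equivalent to composing the min and max ops above; a single-lane scalar sketch:

```rust
// clamp(a, lo, hi) per lane is min(max(a, lo), hi).
fn clamp_lane(a: u16, lo: u16, hi: u16) -> u16 {
    a.max(lo).min(hi)
}

fn main() {
    assert_eq!(clamp_lane(5, 10, 20), 10);  // below range -> lo
    assert_eq!(clamp_lane(25, 10, 20), 20); // above range -> hi
    assert_eq!(clamp_lane(15, 10, 20), 15); // in range -> unchanged
}
```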
Source§

impl U16x32Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
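The `[__m256i; 2]` representation carries 512 bits as two 256-bit halves: lanes 0..16 in the first register, lanes 16..32 in the second. A scalar sketch of that split (an illustration of the layout, not this crate's code):

```rust
fn main() {
    // 32 u16 lanes = 512 bits, carried as two 16-lane halves.
    let data: [u16; 32] = core::array::from_fn(|i| i as u16);
    let lo: [u16; 16] = data[..16].try_into().unwrap();
    let hi: [u16; 16] = data[16..].try_into().unwrap();
    assert_eq!(lo[0], 0);
    assert_eq!(hi[0], 16);
}
```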
Source§

fn splat(v: u16) -> [__m256i; 2]

Broadcast scalar to all 32 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[u16; 32]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [u16; 32]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [u16; 32])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [u16; 32]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition (wrapping).
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction (wrapping).
Source§

fn mul(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise multiplication (low 16 bits of product).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation (wrapping).
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn reduce_add(a: [__m256i; 2]) -> u16

Sum all 32 lanes (wrapping).
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
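The distinction between the two right shifts above matches scalar Rust, where `>>` on an unsigned type is logical and on a signed type is arithmetic:

```rust
fn main() {
    let bits: u16 = 0xFF00;
    // Logical shift zero-fills from the left.
    let logical = bits >> 4; // u16 >> is a logical shift
    // Arithmetic shift copies the sign bit from the left.
    let arithmetic = ((bits as i16) >> 4) as u16; // i16 >> sign-extends
    assert_eq!(logical, 0x0FF0);
    assert_eq!(arithmetic, 0xFFF0);
}
```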
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend( mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2], ) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U16x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: u16) -> __m128i

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[u16; 8]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [u16; 8]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [u16; 8])

Store to array.
Source§

fn to_array(repr: __m128i) -> [u16; 8]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m128i, b: __m128i) -> __m128i

Lane-wise multiplication (low 16 bits of product).
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> u16

Sum all 8 lanes (wrapping).
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U32x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: u32) -> [__m256i; 2]

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[u32; 16]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [u32; 16]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [u32; 16])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [u32; 16]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition (wrapping).
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction (wrapping).
Source§

fn mul(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise multiplication (low 32 bits of each 32x32 product).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation (wrapping).
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn reduce_add(a: [__m256i; 2]) -> u32

Sum all 16 lanes (wrapping).
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend( mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2], ) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U32x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: u32) -> __m128i

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[u32; 4]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [u32; 4]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [u32; 4])

Store to array.
Source§

fn to_array(repr: __m128i) -> [u32; 4]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m128i, b: __m128i) -> __m128i

Lane-wise multiplication (low 32 bits of each 32x32 product).
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned maximum.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned greater-than.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned less-than-or-equal.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise unsigned greater-than-or-equal.
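The unsigned qualifier on these comparisons matters once a lane's high bit is set, because the same bits order differently under a signed interpretation:

```rust
fn main() {
    let (a, b) = (0x8000_0000u32, 1u32);
    // Unsigned: 0x8000_0000 is a large positive value.
    assert!(a > b);
    // The same bits reinterpreted as signed compare the other way.
    assert!((a as i32) < (b as i32));
}
```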
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> u32

Sum all 4 lanes (wrapping).
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each 32-bit lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi (unsigned comparison).
Source§

impl U32x4Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_u32_to_i32(a: __m128i) -> __m128i

Bitcast u32x4 to i32x4 (reinterpret bits, no conversion).
Source§

fn bitcast_i32_to_u32(a: __m128i) -> __m128i

Bitcast i32x4 to u32x4 (reinterpret bits, no conversion).
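The bitcast reinterprets bytes without any value conversion; for a single lane, an `as` cast between `u32` and `i32` does exactly this:

```rust
fn main() {
    // Reinterpret the bits; no numeric conversion takes place.
    let u: u32 = 0xFFFF_FFFF;
    let i: i32 = u as i32;
    assert_eq!(i, -1);
    // The cast is lossless in both directions.
    assert_eq!(i as u32, u);
}
```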
Source§

impl U32x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: u32) -> __m256i

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[u32; 8]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [u32; 8]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [u32; 8])

Store to array.
Source§

fn to_array(repr: __m256i) -> [u32; 8]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn mul(a: __m256i, b: __m256i) -> __m256i

Lane-wise multiplication (low 32 bits of each 32x32 product).
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned maximum.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned greater-than.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned less-than-or-equal.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise unsigned greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> u32

Sum all 8 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each 32-bit lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi (unsigned comparison).
Source§

impl U32x8Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_u32_to_i32(a: __m256i) -> __m256i

Bitcast u32x8 to i32x8 (reinterpret bits, no conversion).
Source§

fn bitcast_i32_to_u32(a: __m256i) -> __m256i

Bitcast i32x8 to u32x8 (reinterpret bits, no conversion).
Source§

impl U64x2Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: u64) -> __m128i

Broadcast scalar to all 2 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[u64; 2]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [u64; 2]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [u64; 2])

Store to array.
Source§

fn to_array(repr: __m128i) -> [u64; 2]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> u64

Sum all 2 lanes (wrapping).
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U64x2Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_u64_to_i64(a: __m128i) -> __m128i

Bitcast u64x2 to i64x2 (reinterpret bits).
Source§

fn bitcast_i64_to_u64(a: __m128i) -> __m128i

Bitcast i64x2 to u64x2 (reinterpret bits).
Source§

impl U64x4Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: u64) -> __m256i

Broadcast scalar to all 4 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[u64; 4]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [u64; 4]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [u64; 4])

Store to array.
Source§

fn to_array(repr: __m256i) -> [u64; 4]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> u64

Sum all 4 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U64x4Bitcast for X64V3Token

Available on x86-64 only.
Source§

fn bitcast_u64_to_i64(a: __m256i) -> __m256i

Bitcast u64x4 to i64x4 (reinterpret bits).
Source§

fn bitcast_i64_to_u64(a: __m256i) -> __m256i

Bitcast i64x4 to u64x4 (reinterpret bits).
Source§

impl U64x8Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: u64) -> [__m256i; 2]

Broadcast scalar to all 8 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[u64; 8]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [u64; 8]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [u64; 8])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [u64; 8]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition (wrapping).
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction (wrapping).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation (wrapping).
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn reduce_add(a: [__m256i; 2]) -> u64

Sum all 8 lanes (wrapping).
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend( mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2], ) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U8x16Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m128i

Platform-native SIMD representation.
Source§

fn splat(v: u8) -> __m128i

Broadcast scalar to all 16 lanes.
Source§

fn zero() -> __m128i

All lanes zero.
Source§

fn load(data: &[u8; 16]) -> __m128i

Load from an aligned array.
Source§

fn from_array(arr: [u8; 16]) -> __m128i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m128i, out: &mut [u8; 16])

Store to array.
Source§

fn to_array(repr: __m128i) -> [u8; 16]

Convert to array.
Source§

fn add(a: __m128i, b: __m128i) -> __m128i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m128i, b: __m128i) -> __m128i

Lane-wise subtraction (wrapping).
Source§

fn min(a: __m128i, b: __m128i) -> __m128i

Lane-wise minimum.
Source§

fn max(a: __m128i, b: __m128i) -> __m128i

Lane-wise maximum.
Source§

fn simd_eq(a: __m128i, b: __m128i) -> __m128i

Lane-wise equality.
Source§

fn simd_ne(a: __m128i, b: __m128i) -> __m128i

Lane-wise inequality.
Source§

fn simd_gt(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than.
Source§

fn simd_le(a: __m128i, b: __m128i) -> __m128i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m128i, b: __m128i) -> __m128i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m128i, if_true: __m128i, if_false: __m128i) -> __m128i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m128i) -> u8

Sum all 16 lanes (wrapping).
Source§

fn not(a: __m128i) -> __m128i

Bitwise NOT.
Source§

fn bitand(a: __m128i, b: __m128i) -> __m128i

Bitwise AND.
Source§

fn bitor(a: __m128i, b: __m128i) -> __m128i

Bitwise OR.
Source§

fn bitxor(a: __m128i, b: __m128i) -> __m128i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m128i) -> __m128i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m128i) -> __m128i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m128i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m128i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m128i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U8x32Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = __m256i

Platform-native SIMD representation.
Source§

fn splat(v: u8) -> __m256i

Broadcast scalar to all 32 lanes.
Source§

fn zero() -> __m256i

All lanes zero.
Source§

fn load(data: &[u8; 32]) -> __m256i

Load from an aligned array.
Source§

fn from_array(arr: [u8; 32]) -> __m256i

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: __m256i, out: &mut [u8; 32])

Store to array.
Source§

fn to_array(repr: __m256i) -> [u8; 32]

Convert to array.
Source§

fn add(a: __m256i, b: __m256i) -> __m256i

Lane-wise addition (wrapping).
Source§

fn sub(a: __m256i, b: __m256i) -> __m256i

Lane-wise subtraction (wrapping).
Source§

fn min(a: __m256i, b: __m256i) -> __m256i

Lane-wise minimum.
Source§

fn max(a: __m256i, b: __m256i) -> __m256i

Lane-wise maximum.
Source§

fn simd_eq(a: __m256i, b: __m256i) -> __m256i

Lane-wise equality.
Source§

fn simd_ne(a: __m256i, b: __m256i) -> __m256i

Lane-wise inequality.
Source§

fn simd_gt(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than.
Source§

fn simd_lt(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than.
Source§

fn simd_le(a: __m256i, b: __m256i) -> __m256i

Lane-wise less-than-or-equal.
Source§

fn simd_ge(a: __m256i, b: __m256i) -> __m256i

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: __m256i, if_true: __m256i, if_false: __m256i) -> __m256i

Select lanes: where mask is all-1s pick if_true, else if_false.
Source§

fn reduce_add(a: __m256i) -> u8

Sum all 32 lanes (wrapping).
Source§

fn not(a: __m256i) -> __m256i

Bitwise NOT.
Source§

fn bitand(a: __m256i, b: __m256i) -> __m256i

Bitwise AND.
Source§

fn bitor(a: __m256i, b: __m256i) -> __m256i

Bitwise OR.
Source§

fn bitxor(a: __m256i, b: __m256i) -> __m256i

Bitwise XOR.
Source§

fn shl_const<const N: i32>(a: __m256i) -> __m256i

Shift left by constant.
Source§

fn shr_logical_const<const N: i32>(a: __m256i) -> __m256i

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: __m256i) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: __m256i) -> bool

True if any lane has its sign bit set.
Source§

fn bitmask(a: __m256i) -> u32

Extract the high bit of each lane as a bitmask.
Source§

fn clamp(a: Self::Repr, lo: Self::Repr, hi: Self::Repr) -> Self::Repr

Clamp values between lo and hi.
Source§

impl U8x64Backend for X64V3Token

Available on x86-64 only.
Source§

type Repr = [__m256i; 2]

Platform-native SIMD representation.
Source§

fn splat(v: u8) -> [__m256i; 2]

Broadcast scalar to all 64 lanes.
Source§

fn zero() -> [__m256i; 2]

All lanes zero.
Source§

fn load(data: &[u8; 64]) -> [__m256i; 2]

Load from an aligned array.
Source§

fn from_array(arr: [u8; 64]) -> [__m256i; 2]

Create from array (zero-cost transmute where possible).
Source§

fn store(repr: [__m256i; 2], out: &mut [u8; 64])

Store to array.
Source§

fn to_array(repr: [__m256i; 2]) -> [u8; 64]

Convert to array.
Source§

fn add(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise addition (wrapping).
Source§

fn sub(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise subtraction (wrapping).
Source§

fn neg(a: [__m256i; 2]) -> [__m256i; 2]

Lane-wise negation (wrapping).
Source§

fn min(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise minimum.
Source§

fn max(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise maximum.
Source§

fn reduce_add(a: [__m256i; 2]) -> u8

Sum all 64 lanes (wrapping).
Source§

fn shl_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Shift left by constant.
Source§

fn shr_arithmetic_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Arithmetic shift right by constant (sign-extending).
Source§

fn shr_logical_const<const N: i32>(a: [__m256i; 2]) -> [__m256i; 2]

Logical shift right by constant (zero-filling).
Source§

fn all_true(a: [__m256i; 2]) -> bool

True if all lanes have their sign bit set (all-1s mask).
Source§

fn any_true(a: [__m256i; 2]) -> bool

True if any lane has its sign bit set (any all-1s mask lane).
Source§

fn bitmask(a: [__m256i; 2]) -> u64

Extract the high bit of each lane as a bitmask.
Source§

fn simd_eq(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise equality.
Source§

fn simd_ne(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise inequality.
Source§

fn simd_lt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than.
Source§

fn simd_le(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise less-than-or-equal.
Source§

fn simd_gt(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than.
Source§

fn simd_ge(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Lane-wise greater-than-or-equal.
Source§

fn blend(mask: [__m256i; 2], if_true: [__m256i; 2], if_false: [__m256i; 2]) -> [__m256i; 2]

Select lanes: where mask is all-1s pick if_true, else if_false.
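
With all-1s/all-0s mask lanes, the selection reduces to pure bitwise logic, `(mask & t) | (!mask & f)` — a sketch of that model:

```rust
// Scalar model of blend: an all-1s mask lane selects if_true,
// an all-0s lane selects if_false.
fn blend64(mask: [u8; 64], t: [u8; 64], f: [u8; 64]) -> [u8; 64] {
    core::array::from_fn(|i| (mask[i] & t[i]) | (!mask[i] & f[i]))
}

fn main() {
    let mut m = [0u8; 64];
    m[3] = 0xFF;
    let out = blend64(m, [7u8; 64], [9u8; 64]);
    assert_eq!(out[3], 7); // mask lane set   -> if_true
    assert_eq!(out[0], 9); // mask lane clear -> if_false
}
```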
Source§

fn not(a: [__m256i; 2]) -> [__m256i; 2]

Bitwise NOT.
Source§

fn bitand(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise AND.
Source§

fn bitor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise OR.
Source§

fn bitxor(a: [__m256i; 2], b: [__m256i; 2]) -> [__m256i; 2]

Bitwise XOR.
Source§

fn clamp(a: [__m256i; 2], lo: [__m256i; 2], hi: [__m256i; 2]) -> [__m256i; 2]

Clamp values between lo and hi.
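
Clamp is the usual composition of the min/max ops above, `min(max(a, lo), hi)`, assuming `lo <= hi` lane-wise. A scalar sketch:

```rust
// Scalar model of clamp via max-then-min (assumes lo <= hi per lane).
fn clamp64(a: [u8; 64], lo: [u8; 64], hi: [u8; 64]) -> [u8; 64] {
    core::array::from_fn(|i| a[i].max(lo[i]).min(hi[i]))
}

fn main() {
    let lo = [10u8; 64];
    let hi = [20u8; 64];
    assert_eq!(clamp64([0u8; 64], lo, hi)[0], 10);   // below range
    assert_eq!(clamp64([200u8; 64], lo, hi)[0], 20); // above range
    assert_eq!(clamp64([15u8; 64], lo, hi)[0], 15);  // in range
}
```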
Source§

impl WidthDispatch for X64V3Token

Source§

type F32x4 = f32x4<X64V3Token>

Source§

type F64x2 = f64x2<X64V3Token>

Source§

type I8x16 = i8x16<X64V3Token>

Source§

type U8x16 = u8x16<X64V3Token>

Source§

type I16x8 = i16x8<X64V3Token>

Source§

type U16x8 = u16x8<X64V3Token>

Source§

type I32x4 = i32x4<X64V3Token>

Source§

type U32x4 = u32x4<X64V3Token>

Source§

type I64x2 = i64x2<X64V3Token>

Source§

type U64x2 = u64x2<X64V3Token>

Source§

type F32x8 = f32x8<X64V3Token>

Source§

type F64x4 = f64x4<X64V3Token>

Source§

type I8x32 = i8x32<X64V3Token>

Source§

type U8x32 = u8x32<X64V3Token>

Source§

type I16x16 = i16x16<X64V3Token>

Source§

type U16x16 = u16x16<X64V3Token>

Source§

type I32x8 = i32x8<X64V3Token>

Source§

type U32x8 = u32x8<X64V3Token>

Source§

type I64x4 = i64x4<X64V3Token>

Source§

type U64x4 = u64x4<X64V3Token>

Source§

type F32x16 = f32x16

Source§

type F64x8 = f64x8

Source§

type I8x64 = i8x64

Source§

type U8x64 = u8x64

Source§

type I16x32 = i16x32

Source§

type U16x32 = u16x32

Source§

type I32x16 = i32x16

Source§

type U32x16 = u32x16

Source§

type I64x8 = i64x8

Source§

type U64x8 = u64x8
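
The associated-type table above routes each element-type/width pair to a concrete vector type, so generic code can name `<T as WidthDispatch>::F32x8` and get the right representation for the token in hand. A minimal, self-contained analogue of that dispatch pattern (hypothetical names, not the crate's definitions):

```rust
// Illustrative analogue of token-based width dispatch. DemoToken and
// WidthDispatchDemo are hypothetical stand-ins for the real trait.
trait WidthDispatchDemo {
    type F32x8;
    fn f32x8_splat(self, v: f32) -> Self::F32x8;
}

#[derive(Clone, Copy)]
struct DemoToken;

impl WidthDispatchDemo for DemoToken {
    // Stand-in for the real SIMD wrapper type.
    type F32x8 = [f32; 8];
    fn f32x8_splat(self, v: f32) -> [f32; 8] {
        [v; 8]
    }
}

fn main() {
    let t = DemoToken;
    assert_eq!(t.f32x8_splat(1.5), [1.5f32; 8]);
}
```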

Source§

fn f32x4_splat(self, v: f32) -> <X64V3Token as WidthDispatch>::F32x4

Source§

fn f32x4_zero(self) -> <X64V3Token as WidthDispatch>::F32x4

Source§

fn f32x4_load(self, data: &[f32; 4]) -> <X64V3Token as WidthDispatch>::F32x4

Source§

fn f64x2_splat(self, v: f64) -> <X64V3Token as WidthDispatch>::F64x2

Source§

fn f64x2_zero(self) -> <X64V3Token as WidthDispatch>::F64x2

Source§

fn f64x2_load(self, data: &[f64; 2]) -> <X64V3Token as WidthDispatch>::F64x2

Source§

fn i8x16_splat(self, v: i8) -> <X64V3Token as WidthDispatch>::I8x16

Source§

fn i8x16_zero(self) -> <X64V3Token as WidthDispatch>::I8x16

Source§

fn i8x16_load(self, data: &[i8; 16]) -> <X64V3Token as WidthDispatch>::I8x16

Source§

fn u8x16_splat(self, v: u8) -> <X64V3Token as WidthDispatch>::U8x16

Source§

fn u8x16_zero(self) -> <X64V3Token as WidthDispatch>::U8x16

Source§

fn u8x16_load(self, data: &[u8; 16]) -> <X64V3Token as WidthDispatch>::U8x16

Source§

fn i16x8_splat(self, v: i16) -> <X64V3Token as WidthDispatch>::I16x8

Source§

fn i16x8_zero(self) -> <X64V3Token as WidthDispatch>::I16x8

Source§

fn i16x8_load(self, data: &[i16; 8]) -> <X64V3Token as WidthDispatch>::I16x8

Source§

fn u16x8_splat(self, v: u16) -> <X64V3Token as WidthDispatch>::U16x8

Source§

fn u16x8_zero(self) -> <X64V3Token as WidthDispatch>::U16x8

Source§

fn u16x8_load(self, data: &[u16; 8]) -> <X64V3Token as WidthDispatch>::U16x8

Source§

fn i32x4_splat(self, v: i32) -> <X64V3Token as WidthDispatch>::I32x4

Source§

fn i32x4_zero(self) -> <X64V3Token as WidthDispatch>::I32x4

Source§

fn i32x4_load(self, data: &[i32; 4]) -> <X64V3Token as WidthDispatch>::I32x4

Source§

fn u32x4_splat(self, v: u32) -> <X64V3Token as WidthDispatch>::U32x4

Source§

fn u32x4_zero(self) -> <X64V3Token as WidthDispatch>::U32x4

Source§

fn u32x4_load(self, data: &[u32; 4]) -> <X64V3Token as WidthDispatch>::U32x4

Source§

fn i64x2_splat(self, v: i64) -> <X64V3Token as WidthDispatch>::I64x2

Source§

fn i64x2_zero(self) -> <X64V3Token as WidthDispatch>::I64x2

Source§

fn i64x2_load(self, data: &[i64; 2]) -> <X64V3Token as WidthDispatch>::I64x2

Source§

fn u64x2_splat(self, v: u64) -> <X64V3Token as WidthDispatch>::U64x2

Source§

fn u64x2_zero(self) -> <X64V3Token as WidthDispatch>::U64x2

Source§

fn u64x2_load(self, data: &[u64; 2]) -> <X64V3Token as WidthDispatch>::U64x2

Source§

fn f32x8_splat(self, v: f32) -> <X64V3Token as WidthDispatch>::F32x8

Source§

fn f32x8_zero(self) -> <X64V3Token as WidthDispatch>::F32x8

Source§

fn f32x8_load(self, data: &[f32; 8]) -> <X64V3Token as WidthDispatch>::F32x8

Source§

fn f64x4_splat(self, v: f64) -> <X64V3Token as WidthDispatch>::F64x4

Source§

fn f64x4_zero(self) -> <X64V3Token as WidthDispatch>::F64x4

Source§

fn f64x4_load(self, data: &[f64; 4]) -> <X64V3Token as WidthDispatch>::F64x4

Source§

fn i8x32_splat(self, v: i8) -> <X64V3Token as WidthDispatch>::I8x32

Source§

fn i8x32_zero(self) -> <X64V3Token as WidthDispatch>::I8x32

Source§

fn i8x32_load(self, data: &[i8; 32]) -> <X64V3Token as WidthDispatch>::I8x32

Source§

fn u8x32_splat(self, v: u8) -> <X64V3Token as WidthDispatch>::U8x32

Source§

fn u8x32_zero(self) -> <X64V3Token as WidthDispatch>::U8x32

Source§

fn u8x32_load(self, data: &[u8; 32]) -> <X64V3Token as WidthDispatch>::U8x32

Source§

fn i16x16_splat(self, v: i16) -> <X64V3Token as WidthDispatch>::I16x16

Source§

fn i16x16_zero(self) -> <X64V3Token as WidthDispatch>::I16x16

Source§

fn i16x16_load(self, data: &[i16; 16]) -> <X64V3Token as WidthDispatch>::I16x16

Source§

fn u16x16_splat(self, v: u16) -> <X64V3Token as WidthDispatch>::U16x16

Source§

fn u16x16_zero(self) -> <X64V3Token as WidthDispatch>::U16x16

Source§

fn u16x16_load(self, data: &[u16; 16]) -> <X64V3Token as WidthDispatch>::U16x16

Source§

fn i32x8_splat(self, v: i32) -> <X64V3Token as WidthDispatch>::I32x8

Source§

fn i32x8_zero(self) -> <X64V3Token as WidthDispatch>::I32x8

Source§

fn i32x8_load(self, data: &[i32; 8]) -> <X64V3Token as WidthDispatch>::I32x8

Source§

fn u32x8_splat(self, v: u32) -> <X64V3Token as WidthDispatch>::U32x8

Source§

fn u32x8_zero(self) -> <X64V3Token as WidthDispatch>::U32x8

Source§

fn u32x8_load(self, data: &[u32; 8]) -> <X64V3Token as WidthDispatch>::U32x8

Source§

fn i64x4_splat(self, v: i64) -> <X64V3Token as WidthDispatch>::I64x4

Source§

fn i64x4_zero(self) -> <X64V3Token as WidthDispatch>::I64x4

Source§

fn i64x4_load(self, data: &[i64; 4]) -> <X64V3Token as WidthDispatch>::I64x4

Source§

fn u64x4_splat(self, v: u64) -> <X64V3Token as WidthDispatch>::U64x4

Source§

fn u64x4_zero(self) -> <X64V3Token as WidthDispatch>::U64x4

Source§

fn u64x4_load(self, data: &[u64; 4]) -> <X64V3Token as WidthDispatch>::U64x4

Source§

fn f32x16_splat(self, v: f32) -> <X64V3Token as WidthDispatch>::F32x16

Source§

fn f32x16_zero(self) -> <X64V3Token as WidthDispatch>::F32x16

Source§

fn f32x16_load(self, data: &[f32; 16]) -> <X64V3Token as WidthDispatch>::F32x16

Source§

fn f64x8_splat(self, v: f64) -> <X64V3Token as WidthDispatch>::F64x8

Source§

fn f64x8_zero(self) -> <X64V3Token as WidthDispatch>::F64x8

Source§

fn f64x8_load(self, data: &[f64; 8]) -> <X64V3Token as WidthDispatch>::F64x8

Source§

fn i8x64_splat(self, v: i8) -> <X64V3Token as WidthDispatch>::I8x64

Source§

fn i8x64_zero(self) -> <X64V3Token as WidthDispatch>::I8x64

Source§

fn i8x64_load(self, data: &[i8; 64]) -> <X64V3Token as WidthDispatch>::I8x64

Source§

fn u8x64_splat(self, v: u8) -> <X64V3Token as WidthDispatch>::U8x64

Source§

fn u8x64_zero(self) -> <X64V3Token as WidthDispatch>::U8x64

Source§

fn u8x64_load(self, data: &[u8; 64]) -> <X64V3Token as WidthDispatch>::U8x64

Source§

fn i16x32_splat(self, v: i16) -> <X64V3Token as WidthDispatch>::I16x32

Source§

fn i16x32_zero(self) -> <X64V3Token as WidthDispatch>::I16x32

Source§

fn i16x32_load(self, data: &[i16; 32]) -> <X64V3Token as WidthDispatch>::I16x32

Source§

fn u16x32_splat(self, v: u16) -> <X64V3Token as WidthDispatch>::U16x32

Source§

fn u16x32_zero(self) -> <X64V3Token as WidthDispatch>::U16x32

Source§

fn u16x32_load(self, data: &[u16; 32]) -> <X64V3Token as WidthDispatch>::U16x32

Source§

fn i32x16_splat(self, v: i32) -> <X64V3Token as WidthDispatch>::I32x16

Source§

fn i32x16_zero(self) -> <X64V3Token as WidthDispatch>::I32x16

Source§

fn i32x16_load(self, data: &[i32; 16]) -> <X64V3Token as WidthDispatch>::I32x16

Source§

fn u32x16_splat(self, v: u32) -> <X64V3Token as WidthDispatch>::U32x16

Source§

fn u32x16_zero(self) -> <X64V3Token as WidthDispatch>::U32x16

Source§

fn u32x16_load(self, data: &[u32; 16]) -> <X64V3Token as WidthDispatch>::U32x16

Source§

fn i64x8_splat(self, v: i64) -> <X64V3Token as WidthDispatch>::I64x8

Source§

fn i64x8_zero(self) -> <X64V3Token as WidthDispatch>::I64x8

Source§

fn i64x8_load(self, data: &[i64; 8]) -> <X64V3Token as WidthDispatch>::I64x8

Source§

fn u64x8_splat(self, v: u64) -> <X64V3Token as WidthDispatch>::U64x8

Source§

fn u64x8_zero(self) -> <X64V3Token as WidthDispatch>::U64x8

Source§

fn u64x8_load(self, data: &[u64; 8]) -> <X64V3Token as WidthDispatch>::U64x8

Source§

impl Copy for X64V3Token

Source§

impl Has128BitSimd for X64V3Token

Source§

impl Has256BitSimd for X64V3Token

Source§

impl HasX64V2 for X64V3Token

Source§

impl Sealed for X64V3Token

Auto Trait Implementations§

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dest: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dest. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.
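
In practice this blanket impl means implementing `From` is enough to call `.into()` — a small illustration:

```rust
// Implementing From<Celsius> for Fahrenheit gives
// Celsius -> Fahrenheit via .into() for free.
struct Celsius(f64);
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Self {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    let f: Fahrenheit = Celsius(100.0).into(); // via the blanket impl
    assert_eq!(f.0, 212.0);
}
```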

Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.