Module core_arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd)
This is supported on AArch64 only.

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.

All items listed below are nightly-only experimental APIs and are available on AArch64 only. Where an item additionally requires a target feature (crc, neon, or crypto), that feature is noted in parentheses after its name.

Structs

APSR: Application Program Status Register.
SY: Full system is the required shareability domain; reads and writes are the required access types.
float32x2_t: ARM-specific 64-bit wide vector of two packed f32.
float32x4_t: ARM-specific 128-bit wide vector of four packed f32.
float64x1_t: ARM-specific 64-bit wide vector of one packed f64.
float64x2_t: ARM-specific 128-bit wide vector of two packed f64.
int16x4_t: ARM-specific 64-bit wide vector of four packed i16.
int16x8_t: ARM-specific 128-bit wide vector of eight packed i16.
int32x2_t: ARM-specific 64-bit wide vector of two packed i32.
int32x4_t: ARM-specific 128-bit wide vector of four packed i32.
int64x1_t: ARM-specific 64-bit wide vector of one packed i64.
int64x2_t: ARM-specific 128-bit wide vector of two packed i64.
int8x8_t: ARM-specific 64-bit wide vector of eight packed i8.
int8x16_t: ARM-specific 128-bit wide vector of sixteen packed i8.
int8x16x2_t: ARM-specific type containing two int8x16_t vectors.
int8x16x3_t: ARM-specific type containing three int8x16_t vectors.
int8x16x4_t: ARM-specific type containing four int8x16_t vectors.
int8x8x2_t: ARM-specific type containing two int8x8_t vectors.
int8x8x3_t: ARM-specific type containing three int8x8_t vectors.
int8x8x4_t: ARM-specific type containing four int8x8_t vectors.
poly16x4_t: ARM-specific 64-bit wide polynomial vector of four packed u16.
poly16x8_t: ARM-specific 128-bit wide polynomial vector of eight packed u16.
poly64x1_t: ARM-specific 64-bit wide vector of one packed p64.
poly64x2_t: ARM-specific 128-bit wide vector of two packed p64.
poly8x8_t: ARM-specific 64-bit wide polynomial vector of eight packed u8.
poly8x16_t: ARM-specific 128-bit wide polynomial vector of sixteen packed u8.
poly8x16x2_t: ARM-specific type containing two poly8x16_t vectors.
poly8x16x3_t: ARM-specific type containing three poly8x16_t vectors.
poly8x16x4_t: ARM-specific type containing four poly8x16_t vectors.
poly8x8x2_t: ARM-specific type containing two poly8x8_t vectors.
poly8x8x3_t: ARM-specific type containing three poly8x8_t vectors.
poly8x8x4_t: ARM-specific type containing four poly8x8_t vectors.
uint16x4_t: ARM-specific 64-bit wide vector of four packed u16.
uint16x8_t: ARM-specific 128-bit wide vector of eight packed u16.
uint32x2_t: ARM-specific 64-bit wide vector of two packed u32.
uint32x4_t: ARM-specific 128-bit wide vector of four packed u32.
uint64x1_t: ARM-specific 64-bit wide vector of one packed u64.
uint64x2_t: ARM-specific 128-bit wide vector of two packed u64.
uint8x8_t: ARM-specific 64-bit wide vector of eight packed u8.
uint8x16_t: ARM-specific 128-bit wide vector of sixteen packed u8.
uint8x16x2_t: ARM-specific type containing two uint8x16_t vectors.
uint8x16x3_t: ARM-specific type containing three uint8x16_t vectors.
uint8x16x4_t: ARM-specific type containing four uint8x16_t vectors.
uint8x8x2_t: ARM-specific type containing two uint8x8_t vectors.
uint8x8x3_t: ARM-specific type containing three uint8x8_t vectors.
uint8x8x4_t: ARM-specific type containing four uint8x8_t vectors.
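
The vector types above are opaque wrappers around fixed-size lanes. A minimal sketch of building and inspecting them (assuming an AArch64 target and, for this snapshot, a nightly toolchain; core::mem::transmute is used here purely as an illustration trick to move between the vector types and ordinary arrays of the same size):

    #[cfg(target_arch = "aarch64")]
    unsafe fn make_vectors() {
        use core::arch::aarch64::{float32x4_t, uint8x16_t};
        use core::mem::transmute;

        // A 128-bit vector of sixteen packed u8 lanes, built from a plain array.
        let bytes: uint8x16_t = transmute([0u8; 16]);
        // A 128-bit vector of four packed f32 lanes.
        let floats: float32x4_t = transmute([1.0f32, 2.0, 3.0, 4.0]);
        // Converting back to arrays is the easiest way to look at the lanes.
        let _byte_lanes: [u8; 16] = transmute(bytes);
        let _float_lanes: [f32; 4] = transmute(floats);
    }

In real code the NEON load and store intrinsics are the usual way to move data in and out of these types; transmute is used in the sketches on this page only to keep them self-contained.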

Functions

__breakpoint: Inserts a breakpoint instruction.
__crc32b (crc): CRC32 single round checksum for bytes (8 bits).
__crc32h (crc): CRC32 single round checksum for half words (16 bits).
__crc32w (crc): CRC32 single round checksum for words (32 bits).
__crc32d (crc): CRC32 single round checksum for double words (64 bits).
__crc32cb (crc): CRC32-C single round checksum for bytes (8 bits).
__crc32ch (crc): CRC32-C single round checksum for half words (16 bits).
__crc32cw (crc): CRC32-C single round checksum for words (32 bits).
__crc32cd (crc): CRC32-C single round checksum for double words (64 bits).
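
As an illustration of the CRC intrinsics, a byte-at-a-time checksum loop might look like the following sketch (assumptions: an AArch64 target with the crc target feature; the !0 seed and final inversion follow the common CRC-32 convention and are not mandated by the intrinsic itself):

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "crc")]
    unsafe fn crc32_of_bytes(data: &[u8]) -> u32 {
        use core::arch::aarch64::__crc32b;

        // Feed one byte per round into the accumulator.
        let mut crc = !0u32;
        for &byte in data {
            crc = __crc32b(crc, byte);
        }
        !crc
    }

The wider variants (__crc32h, __crc32w, __crc32d) follow the same pattern but consume 16, 32, or 64 bits per round, which is how a fast implementation would process aligned bulk data.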

__dmb: Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.
__dsb: Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.
__isb: Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.
__nop: Generates an unspecified no-op instruction.
__rsr: Reads a 32-bit system register.
__rsrp: Reads a system register containing an address.
__wsr: Writes a 32-bit system register.
__wsrp: Writes a system register containing an address.
_cls_u32: Counts the leading most significant bits set.
_cls_u64: Counts the leading most significant bits set.
_clz_u64: Count Leading Zeros.
_rbit_u64: Reverse the bit order.
_rev_u16: Reverse the order of the bytes.
_rev_u32: Reverse the order of the bytes.
_rev_u64: Reverse the order of the bytes.
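
A short sketch of the scalar bit- and byte-reversal helpers (assuming an AArch64 target; it also assumes these functions return the same integer width they accept, which matches the underlying RBIT and REV instructions):

    #[cfg(target_arch = "aarch64")]
    unsafe fn bit_tricks(x: u64) {
        use core::arch::aarch64::{_rbit_u64, _rev_u64};

        // Reverse the bit order of x (RBIT).
        let reversed_bits = _rbit_u64(x);
        assert_eq!(reversed_bits, x.reverse_bits());

        // Reverse the byte order of x (REV), i.e. an endianness swap.
        let swapped_bytes = _rev_u64(x);
        assert_eq!(swapped_bytes, x.swap_bytes());
    }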

brk: Generates the trap instruction BRK 1.

vadd_f32 (neon): Vector add.
vadd_f64 (neon): Vector add.
vadd_s8 (neon): Vector add.
vadd_s16 (neon): Vector add.
vadd_s32 (neon): Vector add.
vadd_u8 (neon): Vector add.
vadd_u16 (neon): Vector add.
vadd_u32 (neon): Vector add.
vaddd_s64 (neon): Vector add.
vaddd_u64 (neon): Vector add.
vaddl_s8 (neon): Vector long add.
vaddl_s16 (neon): Vector long add.
vaddl_s32 (neon): Vector long add.
vaddl_u8 (neon): Vector long add.
vaddl_u16 (neon): Vector long add.
vaddl_u32 (neon): Vector long add.
vaddq_f32 (neon): Vector add.
vaddq_f64 (neon): Vector add.
vaddq_s8 (neon): Vector add.
vaddq_s16 (neon): Vector add.
vaddq_s32 (neon): Vector add.
vaddq_s64 (neon): Vector add.
vaddq_u8 (neon): Vector add.
vaddq_u16 (neon): Vector add.
vaddq_u32 (neon): Vector add.
vaddq_u64 (neon): Vector add.
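
A minimal sketch of the plain and widening ("long") adds (assuming an AArch64 target with the neon target feature; transmute is used only to build and inspect the vectors):

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn adds() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        // Lane-wise add of two uint32x2_t values: [1 + 10, 2 + 20] = [11, 22].
        let a: uint32x2_t = transmute([1u32, 2]);
        let b: uint32x2_t = transmute([10u32, 20]);
        let sum: [u32; 2] = transmute(vadd_u32(a, b));
        assert_eq!(sum, [11, 22]);

        // Widening add: u8 inputs, u16 result, so 200 + 100 does not wrap.
        let c: uint8x8_t = transmute([200u8; 8]);
        let d: uint8x8_t = transmute([100u8; 8]);
        let wide: [u16; 8] = transmute(vaddl_u8(c, d));
        assert_eq!(wide, [300u16; 8]);
    }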

vaesdq_u8 (crypto): AES single round decryption.
vaeseq_u8 (crypto): AES single round encryption.
vaesimcq_u8 (crypto): AES inverse mix columns.
vaesmcq_u8 (crypto): AES mix columns.
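
Chained together, these primitives form one round of the AES block cipher. A hedged sketch (assuming the crypto target feature named above; AESE performs AddRoundKey, SubBytes and ShiftRows, and AESMC performs MixColumns):

    #[cfg(target_arch = "aarch64")]
    mod aes_sketch {
        use core::arch::aarch64::*;

        // One forward AES round; a full cipher chains these and handles the
        // last round separately (no MixColumns, final key XOR).
        #[target_feature(enable = "crypto")]
        pub unsafe fn aes_encrypt_round(state: uint8x16_t, round_key: uint8x16_t) -> uint8x16_t {
            vaesmcq_u8(vaeseq_u8(state, round_key))
        }
    }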

vcombine_f32 (neon): Vector combine.
vcombine_f64 (neon): Vector combine.
vcombine_p8 (neon): Vector combine.
vcombine_p16 (neon): Vector combine.
vcombine_p64 (neon): Vector combine.
vcombine_s8 (neon): Vector combine.
vcombine_s16 (neon): Vector combine.
vcombine_s32 (neon): Vector combine.
vcombine_s64 (neon): Vector combine.
vcombine_u8 (neon): Vector combine.
vcombine_u16 (neon): Vector combine.
vcombine_u32 (neon): Vector combine.
vcombine_u64 (neon): Vector combine.
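
A short sketch of vcombine (assuming neon): the two 64-bit halves are glued into one 128-bit vector, with the first argument becoming the low half.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn combine_halves() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let low: uint8x8_t = transmute([0u8, 1, 2, 3, 4, 5, 6, 7]);
        let high: uint8x8_t = transmute([8u8, 9, 10, 11, 12, 13, 14, 15]);
        let whole: [u8; 16] = transmute(vcombine_u8(low, high));
        assert_eq!(whole, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
    }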

vmaxv_f32 (neon): Horizontal vector max.
vmaxv_s8 (neon): Horizontal vector max.
vmaxv_s16 (neon): Horizontal vector max.
vmaxv_s32 (neon): Horizontal vector max.
vmaxv_u8 (neon): Horizontal vector max.
vmaxv_u16 (neon): Horizontal vector max.
vmaxv_u32 (neon): Horizontal vector max.
vmaxvq_f32 (neon): Horizontal vector max.
vmaxvq_f64 (neon): Horizontal vector max.
vmaxvq_s8 (neon): Horizontal vector max.
vmaxvq_s16 (neon): Horizontal vector max.
vmaxvq_s32 (neon): Horizontal vector max.
vmaxvq_u8 (neon): Horizontal vector max.
vmaxvq_u16 (neon): Horizontal vector max.
vmaxvq_u32 (neon): Horizontal vector max.
vminv_f32 (neon): Horizontal vector min.
vminv_s8 (neon): Horizontal vector min.
vminv_s16 (neon): Horizontal vector min.
vminv_s32 (neon): Horizontal vector min.
vminv_u8 (neon): Horizontal vector min.
vminv_u16 (neon): Horizontal vector min.
vminv_u32 (neon): Horizontal vector min.
vminvq_f32 (neon): Horizontal vector min.
vminvq_f64 (neon): Horizontal vector min.
vminvq_s8 (neon): Horizontal vector min.
vminvq_s16 (neon): Horizontal vector min.
vminvq_s32 (neon): Horizontal vector min.
vminvq_u8 (neon): Horizontal vector min.
vminvq_u16 (neon): Horizontal vector min.
vminvq_u32 (neon): Horizontal vector min.
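
A small sketch of the horizontal reductions (assuming neon): vmaxv*/vminv* collapse all lanes of a single vector into one scalar.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn reduce() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let v: uint8x16_t =
            transmute([3u8, 9, 1, 7, 2, 8, 4, 6, 5, 0, 11, 10, 13, 12, 15, 14]);
        let max: u8 = vmaxvq_u8(v);
        let min: u8 = vminvq_u8(v);
        assert_eq!((min, max), (0, 15));
    }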

vmovl_s8 (neon): Vector long move.
vmovl_s16 (neon): Vector long move.
vmovl_s32 (neon): Vector long move.
vmovl_u8 (neon): Vector long move.
vmovl_u16 (neon): Vector long move.
vmovl_u32 (neon): Vector long move.
vmovn_s16 (neon): Vector narrow integer.
vmovn_s32 (neon): Vector narrow integer.
vmovn_s64 (neon): Vector narrow integer.
vmovn_u16 (neon): Vector narrow integer.
vmovn_u32 (neon): Vector narrow integer.
vmovn_u64 (neon): Vector narrow integer.
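
A sketch of the widening and narrowing moves (assuming neon): vmovl_u8 zero-extends eight u8 lanes to u16, and vmovn_u16 truncates them back by keeping the low byte of each lane.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn widen_narrow() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let narrow: uint8x8_t = transmute([0u8, 1, 2, 3, 250, 251, 252, 253]);
        // Widen: each u8 lane becomes a u16 lane with the same value.
        let wide: uint16x8_t = vmovl_u8(narrow);
        let as_u16: [u16; 8] = transmute(wide);
        assert_eq!(as_u16, [0, 1, 2, 3, 250, 251, 252, 253]);

        // Narrow: keep only the low byte of each u16 lane.
        let back: [u8; 8] = transmute(vmovn_u16(wide));
        assert_eq!(back, [0, 1, 2, 3, 250, 251, 252, 253]);
    }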

vpmax_f32 (neon): Folding maximum of adjacent pairs.
vpmax_s8 (neon): Folding maximum of adjacent pairs.
vpmax_s16 (neon): Folding maximum of adjacent pairs.
vpmax_s32 (neon): Folding maximum of adjacent pairs.
vpmax_u8 (neon): Folding maximum of adjacent pairs.
vpmax_u16 (neon): Folding maximum of adjacent pairs.
vpmax_u32 (neon): Folding maximum of adjacent pairs.
vpmaxq_f32 (neon): Folding maximum of adjacent pairs.
vpmaxq_f64 (neon): Folding maximum of adjacent pairs.
vpmaxq_s8 (neon): Folding maximum of adjacent pairs.
vpmaxq_s16 (neon): Folding maximum of adjacent pairs.
vpmaxq_s32 (neon): Folding maximum of adjacent pairs.
vpmaxq_u8 (neon): Folding maximum of adjacent pairs.
vpmaxq_u16 (neon): Folding maximum of adjacent pairs.
vpmaxq_u32 (neon): Folding maximum of adjacent pairs.
vpmin_f32 (neon): Folding minimum of adjacent pairs.
vpmin_s8 (neon): Folding minimum of adjacent pairs.
vpmin_s16 (neon): Folding minimum of adjacent pairs.
vpmin_s32 (neon): Folding minimum of adjacent pairs.
vpmin_u8 (neon): Folding minimum of adjacent pairs.
vpmin_u16 (neon): Folding minimum of adjacent pairs.
vpmin_u32 (neon): Folding minimum of adjacent pairs.
vpminq_f32 (neon): Folding minimum of adjacent pairs.
vpminq_f64 (neon): Folding minimum of adjacent pairs.
vpminq_s8 (neon): Folding minimum of adjacent pairs.
vpminq_s16 (neon): Folding minimum of adjacent pairs.
vpminq_s32 (neon): Folding minimum of adjacent pairs.
vpminq_u8 (neon): Folding minimum of adjacent pairs.
vpminq_u16 (neon): Folding minimum of adjacent pairs.
vpminq_u32 (neon): Folding minimum of adjacent pairs.
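
A sketch of the pairwise ("folding") reductions (assuming neon): adjacent lane pairs of the first operand fill the low half of the result and pairs of the second operand fill the high half.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn pairwise_max() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let a: uint8x8_t = transmute([1u8, 4, 2, 8, 5, 7, 6, 3]);
        let b: uint8x8_t = transmute([9u8, 0, 11, 10, 13, 12, 15, 14]);
        // max(1,4), max(2,8), max(5,7), max(6,3), then the same pairs of b.
        let m: [u8; 8] = transmute(vpmax_u8(a, b));
        assert_eq!(m, [4, 8, 7, 6, 9, 11, 13, 15]);
    }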

vqtbl1_p8 (neon): Table look-up.
vqtbl1_s8 (neon): Table look-up.
vqtbl1_u8 (neon): Table look-up.
vqtbl1q_p8 (neon): Table look-up.
vqtbl1q_s8 (neon): Table look-up.
vqtbl1q_u8 (neon): Table look-up.
vqtbl2_p8 (neon): Table look-up.
vqtbl2_s8 (neon): Table look-up.
vqtbl2_u8 (neon): Table look-up.
vqtbl2q_p8 (neon): Table look-up.
vqtbl2q_s8 (neon): Table look-up.
vqtbl2q_u8 (neon): Table look-up.
vqtbl3_p8 (neon): Table look-up.
vqtbl3_s8 (neon): Table look-up.
vqtbl3_u8 (neon): Table look-up.
vqtbl3q_p8 (neon): Table look-up.
vqtbl3q_s8 (neon): Table look-up.
vqtbl3q_u8 (neon): Table look-up.
vqtbl4_p8 (neon): Table look-up.
vqtbl4_s8 (neon): Table look-up.
vqtbl4_u8 (neon): Table look-up.
vqtbl4q_p8 (neon): Table look-up.
vqtbl4q_s8 (neon): Table look-up.
vqtbl4q_u8 (neon): Table look-up.
vqtbx1_p8 (neon): Extended table look-up.
vqtbx1_s8 (neon): Extended table look-up.
vqtbx1_u8 (neon): Extended table look-up.
vqtbx1q_p8 (neon): Extended table look-up.
vqtbx1q_s8 (neon): Extended table look-up.
vqtbx1q_u8 (neon): Extended table look-up.
vqtbx2_p8 (neon): Extended table look-up.
vqtbx2_s8 (neon): Extended table look-up.
vqtbx2_u8 (neon): Extended table look-up.
vqtbx2q_p8 (neon): Extended table look-up.
vqtbx2q_s8 (neon): Extended table look-up.
vqtbx2q_u8 (neon): Extended table look-up.
vqtbx3_p8 (neon): Extended table look-up.
vqtbx3_s8 (neon): Extended table look-up.
vqtbx3_u8 (neon): Extended table look-up.
vqtbx3q_p8 (neon): Extended table look-up.
vqtbx3q_s8 (neon): Extended table look-up.
vqtbx3q_u8 (neon): Extended table look-up.
vqtbx4_p8 (neon): Extended table look-up.
vqtbx4_s8 (neon): Extended table look-up.
vqtbx4_u8 (neon): Extended table look-up.
vqtbx4q_p8 (neon): Extended table look-up.
vqtbx4q_s8 (neon): Extended table look-up.
vqtbx4q_u8 (neon): Extended table look-up.
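
A sketch of the table look-ups (assuming neon): vqtbl1q_u8 selects bytes of a 16-byte table by index, producing zero for out-of-range indices, while the vqtbx* forms keep the corresponding byte of a fallback vector instead.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn table_lookup() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let table: uint8x16_t = transmute([
            100u8, 101, 102, 103, 104, 105, 106, 107,
            108, 109, 110, 111, 112, 113, 114, 115,
        ]);
        // Indices 0..=15 select from the table; 255 is deliberately out of range.
        let idx: uint8x16_t = transmute([
            0u8, 15, 1, 14, 2, 13, 3, 12, 4, 11, 5, 10, 255, 255, 255, 255,
        ]);

        let looked_up: [u8; 16] = transmute(vqtbl1q_u8(table, idx));
        assert_eq!(&looked_up[..4], &[100, 115, 101, 114]);
        // Out-of-range lanes come back as zero.
        assert_eq!(&looked_up[12..], &[0, 0, 0, 0]);
    }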

vrsqrte_f32 (neon): Reciprocal square-root estimate.
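
A sketch of the reciprocal square-root estimate (assuming neon). The result is only an approximation with roughly eight bits of precision, so the comparison below uses a loose tolerance; a full-precision routine would refine the estimate with Newton-Raphson steps.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn rsqrt_estimate() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let x: float32x2_t = transmute([4.0f32, 16.0]);
        let est: [f32; 2] = transmute(vrsqrte_f32(x));
        // The exact values would be [0.5, 0.25].
        assert!((est[0] - 0.5).abs() < 0.01);
        assert!((est[1] - 0.25).abs() < 0.01);
    }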

vsha1cq_u32 (crypto): SHA1 hash update accelerator, choose.
vsha1h_u32 (crypto): SHA1 fixed rotate.
vsha1mq_u32 (crypto): SHA1 hash update accelerator, majority.
vsha1pq_u32 (crypto): SHA1 hash update accelerator, parity.
vsha1su0q_u32 (crypto): SHA1 schedule update accelerator, first part.
vsha1su1q_u32 (crypto): SHA1 schedule update accelerator, second part.
vsha256h2q_u32 (crypto): SHA256 hash update accelerator, upper part.
vsha256hq_u32 (crypto): SHA256 hash update accelerator.
vsha256su0q_u32 (crypto): SHA256 schedule update accelerator, first part.
vsha256su1q_u32 (crypto): SHA256 schedule update accelerator, second part.
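
As an illustration of the SHA-256 scheduling helpers, the following hedged sketch extends the message schedule by four words. The argument order shown follows the ACLE description of these intrinsics and should be checked against the intrinsic documentation; the crypto target feature is assumed.

    #[cfg(target_arch = "aarch64")]
    mod sha_sketch {
        use core::arch::aarch64::*;

        // Computes W[16..=19] from the first sixteen schedule words W[0..=15],
        // held as four uint32x4_t vectors of four words each.
        #[target_feature(enable = "crypto")]
        pub unsafe fn next_schedule_words(
            w0_3: uint32x4_t,
            w4_7: uint32x4_t,
            w8_11: uint32x4_t,
            w12_15: uint32x4_t,
        ) -> uint32x4_t {
            let partial = vsha256su0q_u32(w0_3, w4_7);
            vsha256su1q_u32(partial, w8_11, w12_15)
        }
    }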

vtbl1_p8 (neon): Table look-up.
vtbl1_s8 (neon): Table look-up.
vtbl1_u8 (neon): Table look-up.
vtbl2_p8 (neon): Table look-up.
vtbl2_s8 (neon): Table look-up.
vtbl2_u8 (neon): Table look-up.
vtbl3_p8 (neon): Table look-up.
vtbl3_s8 (neon): Table look-up.
vtbl3_u8 (neon): Table look-up.
vtbl4_p8 (neon): Table look-up.
vtbl4_s8 (neon): Table look-up.
vtbl4_u8 (neon): Table look-up.
vtbx1_p8 (neon): Extended table look-up.
vtbx1_s8 (neon): Extended table look-up.
vtbx1_u8 (neon): Extended table look-up.
vtbx2_p8 (neon): Extended table look-up.
vtbx2_s8 (neon): Extended table look-up.
vtbx2_u8 (neon): Extended table look-up.
vtbx3_p8 (neon): Extended table look-up.
vtbx3_s8 (neon): Extended table look-up.
vtbx3_u8 (neon): Extended table look-up.
vtbx4_p8 (neon): Extended table look-up.
vtbx4_s8 (neon): Extended table look-up.
vtbx4_u8 (neon): Extended table look-up.
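
Finally, a sketch of the older 64-bit table look-up (assuming neon): vtbl1_u8 indexes the eight bytes of a single 64-bit table and returns zero for indices of 8 or more, which makes it a convenient byte shuffle.

    #[cfg(target_arch = "aarch64")]
    #[target_feature(enable = "neon")]
    unsafe fn small_table_lookup() {
        use core::arch::aarch64::*;
        use core::mem::transmute;

        let table: uint8x8_t = transmute([10u8, 11, 12, 13, 14, 15, 16, 17]);
        let idx: uint8x8_t = transmute([7u8, 6, 5, 4, 3, 2, 1, 0]);
        let reversed: [u8; 8] = transmute(vtbl1_u8(table, idx));
        assert_eq!(reversed, [17, 16, 15, 14, 13, 12, 11, 10]);
    }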