safe_unaligned_simd

Safe wrappers for unaligned SIMD load and store operations.

The goal of this crate is to remove the need for "unnecessary unsafe" code when using vector intrinsics with no alignment requirements.

Platform intrinsics that take raw pointers are wrapped in functions that take Rust reference types as arguments instead.
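As a minimal sketch of the difference (the safe_unaligned_simd::x86_64 module path and the exact wrapper signature are assumptions here; see the crate documentation for the real API), compare loading four floats through the raw intrinsic with loading them through a wrapper:

use core::arch::x86_64::{__m128, _mm_loadu_ps};

// Raw intrinsic: the pointer-based API forces an `unsafe` block even though
// an unaligned load imposes no alignment requirement on the caller.
#[target_feature(enable = "sse")]
fn load_raw(data: &[f32; 4]) -> __m128 {
    // SAFETY: the reference guarantees a valid region of four readable f32s.
    unsafe { _mm_loadu_ps(data.as_ptr()) }
}

// Wrapped intrinsic: the reference type already encodes validity, so no
// `unsafe` block is needed at the call site.
#[target_feature(enable = "sse")]
fn load_wrapped(data: &[f32; 4]) -> __m128 {
    safe_unaligned_simd::x86_64::_mm_loadu_ps(data)
}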

MSRV: 1.87

Implemented Intrinsics

x86_64

  • sse, sse2, avx

Some example function signatures:

#[target_feature(enable = "sse")]
fn _mm_storeu_ps(mem_addr: &mut [f32; 4], a: __m128);
#[target_feature(enable = "sse2")]
fn _mm_store_sd(mem_addr: &mut f64, a: __m128d);
#[target_feature(enable = "avx")]
fn _mm256_loadu2_m128(hiaddr: &[f32; 4], loaddr: &[f32; 4]) -> __m256;
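As a usage sketch (again assuming a safe_unaligned_simd::x86_64 module and a mirrored _mm_loadu_ps load signature), a round trip through a vector register needs no unsafe at the call site on x86_64, where sse is a baseline target feature:

use core::arch::x86_64::{__m128, _mm_add_ps};
use safe_unaligned_simd::x86_64::{_mm_loadu_ps, _mm_storeu_ps};

fn double_in_place(values: &mut [f32; 4]) {
    // `sse` is part of the x86_64 baseline, so these `#[target_feature]`
    // functions (the crate's wrappers and, since Rust 1.87, pointer-free
    // core intrinsics like `_mm_add_ps`) are callable from safe code
    // without runtime feature detection.
    let v: __m128 = _mm_loadu_ps(values);
    let doubled = _mm_add_ps(v, v);
    _mm_storeu_ps(values, doubled);
}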

Currently, there is no plan to implement gather/scatter or masked load/store intrinsics for this platform.

Other platforms

Not yet supported.

License

This crate is licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.