#![cfg(target_feature="sse")]

use super::*;

/// A 128-bit SIMD value. Always used as `f32x4`.
///
/// * This documentation numbers the lanes based on the index you'd need to use
///   to access that lane if the value were cast to an array.
/// * This is also the way that the type is printed out using
///   [`Debug`](core::fmt::Debug), [`Display`](core::fmt::Display),
///   [`LowerExp`](core::fmt::LowerExp), and [`UpperExp`](core::fmt::UpperExp).
/// * This is not necessarily the ordering you'll see if you look at an `xmm`
///   register in a debugger! Basically because of how little-endian works.
/// * Most operations work per-lane, "lanewise".
/// * Some operations work using lane 0 only. When appropriate, these have the
///   same name as the lanewise version but with a `0` on the end (example:
///   `cmp_eq` and `cmp_eq0`). With the 0 version the other lanes are simply
///   copied forward from `self`.
/// * Comparisons give "bool-ish" output, where all bits 1 in a lane is true,
///   and all bits 0 in a lane is false. Unfortunately, all bits 1 with an `f32`
///   is one of the `NaN` values, and `NaN != NaN`, so it can be a little tricky
///   to work with until you're used to it.
#[derive(Clone, Copy)]
#[allow(bad_style)]
#[repr(transparent)]
pub struct m128(pub __m128);

unsafe impl Zeroable for m128 {}
unsafe impl Pod for m128 {}

impl Default for m128 {
  #[inline(always)]
  #[must_use]
  fn default() -> Self {
    Self::zeroed()
  }
}

#[cfg(feature = "serde")]
impl serde::Serialize for m128 {
  #[inline(always)]
  fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
  where
    S: serde::Serializer,
  {
    let floats: [f32; 4] = cast(*self);
    floats.serialize(serializer)
  }
}

#[cfg(feature = "serde")]
impl<'de> serde::Deserialize<'de> for m128 {
  #[inline(always)]
  fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
  where
    D: serde::Deserializer<'de>,
  {
    let floats: [f32; 4] = <[f32; 4]>::deserialize(deserializer)?;
    Ok(cast(floats))
  }
}

impl core::fmt::Debug for m128 {
  /// Debug formats in offset order.
  ///
  /// All `Formatter` information is passed directly to each individual `f32`
  /// lane being formatted.
  fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
    let a: [f32; 4] = cast(self.0);
    f.write_str("m128(")?;
    core::fmt::Debug::fmt(&a[0], f)?;
    f.write_str(", ")?;
    core::fmt::Debug::fmt(&a[1], f)?;
    f.write_str(", ")?;
    core::fmt::Debug::fmt(&a[2], f)?;
    f.write_str(", ")?;
    core::fmt::Debug::fmt(&a[3], f)?;
    f.write_str(")")
  }
}

impl core::fmt::Display for m128 {
  /// Display formats in offset order.
  ///
  /// All `Formatter` information is passed directly to each individual `f32`
  /// lane being formatted.
  fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
    let a: [f32; 4] = cast(self.0);
    f.write_str("m128(")?;
    core::fmt::Display::fmt(&a[0], f)?;
    f.write_str(", ")?;
    core::fmt::Display::fmt(&a[1], f)?;
    f.write_str(", ")?;
    core::fmt::Display::fmt(&a[2], f)?;
    f.write_str(", ")?;
    core::fmt::Display::fmt(&a[3], f)?;
    f.write_str(")")
  }
}

impl core::fmt::LowerExp for m128 {
  /// LowerExp formats in offset order.
  ///
  /// All `Formatter` information is passed directly to each individual `f32`
  /// lane being formatted.
  fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
    let a: [f32; 4] = cast(self.0);
    f.write_str("m128(")?;
    core::fmt::LowerExp::fmt(&a[0], f)?;
    f.write_str(", ")?;
    core::fmt::LowerExp::fmt(&a[1], f)?;
    f.write_str(", ")?;
    core::fmt::LowerExp::fmt(&a[2], f)?;
    f.write_str(", ")?;
    core::fmt::LowerExp::fmt(&a[3], f)?;
    f.write_str(")")
  }
}

impl core::fmt::UpperExp for m128 {
  /// UpperExp formats in offset order.
  ///
  /// All `Formatter` information is passed directly to each individual `f32`
  /// lane being formatted.
  fn fmt(&self, f: &mut core::fmt::Formatter) -> core::fmt::Result {
    let a: [f32; 4] = cast(self.0);
    f.write_str("m128(")?;
    core::fmt::UpperExp::fmt(&a[0], f)?;
    f.write_str(", ")?;
    core::fmt::UpperExp::fmt(&a[1], f)?;
    f.write_str(", ")?;
    core::fmt::UpperExp::fmt(&a[2], f)?;
    f.write_str(", ")?;
    core::fmt::UpperExp::fmt(&a[3], f)?;
    f.write_str(")")
  }
}

impl Add for m128 {
  type Output = Self;
  /// Lanewise addition.
  #[inline(always)]
  #[must_use]
  fn add(self, rhs: Self) -> Self {
    Self(unsafe { _mm_add_ps(self.0, rhs.0) })
  }
}

impl AddAssign for m128 {
  /// Lanewise addition.
  #[inline(always)]
  fn add_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_add_ps(self.0, rhs.0) };
  }
}

impl BitAnd for m128 {
  type Output = Self;
  /// Bitwise AND.
  #[inline(always)]
  #[must_use]
  fn bitand(self, rhs: Self) -> Self {
    Self(unsafe { _mm_and_ps(self.0, rhs.0) })
  }
}

impl BitAndAssign for m128 {
  /// Bitwise AND.
  #[inline(always)]
  fn bitand_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_and_ps(self.0, rhs.0) };
  }
}

impl Div for m128 {
  type Output = Self;
  /// Lanewise division.
  #[inline(always)]
  #[must_use]
  fn div(self, rhs: Self) -> Self {
    Self(unsafe { _mm_div_ps(self.0, rhs.0) })
  }
}

impl DivAssign for m128 {
  /// Lanewise division.
  #[inline(always)]
  fn div_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_div_ps(self.0, rhs.0) };
  }
}

impl Mul for m128 {
  type Output = Self;
  /// Lanewise multiplication.
  #[inline(always)]
  #[must_use]
  fn mul(self, rhs: Self) -> Self {
    Self(unsafe { _mm_mul_ps(self.0, rhs.0) })
  }
}

impl MulAssign for m128 {
  /// Lanewise multiplication.
  #[inline(always)]
  fn mul_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_mul_ps(self.0, rhs.0) };
  }
}

impl Sub for m128 {
  type Output = Self;
  /// Lanewise subtraction.
  #[inline(always)]
  #[must_use]
  fn sub(self, rhs: Self) -> Self {
    Self(unsafe { _mm_sub_ps(self.0, rhs.0) })
  }
}

impl SubAssign for m128 {
  /// Lanewise subtraction.
  #[inline(always)]
  fn sub_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_sub_ps(self.0, rhs.0) };
  }
}

impl BitOr for m128 {
  type Output = Self;
  /// Bitwise OR.
  #[inline(always)]
  #[must_use]
  fn bitor(self, rhs: Self) -> Self {
    Self(unsafe { _mm_or_ps(self.0, rhs.0) })
  }
}

impl BitOrAssign for m128 {
  /// Bitwise OR.
  #[inline(always)]
  fn bitor_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_or_ps(self.0, rhs.0) };
  }
}

impl BitXor for m128 {
  type Output = Self;
  /// Bitwise XOR.
  #[inline(always)]
  #[must_use]
  fn bitxor(self, rhs: Self) -> Self {
    Self(unsafe { _mm_xor_ps(self.0, rhs.0) })
  }
}

impl BitXorAssign for m128 {
  /// Bitwise XOR.
  #[inline(always)]
  fn bitxor_assign(&mut self, rhs: Self) {
    self.0 = unsafe { _mm_xor_ps(self.0, rhs.0) };
  }
}

impl Neg for m128 {
  type Output = Self;
  /// Lanewise `0.0 - self`
  #[inline(always)]
  #[must_use]
  fn neg(self) -> Self {
    Self(unsafe { _mm_sub_ps(_mm_setzero_ps(), self.0) })
  }
}

impl Not for m128 {
  type Output = Self;
  /// Bitwise negation.
  #[inline(always)]
  #[must_use]
  fn not(self) -> Self {
    let f: f32 = cast(-1_i32);
    let b = Self::splat(f);
    self ^ b
  }
}

/// # SSE Operations
impl m128 {
  /// Adds the 0th lanes without affecting the other lanes of `self`.
  #[inline(always)]
  #[must_use]
  pub fn add0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_add_ss(self.0, rhs.0) })
  }

  /// Bitwise `(!self) & rhs`
  #[inline(always)]
  #[must_use]
  pub fn andnot(self, rhs: Self) -> Self {
    Self(unsafe { _mm_andnot_ps(self.0, rhs.0) })
  }

  /// Lanewise `self == rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_eq(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpeq_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self == rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_eq0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpeq_ss(self.0, rhs.0) })
  }

  /// Lanewise `self >= rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ge(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpge_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self >= rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ge0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpge_ss(self.0, rhs.0) })
  }

  /// Lanewise `self > rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_gt(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpgt_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self > rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_gt0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpgt_ss(self.0, rhs.0) })
  }

  /// Lanewise `self <= rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_le(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmple_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self <= rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_le0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmple_ss(self.0, rhs.0) })
  }

  /// Lanewise `self < rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_lt(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmplt_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self < rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_lt0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmplt_ss(self.0, rhs.0) })
  }

  /// Lanewise `self != rhs` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ne(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpneq_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self != rhs`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ne0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpneq_ss(self.0, rhs.0) })
  }

  /// Lanewise `!(self >= rhs)` check, bool-ish output.
  ///
  /// Also, this triggers 3rd Impact.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nge(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnge_ps(self.0, rhs.0) })
  }

  /// Lane 0: `!(self >= rhs)`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nge0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnge_ss(self.0, rhs.0) })
  }

  /// Lanewise `!(self > rhs)` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ngt(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpngt_ps(self.0, rhs.0) })
  }

  /// Lane 0: `!(self > rhs)`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ngt0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpngt_ss(self.0, rhs.0) })
  }

  /// Lanewise `!(self <= rhs)` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nle(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnle_ps(self.0, rhs.0) })
  }

  /// Lane 0: `!(self <= rhs)`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nle0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnle_ss(self.0, rhs.0) })
  }

  /// Lanewise `!(self < rhs)` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nlt(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnlt_ps(self.0, rhs.0) })
  }

  /// Lane 0: `!(self < rhs)`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nlt0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpnlt_ss(self.0, rhs.0) })
  }

  /// Lanewise `self.not_nan() & rhs.not_nan()` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ordinary(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpord_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self.not_nan() & rhs.not_nan()`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_ordinary0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpord_ss(self.0, rhs.0) })
  }

  /// Lanewise `self.is_nan() | rhs.is_nan()` check, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nan(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpunord_ps(self.0, rhs.0) })
  }

  /// Lane 0: `self.is_nan() | rhs.is_nan()`, bool-ish output.
  #[inline(always)]
  #[must_use]
  pub fn cmp_nan0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_cmpunord_ss(self.0, rhs.0) })
  }

  /// Lane 0: `self == rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_eq0(self, rhs: Self) -> i32 {
    unsafe { _mm_comieq_ss(self.0, rhs.0) }
  }

  /// Lane 0: `self >= rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_ge0(self, rhs: Self) -> i32 {
    unsafe { _mm_comige_ss(self.0, rhs.0) }
  }

  /// Lane 0: `self > rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_gt0(self, rhs: Self) -> i32 {
    unsafe { _mm_comigt_ss(self.0, rhs.0) }
  }

  /// Lane 0: `self <= rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_le0(self, rhs: Self) -> i32 {
    unsafe { _mm_comile_ss(self.0, rhs.0) }
  }

  /// Lane 0: `self < rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_lt0(self, rhs: Self) -> i32 {
    unsafe { _mm_comilt_ss(self.0, rhs.0) }
  }

  /// Lane 0: `self != rhs`, 0 or 1 `i32` output.
  #[inline(always)]
  #[must_use]
  pub fn cmpi_ne0(self, rhs: Self) -> i32 {
    unsafe { _mm_comineq_ss(self.0, rhs.0) }
  }

  /// Round the `i32` to `f32` and replace lane 0.
  ///
  /// Subject to the current thread's [rounding
  /// mode](https://doc.rust-lang.org/core/arch/x86_64/fn._mm_setcsr.html#rounding-mode)
  #[inline(always)]
  #[must_use]
  pub fn round_replace0_i32(self, rhs: i32) -> Self {
    Self(unsafe { _mm_cvt_si2ss(self.0, rhs) })
  }

  /// Round lane 0 to `i32` and return.
  ///
  /// Subject to the current thread's [rounding
  /// mode](https://doc.rust-lang.org/core/arch/x86_64/fn._mm_setcsr.html#rounding-mode)
  #[inline(always)]
  #[must_use]
  pub fn round_extract0_i32(self) -> i32 {
    unsafe { _mm_cvt_ss2si(self.0) }
  }

  /// Round the `i64` to `f32` and replace lane 0.
  ///
  /// Subject to the current thread's [rounding
  /// mode](https://doc.rust-lang.org/core/arch/x86_64/fn._mm_setcsr.html#rounding-mode)
  ///
  /// Not available to `x86`
  #[inline(always)]
  #[cfg(target_arch = "x86_64")]
  #[must_use]
  pub fn round_replace0_i64(self, rhs: i64) -> Self {
    Self(unsafe { _mm_cvtsi64_ss(self.0, rhs) })
  }

  /// Directly extracts lane 0 as `f32`.
  #[inline(always)]
  #[must_use]
  pub fn extract0(self) -> f32 {
    unsafe { _mm_cvtss_f32(self.0) }
  }

  /// Round lane 0 to `i64` and return.
  ///
  /// Subject to the current thread's [rounding
  /// mode](https://doc.rust-lang.org/core/arch/x86_64/fn._mm_setcsr.html#rounding-mode)
  #[inline(always)]
  #[cfg(target_arch = "x86_64")]
  #[must_use]
  pub fn round_extract0_i64(self) -> i64 {
    unsafe { _mm_cvtss_si64(self.0) }
  }

  /// Truncate lane 0 to `i32` and return.
  #[inline(always)]
  #[must_use]
  pub fn truncate_extract0_i32(self) -> i32 {
    unsafe { _mm_cvtt_ss2si(self.0) }
  }

  /// Truncate lane 0 to `i64` and return.
  #[inline(always)]
  #[must_use]
  #[cfg(target_arch = "x86_64")]
  pub fn truncate_extract0_i64(self) -> i64 {
    unsafe { _mm_cvttss_si64(self.0) }
  }

  /// Divides the 0th lanes without affecting the other lanes of `self`.
  #[inline(always)]
  #[must_use]
  pub fn div0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_div_ss(self.0, rhs.0) })
  }

  /// Loads a 16-byte aligned `f32` array address into an `m128`.
  ///
  /// This produces the same lane order as you'd get if you de-referenced the
  /// pointed to array and then used `transmute`.
  #[inline(always)]
  #[must_use]
  pub fn load(addr: &Align16<[f32; 4]>) -> Self {
    let ptr: *const f32 = addr as *const Align16<[f32; 4]> as *const f32;
    Self(unsafe { _mm_load_ps(ptr) })
  }

  /// Loads the `f32` address into all lanes.
  #[allow(clippy::trivially_copy_pass_by_ref)]
  #[inline(always)]
  #[must_use]
  pub fn load_splat(addr: &f32) -> Self {
    Self(unsafe { _mm_load_ps1(addr) })
  }

  /// Loads the `f32` address into lane 0, other lanes are `0.0`.
  #[allow(clippy::trivially_copy_pass_by_ref)]
  #[inline(always)]
  #[must_use]
  pub fn load0(addr: &f32) -> Self {
    Self(unsafe { _mm_load_ss(addr) })
  }

  /// Loads 16-byte aligned `f32`s into an `m128`.
  ///
  /// This produces the **reverse** lane order as you'd get if you used a
  /// `transmute` on the pointed to array.
  #[inline(always)]
  #[must_use]
  pub fn load_reverse(addr: &Align16<[f32; 4]>) -> Self {
    let ptr: *const f32 = addr as *const Align16<[f32; 4]> as *const f32;
    Self(unsafe { _mm_loadr_ps(ptr) })
  }

  /// Loads four `f32`s into an `m128`.
  ///
  /// This doesn't have the alignment requirements of [`load`](m128::load), but
  /// the lane ordering is the same.
  #[inline(always)]
  #[must_use]
  pub fn load_unaligned(addr: &[f32; 4]) -> Self {
    let ptr: *const f32 = addr as *const [f32; 4] as *const f32;
    Self(unsafe { _mm_loadu_ps(ptr) })
  }

  /// Lanewise maximum.
  #[inline(always)]
  #[must_use]
  pub fn max(self, rhs: Self) -> Self {
    Self(unsafe { _mm_max_ps(self.0, rhs.0) })
  }

  /// Lane 0 maximum, other lanes are `self`.
  #[inline(always)]
  #[must_use]
  pub fn max0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_max_ss(self.0, rhs.0) })
  }

  /// Lanewise minimum.
  #[inline(always)]
  #[must_use]
  pub fn min(self, rhs: Self) -> Self {
    Self(unsafe { _mm_min_ps(self.0, rhs.0) })
  }

  /// Lane 0 minimum, other lanes are `self`.
  #[inline(always)]
  #[must_use]
  pub fn min0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_min_ss(self.0, rhs.0) })
  }

  /// Copies lane 0 from `rhs`, other lanes are `self`.
  #[inline(always)]
  #[must_use]
  pub fn copy0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_move_ss(self.0, rhs.0) })
  }

  /// Copy the high two lanes of `rhs` over top of the low two lanes of `self`,
  /// other lanes unchanged.
  ///
  /// ```txt
  /// out[0] = rhs[2]
  /// out[1] = rhs[3]
  /// out[2] = self[2]
  /// out[3] = self[3]
  /// ```
  #[inline(always)]
  #[must_use]
  pub fn copy_high_low(self, rhs: Self) -> Self {
    Self(unsafe { _mm_movehl_ps(self.0, rhs.0) })
  }

  /// Copy the low two lanes of `rhs` over top of the high two lanes of `self`,
  /// other lanes unchanged.
  ///
  /// ```txt
  /// out[0] = self[0]
  /// out[1] = self[1]
  /// out[2] = rhs[0]
  /// out[3] = rhs[1]
  /// ```
  #[inline(always)]
  #[must_use]
  pub fn copy_low_high(self, rhs: Self) -> Self {
    Self(unsafe { _mm_movelh_ps(self.0, rhs.0) })
  }

  /// Assumes that this is a bool-ish mask and packs it into an `i32`.
  ///
  /// Specifically, the output `i32` has bits 0/1/2/3 set to be the same as the
  /// most significant bit in lanes 0/1/2/3 of `self`.
  ///
  /// (Yes, this name is kinda stupid but I couldn't come up with a better thing
  /// to rename it to, oh well.)
  #[inline(always)]
  #[must_use]
  pub fn move_mask(self) -> i32 {
    unsafe { _mm_movemask_ps(self.0) }
  }

  /// Lanewise approximate reciprocal.
  ///
  /// The maximum relative error for this approximation is less than 1.5*2^-12.
  #[inline(always)]
  #[must_use]
  pub fn reciprocal(self) -> Self {
    Self(unsafe { _mm_rcp_ps(self.0) })
  }

  /// Lane 0 approximate reciprocal, other lanes are `self`.
  ///
  /// The maximum relative error for this approximation is less than 1.5*2^-12.
  #[inline(always)]
  #[must_use]
  pub fn reciprocal0(self) -> Self {
    Self(unsafe { _mm_rcp_ss(self.0) })
  }

  /// Lanewise approximate reciprocal of the square root.
  ///
  /// The maximum relative error for this approximation is less than 1.5*2^-12.
  #[inline(always)]
  #[must_use]
  pub fn reciprocal_sqrt(self) -> Self {
    Self(unsafe { _mm_rsqrt_ps(self.0) })
  }

  /// Lane 0 approximate reciprocal of the square root, other lanes are `self`.
  ///
  /// The maximum relative error for this approximation is less than 1.5*2^-12.
  #[inline(always)]
  #[must_use]
  pub fn reciprocal_sqrt0(self) -> Self {
    Self(unsafe { _mm_rsqrt_ss(self.0) })
  }

  /// Set four `f32` values into an `m128`.
  ///
  /// Because of how little-endian works, this produces the **opposite** lane
  /// order as you'd get compared to putting the arguments into an array and
  /// then using [`load`](m128::load) on that array. Same with using `transmute`
  /// or similar.
  #[inline(always)]
  #[must_use]
  pub fn set(a: f32, b: f32, c: f32, d: f32) -> Self {
    Self(unsafe { _mm_set_ps(a, b, c, d) })
  }

  /// Set the `f32` into all lanes.
  #[inline(always)]
  #[must_use]
  pub fn splat(a: f32) -> Self {
    Self(unsafe { _mm_set1_ps(a) })
  }

  /// Set the value into lane 0, other lanes `0.0`.
  #[inline(always)]
  #[must_use]
  pub fn set0(a: f32) -> Self {
    Self(unsafe { _mm_set_ss(a) })
  }

  /// Set four `f32` values into an `m128`, order reversed from normal
  /// [`set`](m128::set).
  #[inline(always)]
  #[must_use]
  pub fn set_reverse(a: f32, b: f32, c: f32, d: f32) -> Self {
    Self(unsafe { _mm_setr_ps(a, b, c, d) })
  }

  /// Lanewise square root.
  #[inline(always)]
  #[must_use]
  pub fn sqrt(self) -> Self {
    Self(unsafe { _mm_sqrt_ps(self.0) })
  }

  /// Lane 0 square root, other lanes are `self`.
  #[inline(always)]
  #[must_use]
  pub fn sqrt0(self) -> Self {
    Self(unsafe { _mm_sqrt_ss(self.0) })
  }

  /// Stores an `m128` into a 16-byte aligned `f32` array address.
  ///
  /// This uses the same lane order as [`load`](m128::load).
  #[inline(always)]
  pub fn store(self, addr: &mut Align16<[f32; 4]>) {
    let ptr: *mut f32 = addr as *mut Align16<[f32; 4]> as *mut f32;
    unsafe { _mm_store_ps(ptr, self.0) }
  }

  /// Stores lane 0 to all indexes of the array.
  #[inline(always)]
  pub fn store0_all(self, addr: &mut Align16<[f32; 4]>) {
    let ptr: *mut f32 = addr as *mut Align16<[f32; 4]> as *mut f32;
    unsafe { _mm_store_ps1(ptr, self.0) }
  }

  /// Stores lane 0 to the address given.
  #[inline(always)]
  pub fn store0(self, addr: &mut f32) {
    unsafe { _mm_store_ss(addr, self.0) }
  }

  /// Stores an `m128` into a 16-byte aligned `f32` array address.
  ///
  /// This uses the same lane order as [`load_reverse`](m128::load_reverse).
  #[inline(always)]
  pub fn store_reverse(self, addr: &mut Align16<[f32; 4]>) {
    let ptr: *mut f32 = addr as *mut Align16<[f32; 4]> as *mut f32;
    unsafe { _mm_storer_ps(ptr, self.0) }
  }

  /// Stores an `m128` into a `f32` array address.
  ///
  /// This doesn't have the alignment requirements of [`store`](m128::store),
  /// but the lane ordering is the same.
  #[inline(always)]
  pub fn store_unaligned(self, addr: &mut [f32; 4]) {
    let ptr: *mut f32 = addr as *mut [f32; 4] as *mut f32;
    unsafe { _mm_storeu_ps(ptr, self.0) }
  }

  /// Subtracts the 0th lanes without affecting the other lanes of `self`.
  #[inline(always)]
  #[must_use]
  pub fn sub0(self, rhs: Self) -> Self {
    Self(unsafe { _mm_sub_ss(self.0, rhs.0) })
  }

  /// Unpack and interleave the high lanes of `self` and `rhs`.
  ///
  /// ```txt
  /// out[0] = self[2]
  /// out[1] = rhs[2]
  /// out[2] = self[3]
  /// out[3] = rhs[3]
  /// ```
  #[inline(always)]
  #[must_use]
  pub fn unpack_high(self, rhs: Self) -> Self {
    Self(unsafe { _mm_unpackhi_ps(self.0, rhs.0) })
  }

  /// Unpack and interleave the low lanes of `self` and `rhs`.
  ///
  /// ```txt
  /// out[0] = self[0]
  /// out[1] = rhs[0]
  /// out[2] = self[1]
  /// out[3] = rhs[1]
  /// ```
  #[inline(always)]
  #[must_use]
  pub fn unpack_low(self, rhs: Self) -> Self {
    Self(unsafe { _mm_unpacklo_ps(self.0, rhs.0) })
  }
}

/// Prefetch the cache line into all cache levels.
///
/// A prefetch is just a hint to the CPU and has no effect on the correctness
/// (or not) of a program. In other words, you can prefetch literally any
/// address and it's never UB. However, if you prefetch an invalid address the
/// CPU can actually slow down for a moment as it figures out that your address
/// isn't valid. So, don't go silly with this.
///
/// See Also: [`_mm_prefetch`](core::arch::x86_64::_mm_prefetch)
#[inline(always)]
pub fn prefetch0(ptr: *const impl Sized) {
  unsafe { _mm_prefetch(ptr as *const i8, _MM_HINT_T0) }
}

/// Prefetch the cache line into L2 and higher.
///
/// A prefetch is just a hint to the CPU and has no effect on the correctness
/// (or not) of a program. In other words, you can prefetch literally any
/// address and it's never UB. However, if you prefetch an invalid address the
/// CPU can actually slow down for a moment as it figures out that your address
/// isn't valid. So, don't go silly with this.
///
/// See Also: [`_mm_prefetch`](core::arch::x86_64::_mm_prefetch)
#[inline(always)]
pub fn prefetch1(ptr: *const impl Sized) {
  unsafe { _mm_prefetch(ptr as *const i8, _MM_HINT_T1) }
}

/// Prefetch the cache line into L3 and higher (or best effort).
///
/// A prefetch is just a hint to the CPU and has no effect on the correctness
/// (or not) of a program. In other words, you can prefetch literally any
/// address and it's never UB. However, if you prefetch an invalid address the
/// CPU can actually slow down for a moment as it figures out that your address
/// isn't valid. So, don't go silly with this.
///
/// See Also: [`_mm_prefetch`](core::arch::x86_64::_mm_prefetch)
#[inline(always)]
pub fn prefetch2(ptr: *const impl Sized) {
  unsafe { _mm_prefetch(ptr as *const i8, _MM_HINT_T2) }
}

/// Prefetch with non-temporal hint.
///
/// Non-temporal access is inherently spooky with respect to the rest of the
/// memory model. When I asked a member of the Rust Language Team how they felt
/// about non-temporal access, they simply replied with [the confounded
/// emoji](https://emojipedia.org/confounded-face/). I don't expose actual
/// non-temporal store/load methods as safe operations, but a non-temporal
/// _prefetch_ is still fine to do.
#[inline(always)]
pub fn prefetch_nta(ptr: *const impl Sized) {
  unsafe { _mm_prefetch(ptr as *const i8, _MM_HINT_NTA) }
}

/// Transposes, in place, the four `m128` values as if they formed a 4x4 Matrix.
///
/// The Intel guide lists the official implementation of this as being:
/// ```txt
/// __m128 tmp3, tmp2, tmp1, tmp0;
/// tmp0 := _mm_unpacklo_ps(row0, row1);
/// tmp2 := _mm_unpacklo_ps(row2, row3);
/// tmp1 := _mm_unpackhi_ps(row0, row1);
/// tmp3 := _mm_unpackhi_ps(row2, row3);
/// row0 := _mm_movelh_ps(tmp0, tmp2);
/// row1 := _mm_movehl_ps(tmp2, tmp0);
/// row2 := _mm_movelh_ps(tmp1, tmp3);
/// row3 := _mm_movehl_ps(tmp3, tmp1);
/// ```
#[inline(always)]
pub fn transpose4(r0: &mut m128, r1: &mut m128, r2: &mut m128, r3: &mut m128) {
  unsafe { _MM_TRANSPOSE4_PS(&mut r0.0, &mut r1.0, &mut r2.0, &mut r3.0) }
}

/// Shuffles around some `f32` lanes into a new `m128`.
///
/// This is a macro and not a function because the shuffle pattern must be a
/// compile time constant. The macro takes some requested indexes and then makes
/// the correct shuffle pattern constant for you.
///
/// * `$a` and `$b` are any `m128` expressions.
/// * `$i0a`, `$i1a`, `$i2b`, and `$i3b` must each be `0`, `1`, `2`, or `3`.
///   Technically any `u32` literal will work, but only the lowest two bits are
///   used so stick to `0`, `1`, `2`, or `3`.
/// * Each lane in the output uses one of the lanes from an input. Like the
///   names hint, indexes 0 and 1 will come from somewhere in `$a`, and indexes
///   2 and 3 will come from somewhere in `$b`.
///
/// ```txt
/// shuffle128!(a, b, [0, 2, 1, 3])
/// ```
///
/// Would give an output of: `a[0], a[2], b[1], b[3]`
#[macro_export]
macro_rules! shuffle128 {
  ($a:expr, $b:expr, [$i0a:literal,$i1a:literal,$i2b:literal,$i3b:literal]) => {{
    // Keep only 2 bits per index
    const I0A: u32 = $i0a & 0b11;
    const I1A: u32 = $i1a & 0b11;
    const I2B: u32 = $i2b & 0b11;
    const I3B: u32 = $i3b & 0b11;
    // pack it up little-endian
    const IMM: i32 = (I0A | I1A << 2 | I2B << 4 | I3B << 6) as i32;
    //
    #[cfg(target_arch = "x86")]
    use core::arch::x86::_mm_shuffle_ps;
    #[cfg(target_arch = "x86")]
    use $crate::arch::x86::m128;
    //
    #[cfg(target_arch = "x86_64")]
    use core::arch::x86_64::_mm_shuffle_ps;
    #[cfg(target_arch = "x86_64")]
    use $crate::arch::x86_64::m128;
    //
    m128(unsafe { _mm_shuffle_ps($a.0, $b.0, IMM) })
  }};
}

//
// EXTRA FUNCTIONS THAT COMBINE INTRINSICS TO MAKE A USEFUL OP
//

/// # SSE Extras
impl m128 {
  /// `[non-intrinsic]` Lanewise absolute value.
  ///
  /// This is not an official Intel intrinsic, instead it's a `bitand` operation
  /// with a mask so that the sign bit is cleared in all lanes.
  #[inline(always)]
  #[must_use]
  pub fn abs(self) -> Self {
    self & Self::splat(cast(i32::max_value()))
  }
}
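
// ---------------------------------------------------------------------------
// A small usage sketch (not part of the original file): it shows the lane
// ordering of `set`, the bool-ish comparison + `move_mask` pattern, the
// `shuffle128!` macro, and `abs`, using only the API defined above. It assumes
// this module lives at the crate path that `shuffle128!` expects
// (`$crate::arch::x86_64` / `$crate::arch::x86`) and that tests are compiled
// for a target with SSE enabled; adjust as needed for the real crate layout.
#[cfg(test)]
mod usage_sketch {
  use super::*;

  #[test]
  fn lanewise_math_masks_and_shuffles() {
    // `set` takes arguments in "register" order, so lane 0 holds the *last*
    // argument. Reading the lanes back out with `store_unaligned` gives
    // offset order.
    let a = m128::set(4.0, 3.0, 2.0, 1.0); // lanes: [1.0, 2.0, 3.0, 4.0]
    let b = m128::splat(2.0);

    let mut out = [0.0_f32; 4];
    (a + b).store_unaligned(&mut out);
    assert_eq!(out, [3.0, 4.0, 5.0, 6.0]);

    // Comparisons give bool-ish lanes; `move_mask` packs each lane's sign
    // bit into the matching bit of an `i32`.
    assert_eq!(a.cmp_gt(b).move_mask(), 0b1100); // only lanes 2 and 3 exceed 2.0

    // `shuffle128!` pulls output lanes 0/1 from the first argument and output
    // lanes 2/3 from the second argument.
    shuffle128!(a, b, [0, 2, 1, 3]).store_unaligned(&mut out);
    assert_eq!(out, [1.0, 3.0, 2.0, 2.0]);

    // `abs` clears the sign bit in every lane.
    m128::set(-4.0, -3.0, 2.0, -1.0).abs().store_unaligned(&mut out);
    assert_eq!(out, [1.0, 2.0, 3.0, 4.0]);
  }
}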