// SPDX-License-Identifier: Apache-2.0 OR MIT

/*!
<!-- Note: Document from sync-markdown-to-rustdoc:start through sync-markdown-to-rustdoc:end
     is synchronized from README.md. Any changes to that range are not preserved. -->
<!-- tidy:sync-markdown-to-rustdoc:start -->

Portable atomic types including support for 128-bit atomics, atomic float, etc.

- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
- Provide `AtomicI128` and `AtomicU128`.
- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide `AtomicF16` and `AtomicF128` for [unstable `f16` and `f128`](https://github.com/rust-lang/rust/issues/116909). ([optional, requires the `float` feature and unstable cfgs](#optional-features-float))
- Provide atomic load/store for targets where atomics are not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not), [`AtomicPtr::fetch_*`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicPtr.html#method.fetch_and), and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+. (See the example below.)
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 that cause an LLVM error, etc.
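
As a quick illustration of the APIs made available on older compilers, the following sketch uses `fetch_update` and `AtomicBool::fetch_not` (assuming the default features):

```rust
use portable_atomic::{AtomicBool, AtomicUsize, Ordering};

let n = AtomicUsize::new(10);
// CAS-loop-based read-modify-write; usable even on compilers where the
// standard library's `fetch_update` is not yet available.
n.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)).unwrap();
assert_eq!(n.load(Ordering::SeqCst), 11);

let b = AtomicBool::new(false);
// Logical negation, returning the previous value.
assert!(!b.fetch_not(Ordering::SeqCst));
assert!(b.load(Ordering::SeqCst));
```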

<!-- TODO:
- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
- mention optimizations not available in the standard library's equivalents
-->

A portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
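
For illustration, a minimal sketch of its use (assuming `portable-atomic-util = { version = "0.2", features = ["alloc"] }` has been added to dependencies):

```rust
use portable_atomic_util::Arc;

let a = Arc::new(42);
let b = Arc::clone(&a);
assert_eq!(*a, *b);
```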

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
portable-atomic = "1"
```

The default features are mainly for users who use atomics larger than the pointer width.
If you don't need them, disabling the default features may reduce code size and compile time slightly.

```toml
[dependencies]
portable-atomic = { version = "1", default-features = false }
```

If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow portable-atomic to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets that require additional action on the user's side to provide atomic CAS.

```toml
[dependencies]
portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
```

(Since 1.8, portable-atomic can display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/181) even without the `require-cas` feature when the rustc version is 1.78+. However, since the `require-cas` feature also allows rejecting builds at an earlier stage, we recommend enabling it unless doing so causes [problems](https://github.com/matklad/once_cell/pull/267).)

## 128-bit atomics support

Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (Rust 1.95+); otherwise, the fallback implementation is used.

On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple, Windows (except Windows 7), and Fuchsia targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is not available at either compile-time or run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.

They are usually implemented using inline assembly; when using Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead where possible.

See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
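
For example, 128-bit atomics can be used like the other atomic integer types (a minimal sketch assuming the default features, so that the fallback is available where native instructions are not):

```rust
use portable_atomic::{AtomicU128, Ordering};

let a = AtomicU128::new(0);
// A value that does not fit in 64 bits.
a.store(1u128 << 64, Ordering::Relaxed);
assert_eq!(a.fetch_add(1, Ordering::SeqCst), 1u128 << 64);
assert_eq!(a.load(Ordering::Relaxed), (1u128 << 64) + 1);
```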

## <a name="optional-features"></a><a name="optional-cfg"></a>Optional features/cfgs

portable-atomic provides features and cfgs to allow enabling specific APIs and customizing its behavior.

Some options have both a feature and a cfg. When both exist, it indicates that the feature does not follow Cargo's recommendation that [features should be additive](https://doc.rust-lang.org/nightly/cargo/reference/features.html#feature-unification). Therefore, the maintainer's recommendation is to use the cfg instead of the feature. However, in the embedded ecosystem, it is very common to use features in such places, so these options provide both so you can choose based on your preference.

<details>
<summary>How to enable cfg (click to show)</summary>

One of the ways to enable a cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):

```toml
# .cargo/config.toml
[target.<target>]
rustflags = ["--cfg", "portable_atomic_unsafe_assume_single_core"]
```

Or set the environment variable:

```sh
RUSTFLAGS="--cfg portable_atomic_unsafe_assume_single_core" cargo ...
```

</details>

- <a name="optional-features-fallback"></a>**`fallback` feature** *(enabled by default)*<br>
  Enable fallback implementations.

  This enables atomic types larger than the width supported by the atomic instructions available on the current target. If the current target [supports 128-bit atomics](#128-bit-atomics-support), this is a no-op.

  By default, this uses a fallback implementation based on global locks. The following features/cfgs change this behavior:
  - [`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg](#optional-features-unsafe-assume-single-core): Use fallback implementations that disable interrupts instead of using global locks.
    - If your target is single-core and calling interrupt disable instructions is safe, this is a safer and more efficient option.
  - [`unsafe-assume-privileged` feature / `portable_atomic_unsafe_assume_privileged` cfg](#optional-features-unsafe-assume-privileged): Use fallback implementations that use global locks with interrupts disabled.
    - If your target is multi-core and calling interrupt disable instructions is safe, this is a safer option.

- <a name="optional-features-float"></a>**`float` feature**<br>
  Provide `AtomicF{32,64}`. (See the example below.)

  If you want atomic types for the unstable float types ([`f16` and `f128`](https://github.com/rust-lang/rust/issues/116909)), enable the corresponding unstable cfg (`portable_atomic_unstable_f16` for `AtomicF16`, `portable_atomic_unstable_f128` for `AtomicF128`; [there is no possibility that both a feature and a cfg will be provided for unstable options](https://github.com/taiki-e/portable-atomic/pull/200#issuecomment-2682252991)).
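
  For example, with the `float` feature enabled (a minimal sketch):

  ```rust
  use portable_atomic::{AtomicF32, Ordering};

  let a = AtomicF32::new(1.5);
  // Usually implemented as a CAS loop (see the note below).
  a.fetch_add(0.5, Ordering::Relaxed);
  assert_eq!(a.load(Ordering::Relaxed), 2.0);
  ```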

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - Atomic float's `fetch_{add,sub,min,max}` are usually implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. As an exception, AArch64 with FEAT_LSFE and GPU targets have atomic float instructions, and we use them on AArch64 when the `lsfe` target feature is available at compile-time. We [plan to use atomic float instructions for GPU targets as well in the future.](https://github.com/taiki-e/portable-atomic/issues/34)
> - Unstable cfgs are outside of the normal semver guarantees and minor or patch versions of portable-atomic may make breaking changes to them at any time.

</div>

- <a name="optional-features-std"></a>**`std` feature**<br>
  Use `std`.

- <a name="optional-features-require-cas"></a>**`require-cas` feature**<br>
  Emit a compile error if atomic CAS is not available. See the [Usage](#usage) section for how to use this feature.

- <a name="optional-features-serde"></a>**`serde` feature**<br>
  Implement `serde::{Serialize,Deserialize}` for atomic types.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [serde].

- <a name="optional-features-critical-section"></a>**`critical-section` feature**<br>
  Use [critical-section] to provide atomic CAS for targets where atomic CAS is not available in the standard library.

  `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)](#optional-features-unsafe-assume-single-core) can't be used,
  such as on multi-core targets, in unprivileged code running under some RTOS, or in environments where disabling interrupts
  needs extra care due to e.g. real-time requirements.

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - When enabling this feature, you should provide a suitable critical section implementation for the current target; see the [critical-section] documentation for details on how to do so.
> - With this feature, critical sections are taken for all atomic operations, while with the `unsafe-assume-single-core` feature [some operations](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md#no-disable-interrupts) don't require disabling interrupts. Therefore, for better performance, if all the `critical-section` implementation for your target does is disable interrupts, prefer using the `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) instead.
> - It is usually **discouraged** to always enable this feature in libraries that depend on `portable-atomic`.
>
>   Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) efficient implementations (implementations provided by `unsafe-assume-single-core` feature mentioned above, implementation proposed in [#60], etc.). Also, targets that are currently unsupported may be supported in the future.
>
>   The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)
>
>   See also [this discussion in once_cell#264](https://github.com/matklad/once_cell/issues/264#issuecomment-2352654806).
>
>   As an example, the end-user's `Cargo.toml` that uses a crate that provides a critical-section implementation and a crate that depends on portable-atomic as an option would be expected to look like this:
>
>   ```toml
>   [dependencies]
>   portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
>   crate-provides-critical-section-impl = "..."
>   crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
>   ```
>
> - Enabling both this feature and `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) will result in a compile error.
> - Enabling both this feature and `unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg) will result in a compile error.
> - The MSRV when this feature is enabled depends on the MSRV of [critical-section].

</div>

- <a name="optional-features-unsafe-assume-single-core"></a><a name="optional-cfg-unsafe-assume-single-core"></a>**`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg**<br>
  Assume that the target is single-core and privileged instructions required to disable interrupts are available.

  - When this feature/cfg is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.
  - When both this feature/cfg and the enabled-by-default `fallback` feature are enabled, this crate provides atomic types larger than the width supported by native instructions by disabling interrupts.

<div class="rustdoc-alert rustdoc-alert-warning">

> **⚠ Warning**
>
> This feature/cfg is `unsafe`, and note the following safety requirements:
> - Enabling this feature/cfg for multi-core systems is always **unsound**.
>
> - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
>
>   Enabling this feature/cfg in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.
>
>   The following are known cases:
>   - On Arm (except for M-Profile architectures), this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you need to enable the `disable-fiq` feature (or `portable_atomic_disable_fiq` cfg) together.
>   - On RISC-V, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` feature (or `portable_atomic_s_mode` cfg) together, this generates code for supervisor-mode (S-mode) instead. In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware, which runs programs in S-mode.

</div>

Consider using the [`unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg)](#optional-features-unsafe-assume-privileged) for multi-core systems with atomic CAS.

Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature/cfg.

See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - It is **very strongly discouraged** to enable this feature/cfg in libraries that depend on `portable-atomic`.
>
>   The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature/cfg. (However, it may make sense to enable this feature/cfg by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)
> - Enabling this feature/cfg for unsupported architectures will result in a compile error.
>   - Arm, RISC-V, and Xtensa are currently supported. (Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature/cfg.)
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target is not supported yet.
> - Enabling this feature/cfg for targets where privileged instructions are obviously unavailable (e.g., Linux) will result in a compile error.
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target supports privileged instructions but the build was rejected.
> - Enabling both this feature/cfg and `critical-section` feature will result in a compile error.
> - When both this feature/cfg and `unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg) are enabled, this feature/cfg is preferred.

</div>

- <a name="optional-features-unsafe-assume-privileged"></a><a name="optional-cfg-unsafe-assume-privileged"></a>**`unsafe-assume-privileged` feature / `portable_atomic_unsafe_assume_privileged` cfg**<br>
  Similar to the `unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg, but assumes only the availability of the privileged instructions required to disable interrupts.

  - When both this feature/cfg and the enabled-by-default `fallback` feature are enabled, this crate provides atomic types larger than the width supported by native instructions by using global locks with interrupts disabled.

<div class="rustdoc-alert rustdoc-alert-warning">

> **⚠ Warning**
>
> This feature/cfg is `unsafe`; apart from also being sound on multi-core systems, it has the same safety requirements as the [`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg](#optional-features-unsafe-assume-single-core).

</div>

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - It is **very strongly discouraged** to enable this feature/cfg in libraries that depend on `portable-atomic`.
>
>   The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature/cfg. (However, it may make sense to enable this feature/cfg by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer.)
> - Enabling this feature/cfg for unsupported targets will result in a compile error.
>   - This requires atomic CAS (`cfg(target_has_atomic = "ptr")` or `cfg_no_atomic_cas!`).
>   - Arm, RISC-V, and Xtensa are currently supported.
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target is not supported yet.
> - Enabling this feature/cfg for targets where privileged instructions are obviously unavailable (e.g., Linux) will result in a compile error.
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target supports privileged instructions but the build was rejected.
> - Enabling both this feature/cfg and `critical-section` feature will result in a compile error.
> - When both this feature/cfg and `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) are enabled, `unsafe-assume-single-core` is preferred.

</div>

- <a name="optional-cfg-no-outline-atomics"></a>**`portable_atomic_no_outline_atomics` cfg**<br>
  Disable dynamic dispatching by run-time CPU feature detection.

  Dynamic dispatching by run-time CPU feature detection allows maintaining support for older CPUs while using features that are not supported on older CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).

  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - If the required target features are enabled at compile-time, dynamic dispatching is automatically disabled and the atomic operations are inlined.
> - This is compatible with no-std (as with all features except `std`).
> - On some targets, run-time detection is disabled by default, mainly for compatibility with incomplete build environments, or because support for it is experimental; it can be enabled with the `portable_atomic_outline_atomics` cfg. (When both cfgs are enabled, the `*_no_*` cfg is preferred.)
> - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (However, portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)
> - Dynamic detection is currently only supported on x86_64, AArch64, Arm, RISC-V, Arm64EC, and powerpc64. Enabling this cfg for unsupported architectures will result in a compile error.

</div>

## Related Projects

- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
- [atomic-memcpy]: Byte-wise atomic memcpy.

[#60]: https://github.com/taiki-e/portable-atomic/issues/60
[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
[critical-section]: https://github.com/rust-embedded/critical-section
[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
[serde]: https://github.com/serde-rs/serde

<!-- tidy:sync-markdown-to-rustdoc:end -->
*/

#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    // Code outside of cfg(feature = "float") shouldn't use float.
    clippy::float_arithmetic,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![cfg_attr(portable_atomic_no_strict_provenance, allow(unstable_name_collisions))]
#![allow(clippy::inline_always, clippy::used_underscore_items)]
// asm_experimental_arch
// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
// On tier 2 platforms (currently N/A), we use cfg set by build script to
// determine whether this feature is available or not.
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(
                target_arch = "xtensa",
                any(
                    portable_atomic_unsafe_assume_single_core,
                    portable_atomic_unsafe_assume_privileged,
                ),
            ),
        ),
    ),
    feature(asm_experimental_arch)
)]
// f16/f128
// cfg is unstable and explicitly enabled by the user
#![cfg_attr(portable_atomic_unstable_f16, feature(f16))]
#![cfg_attr(portable_atomic_unstable_f128, feature(f128))]
// Old nightly only
// These features are already stabilized or have already been removed from compilers,
// and can safely be enabled for old nightly as long as version detection works.
// - cfg(target_has_atomic)
// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x, PowerPC64
// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
// This also helps us test that our assembly code works with the minimum external
// LLVM version of the first rustc version that inline assembly stabilized.
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm_experimental_arch,
        any(target_arch = "arm64ec", target_arch = "s390x", target_arch = "powerpc64"),
    ),
    feature(asm_experimental_arch)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged),
        not(any(target_feature = "v7", portable_atomic_target_feature = "v7")),
        not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
    ),
    feature(isa_attribute)
)]
// Miri and/or ThreadSanitizer only
// They do not support inline assembly, so we need to use unstable features instead.
// Since they require nightly compilers anyway, we can use the unstable features.
// This is not an ideal situation, but it is still better than always using lock-based
// fallback and causing memory ordering problems to be missed by these checkers.
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    allow(internal_features)
)]
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
// docs.rs only (cfg is enabled by docs.rs, not build script)
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, doc(auto_cfg = false))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
            portable_atomic_unsafe_assume_single_core,
        )),
    ),
    allow(unused_imports, unused_macros, clippy::unused_trait_names)
)]

#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
#[cfg(target_pointer_width = "16")]
pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "128")]
pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
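// Illustrative note (a sketch, not exercised in this crate): downstream code can
// use these macros to compile items only when the pointer-width atomic exists, e.g.
//
//     portable_atomic::cfg_has_atomic_ptr! {
//         use portable_atomic::{AtomicUsize, Ordering};
//         pub static COUNTER: AtomicUsize = AtomicUsize::new(0);
//     }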

// There are currently no 128-bit or higher builtin targets.
// (Although some of our generic code is written with the future
// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

// Reject unsupported architectures.
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not supported yet on this architecture;\n\
     if you need unsafe-assume-{single-core,privileged} support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>"
);
// unsafe-assume-single-core is accepted on AVR/MSP430, but
// unsafe-assume-privileged is useless on them since they are
// always single-core, so it is rejected here.
#[cfg(portable_atomic_unsafe_assume_privileged)]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) \
     is not supported yet on this architecture;\n\
     if you need unsafe-assume-{single-core,privileged} support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>"
);
// unsafe-assume-privileged requires CAS.
#[cfg(portable_atomic_unsafe_assume_privileged)]
cfg_no_atomic_cas! {
    compile_error!(
        "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) \
        requires atomic CAS"
    );
}
// Reject targets where privileged instructions are obviously unavailable.
// TODO: Some embedded OSes should probably be accepted here.
#[cfg(any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged))]
#[cfg(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
))]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(all(not(portable_atomic_no_atomic_cas), not(target_os = "none")))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(all(target_has_atomic = "ptr", not(target_os = "none")))
)]
compile_error!(
    "`portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature) \
     is not compatible with targets where privileged instructions are obviously unavailable;\n\
     if you need unsafe-assume-{single-core,privileged} support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>\n\
     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
);

#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm",
    target_arch = "arm64ec",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "x86_64",
)))]
compile_error!("`portable_atomic_no_outline_atomics` cfg is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
)))]
compile_error!("`portable_atomic_outline_atomics` cfg is not compatible with this target");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on Arm (except for M-Profile architectures)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(any(
    portable_atomic_unsafe_assume_single_core,
    portable_atomic_unsafe_assume_privileged,
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(
    portable_atomic_unsafe_assume_single_core,
    portable_atomic_unsafe_assume_privileged,
)))]
compile_error!(
    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_unsafe_assume_privileged)]
#[cfg(not(feature = "fallback"))]
compile_error!(
    "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) may only be used together with `fallback` feature"
);

#[cfg(all(
    any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged),
    feature = "critical-section"
))]
compile_error!(
    "you may not enable `critical-section` feature and `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature) at the same time"
);

#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        target_arch = "avr",
        target_arch = "msp430",
        feature = "critical-section",
        portable_atomic_unsafe_assume_single_core,
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        target_arch = "avr",
        target_arch = "msp430",
        feature = "critical-section",
        portable_atomic_unsafe_assume_single_core,
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
    consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg).\n\
    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);

#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

cfg_sel!({
    // LLVM doesn't support fence/compiler_fence for MSP430.
    #[cfg(target_arch = "msp430")]
    {
        pub use self::imp::msp430::{compiler_fence, fence};
    }
    #[cfg(else)]
    {
        #[doc(no_inline)]
        pub use core::sync::atomic::{compiler_fence, fence};
    }
});
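// Note: downstream code can thus call e.g. `portable_atomic::fence(Ordering::SeqCst)`
// uniformly; on MSP430 it dispatches to this crate's own implementation, avoiding
// the LLVM error mentioned in the crate-level docs.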

mod imp;

pub mod hint {
    //! Re-export of the [`core::hint`] module.
    //!
    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available on all Rust versions that this crate supports.
    //!
    //! ```
    //! use portable_atomic::hint;
    //!
    //! hint::spin_loop();
    //! ```

    #[doc(no_inline)]
    pub use core::hint::*;

    /// Emits a machine instruction to signal the processor that it is running in
    /// a busy-wait spin-loop ("spin lock").
    ///
    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
    /// for example, saving power or switching hyper-threads.
    ///
    /// This function is different from [`thread::yield_now`] which directly
    /// yields to the system's scheduler, whereas `spin_loop` does not interact
    /// with the operating system.
    ///
    /// A common use case for `spin_loop` is implementing bounded optimistic
    /// spinning in a CAS loop in synchronization primitives. To avoid problems
    /// like priority inversion, it is strongly recommended that the spin loop is
    /// terminated after a finite amount of iterations and an appropriate blocking
    /// syscall is made.
    ///
    /// **Note:** On platforms that do not support receiving spin-loop hints this
    /// function does not do anything at all.
    ///
    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
    #[inline]
    pub fn spin_loop() {
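        // `core::hint::spin_loop` was stabilized in Rust 1.49, so fall back to the
        // older (now deprecated) `spin_loop_hint`, which exists on all supported versions.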
        #[allow(deprecated)]
        core::sync::atomic::spin_loop_hint();
    }
}

#[cfg(doc)]
use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use core::{fmt, ptr};

cfg_has_atomic_8! {
/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// If the compiler and the platform support atomic loads and stores of `u8`,
/// this type is a wrapper for the standard library's
/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
/// but the compiler does not, atomic operations are implemented using inline
/// assembly.
#[repr(C, align(1))]
pub struct AtomicBool {
    v: core::cell::UnsafeCell<u8>,
}

impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

impl From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

// Send is implicitly implemented.
// SAFETY: any data races are prevented by disabling interrupts or
// atomic intrinsics (see module-level comments).
unsafe impl Sync for AtomicBool {}

// UnwindSafe is implicitly implemented.
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl core::panic::RefUnwindSafe for AtomicBool {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
impl std::panic::RefUnwindSafe for AtomicBool {}

impl_debug_and_serde!(AtomicBool);

impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[must_use]
    pub const fn new(v: bool) -> Self {
        static_assert_layout!(AtomicBool, bool);
        Self { v: core::cell::UnsafeCell::new(v as u8) }
    }

    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Creates a new `AtomicBool` from a pointer.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Safety
        ///
        /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
        ///   be bigger than `align_of::<bool>()`).
        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
        ///   value (or vice-versa).
        ///   * In other words, time periods where the value is accessed atomically may not overlap
        ///     with periods where the value is accessed non-atomically.
        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
        ///     from the same thread.
        /// * If this atomic type is *not* lock-free:
        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
        ///     with accesses via the returned value (or vice-versa).
        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
        ///     be compatible with operations performed by this atomic type.
        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
        ///   these are not supported by the memory model.
        ///
        /// [valid]: core::ptr#safety
        #[inline]
        #[must_use]
        pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
            #[allow(clippy::cast_ptr_alignment)]
            // SAFETY: guaranteed by the caller
            unsafe { &*(ptr as *mut Self) }
        }
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let is_lock_free = AtomicBool::is_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub fn is_lock_free() -> bool {
        imp::AtomicU8::is_lock_free()
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
    /// this type may be lock-free even if the function returns false.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub const fn is_always_lock_free() -> bool {
        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
    }
    #[cfg(test)]
    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Returns a mutable reference to the underlying [`bool`].
        ///
        /// This is safe because the mutable reference guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut some_bool = AtomicBool::new(true);
        /// assert_eq!(*some_bool.get_mut(), true);
        /// *some_bool.get_mut() = false;
        /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
        /// ```
        #[inline]
        pub const fn get_mut(&mut self) -> &mut bool {
            // SAFETY: the mutable reference guarantees unique ownership.
            unsafe { &mut *self.as_ptr() }
        }
    }

    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
    // https://github.com/rust-lang/rust/issues/76314

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
        /// Consumes the atomic and returns the contained value.
        ///
        /// This is safe because passing `self` by value guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.56+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::AtomicBool;
        ///
        /// let some_bool = AtomicBool::new(true);
        /// assert_eq!(some_bool.into_inner(), true);
        /// ```
        #[inline]
        pub const fn into_inner(self) -> bool {
            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
            // so they can be safely transmuted.
            // (const UnsafeCell::into_inner is unstable)
            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
        }
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn load(&self, order: Ordering) -> bool {
        self.as_atomic_u8().load(order) != 0
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn store(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().store(val as u8, order);
    }

    cfg_has_atomic_cas_or_amo32! {
    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            self.as_atomic_u8().swap(val as u8, order) != 0
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
    ///     Ok(true)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
    ///     Err(false)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            crate::utils::assert_compare_exchange_ordering(success, failure);
            let order = crate::utils::upgrade_success_ordering(success, failure);
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            self.compare_exchange(current, new, success, failure)
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            match self
                .as_atomic_u8()
                .compare_exchange_weak(current as u8, new as u8, success, failure)
            {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_and(val as u8, order) != 0
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_and`, this does not return the previous value.
    ///
    /// `and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_and` on some platforms.
    ///
    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
    /// - MSP430: `and` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn and(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().and(val as u8, order);
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
1241    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1242    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1243    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1244    ///
1245    /// let foo = AtomicBool::new(false);
1246    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1247    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1248    /// ```
1249    #[inline]
1250    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1251    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1252        // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L973-L985
1253        if val {
1254            // !(x & true) == !x
1255            // We must invert the bool.
1256            self.fetch_xor(true, order)
1257        } else {
1258            // !(x & false) == true
1259            // We must set the bool to true.
1260            self.swap(true, order)
1261        }
1262    }
1263
1264    /// Logical "or" with a boolean value.
1265    ///
1266    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1267    /// new value to the result.
1268    ///
1269    /// Returns the previous value.
1270    ///
1271    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1272    /// of this operation. All ordering modes are possible. Note that using
1273    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1274    /// using [`Release`] makes the load part [`Relaxed`].
1275    ///
1276    /// # Examples
1277    ///
1278    /// ```
1279    /// use portable_atomic::{AtomicBool, Ordering};
1280    ///
1281    /// let foo = AtomicBool::new(true);
1282    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1283    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1284    ///
1285    /// let foo = AtomicBool::new(true);
1286    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1287    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1288    ///
1289    /// let foo = AtomicBool::new(false);
1290    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1291    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1292    /// ```
1293    #[inline]
1294    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1295    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1296        self.as_atomic_u8().fetch_or(val as u8, order) != 0
1297    }
1298
1299    /// Logical "or" with a boolean value.
1300    ///
1301    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1302    /// new value to the result.
1303    ///
1304    /// Unlike `fetch_or`, this does not return the previous value.
1305    ///
1306    /// `or` takes an [`Ordering`] argument which describes the memory ordering
1307    /// of this operation. All ordering modes are possible. Note that using
1308    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1309    /// using [`Release`] makes the load part [`Relaxed`].
1310    ///
1311    /// This function may generate more efficient code than `fetch_or` on some platforms.
1312    ///
1313    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
1314    /// - MSP430: `bis` instead of disabling interrupts
1315    ///
1316    /// Note: On x86/x86_64, the use of either function should not usually
1317    /// affect the generated code, because LLVM can properly optimize the case
1318    /// where the result is unused.
1319    ///
1320    /// # Examples
1321    ///
1322    /// ```
1323    /// use portable_atomic::{AtomicBool, Ordering};
1324    ///
1325    /// let foo = AtomicBool::new(true);
1326    /// foo.or(false, Ordering::SeqCst);
1327    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1328    ///
1329    /// let foo = AtomicBool::new(true);
1330    /// foo.or(true, Ordering::SeqCst);
1331    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1332    ///
1333    /// let foo = AtomicBool::new(false);
1334    /// foo.or(false, Ordering::SeqCst);
1335    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1336    /// ```
1337    #[inline]
1338    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1339    pub fn or(&self, val: bool, order: Ordering) {
1340        self.as_atomic_u8().or(val as u8, order);
1341    }
1342
1343    /// Logical "xor" with a boolean value.
1344    ///
1345    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1346    /// the new value to the result.
1347    ///
1348    /// Returns the previous value.
1349    ///
1350    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1351    /// of this operation. All ordering modes are possible. Note that using
1352    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1353    /// using [`Release`] makes the load part [`Relaxed`].
1354    ///
1355    /// # Examples
1356    ///
1357    /// ```
1358    /// use portable_atomic::{AtomicBool, Ordering};
1359    ///
1360    /// let foo = AtomicBool::new(true);
1361    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1362    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1363    ///
1364    /// let foo = AtomicBool::new(true);
1365    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1366    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1367    ///
1368    /// let foo = AtomicBool::new(false);
1369    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1370    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1371    /// ```
1372    #[inline]
1373    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1374    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1375        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1376    }
1377
1378    /// Logical "xor" with a boolean value.
1379    ///
1380    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1381    /// the new value to the result.
1382    ///
1383    /// Unlike `fetch_xor`, this does not return the previous value.
1384    ///
1385    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1386    /// of this operation. All ordering modes are possible. Note that using
1387    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1388    /// using [`Release`] makes the load part [`Relaxed`].
1389    ///
1390    /// This function may generate more efficient code than `fetch_xor` on some platforms.
1391    ///
1392    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1393    /// - MSP430: `xor` instead of disabling interrupts
1394    ///
1395    /// Note: On x86/x86_64, the use of either function should not usually
1396    /// affect the generated code, because LLVM can properly optimize the case
1397    /// where the result is unused.
1398    ///
1399    /// # Examples
1400    ///
1401    /// ```
1402    /// use portable_atomic::{AtomicBool, Ordering};
1403    ///
1404    /// let foo = AtomicBool::new(true);
1405    /// foo.xor(false, Ordering::SeqCst);
1406    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1407    ///
1408    /// let foo = AtomicBool::new(true);
1409    /// foo.xor(true, Ordering::SeqCst);
1410    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1411    ///
1412    /// let foo = AtomicBool::new(false);
1413    /// foo.xor(false, Ordering::SeqCst);
1414    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1415    /// ```
1416    #[inline]
1417    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1418    pub fn xor(&self, val: bool, order: Ordering) {
1419        self.as_atomic_u8().xor(val as u8, order);
1420    }
1421
1422    /// Logical "not" of the current value.
1423    ///
1424    /// Performs a logical "not" operation on the current value, and sets
1425    /// the new value to the result.
1426    ///
1427    /// Returns the previous value.
1428    ///
1429    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1430    /// of this operation. All ordering modes are possible. Note that using
1431    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1432    /// using [`Release`] makes the load part [`Relaxed`].
1433    ///
1434    /// # Examples
1435    ///
1436    /// ```
1437    /// use portable_atomic::{AtomicBool, Ordering};
1438    ///
1439    /// let foo = AtomicBool::new(true);
1440    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1441    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1442    ///
1443    /// let foo = AtomicBool::new(false);
1444    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1445    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1446    /// ```
1447    #[inline]
1448    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1449    pub fn fetch_not(&self, order: Ordering) -> bool {
1450        self.fetch_xor(true, order)
1451    }
1452
1453    /// Logical "not" of the current value.
1454    ///
1455    /// Performs a logical "not" operation on the current value, and sets
1456    /// the new value to the result.
1457    ///
1458    /// Unlike `fetch_not`, this does not return the previous value.
1459    ///
1460    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1461    /// of this operation. All ordering modes are possible. Note that using
1462    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1463    /// using [`Release`] makes the load part [`Relaxed`].
1464    ///
1465    /// This function may generate more efficient code than `fetch_not` on some platforms.
1466    ///
1467    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1468    /// - MSP430: `xor` instead of disabling interrupts
1469    ///
1470    /// Note: On x86/x86_64, the use of either function should not usually
1471    /// affect the generated code, because LLVM can properly optimize the case
1472    /// where the result is unused.
1473    ///
1474    /// # Examples
1475    ///
1476    /// ```
1477    /// use portable_atomic::{AtomicBool, Ordering};
1478    ///
1479    /// let foo = AtomicBool::new(true);
1480    /// foo.not(Ordering::SeqCst);
1481    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1482    ///
1483    /// let foo = AtomicBool::new(false);
1484    /// foo.not(Ordering::SeqCst);
1485    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1486    /// ```
1487    #[inline]
1488    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1489    pub fn not(&self, order: Ordering) {
1490        self.xor(true, order);
1491    }
1492
1493    /// Fetches the value, and applies a function to it that returns an optional
1494    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1495    /// returned `Some(_)`, else `Err(previous_value)`.
1496    ///
1497    /// Note: This may call the function multiple times if the value has been
1498    /// changed from other threads in the meantime, as long as the function
1499    /// returns `Some(_)`, but the function will have been applied only once to
1500    /// the stored value.
1501    ///
1502    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1503    /// ordering of this operation. The first describes the required ordering for
1504    /// when the operation finally succeeds while the second describes the
1505    /// required ordering for loads. These correspond to the success and failure
1506    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1507    ///
1508    /// Using [`Acquire`] as success ordering makes the store part of this
1509    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1510    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1511    /// [`Acquire`] or [`Relaxed`].
1512    ///
1513    /// # Considerations
1514    ///
1515    /// This method is not magic; it is not provided by the hardware.
1516    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1517    /// and suffers from the same drawbacks.
1518    /// In particular, this method will not circumvent the [ABA Problem].
1519    ///
1520    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1521    ///
1522    /// # Panics
1523    ///
1524    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1525    ///
1526    /// # Examples
1527    ///
1528    /// ```
1529    /// use portable_atomic::{AtomicBool, Ordering};
1530    ///
1531    /// let x = AtomicBool::new(false);
1532    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1533    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1534    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1535    /// assert_eq!(x.load(Ordering::SeqCst), false);
1536    /// ```
1537    #[inline]
1538    #[cfg_attr(
1539        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1540        track_caller
1541    )]
1542    pub fn fetch_update<F>(
1543        &self,
1544        set_order: Ordering,
1545        fetch_order: Ordering,
1546        mut f: F,
1547    ) -> Result<bool, bool>
1548    where
1549        F: FnMut(bool) -> Option<bool>,
1550    {
1551        let mut prev = self.load(fetch_order);
1552        while let Some(next) = f(prev) {
1553            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1554                x @ Ok(_) => return x,
1555                Err(next_prev) => prev = next_prev,
1556            }
1557        }
1558        Err(prev)
1559    }
1560    } // cfg_has_atomic_cas_or_amo32!
1561
1562    const_fn! {
1563        // This function is actually `const fn`-compatible on Rust 1.32+,
1564        // but is made `const fn` only on Rust 1.58+ to match other atomic types.
1565        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1566        /// Returns a mutable pointer to the underlying [`bool`].
1567        ///
1568        /// Returning an `*mut` pointer from a shared reference to this atomic is
1569        /// safe because the atomic types work with interior mutability. Any use of
1570        /// the returned raw pointer requires an `unsafe` block and has to uphold
1571        /// the safety requirements. If there is concurrent access, note the following
1572        /// additional safety requirements:
1573        ///
1574        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1575        ///   operations on it must be atomic.
1576        /// - Otherwise, any concurrent operations on it must be compatible with
1577        ///   operations performed by this atomic type.
1578        ///
1579        /// This is `const fn` on Rust 1.58+.
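        ///
        /// # Examples
        ///
        /// A minimal sketch of writing through the returned pointer while no
        /// other thread is accessing the atomic (with concurrent access, the
        /// additional safety requirements above apply):
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let flag = AtomicBool::new(false);
        /// let p = flag.as_ptr();
        /// // SAFETY: no other thread accesses `flag` while we write through `p`.
        /// unsafe { p.write(true) };
        /// assert_eq!(flag.load(Ordering::Relaxed), true);
        /// ```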
1580        #[inline]
1581        pub const fn as_ptr(&self) -> *mut bool {
1582            self.v.get() as *mut bool
1583        }
1584    }
1585
1586    #[inline(always)]
1587    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1588        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1589        // and both access data in the same way.
1590        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1591    }
1592}
1593// See https://github.com/taiki-e/portable-atomic/issues/180
1594#[cfg(not(feature = "require-cas"))]
1595cfg_no_atomic_cas! {
1596#[doc(hidden)]
1597#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1598impl<'a> AtomicBool {
1599    cfg_no_atomic_cas_or_amo32! {
1600    #[inline]
1601    pub fn swap(&self, val: bool, order: Ordering) -> bool
1602    where
1603        &'a Self: HasSwap,
1604    {
1605        unimplemented!()
1606    }
1607    #[inline]
1608    pub fn compare_exchange(
1609        &self,
1610        current: bool,
1611        new: bool,
1612        success: Ordering,
1613        failure: Ordering,
1614    ) -> Result<bool, bool>
1615    where
1616        &'a Self: HasCompareExchange,
1617    {
1618        unimplemented!()
1619    }
1620    #[inline]
1621    pub fn compare_exchange_weak(
1622        &self,
1623        current: bool,
1624        new: bool,
1625        success: Ordering,
1626        failure: Ordering,
1627    ) -> Result<bool, bool>
1628    where
1629        &'a Self: HasCompareExchangeWeak,
1630    {
1631        unimplemented!()
1632    }
1633    #[inline]
1634    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1635    where
1636        &'a Self: HasFetchAnd,
1637    {
1638        unimplemented!()
1639    }
1640    #[inline]
1641    pub fn and(&self, val: bool, order: Ordering)
1642    where
1643        &'a Self: HasAnd,
1644    {
1645        unimplemented!()
1646    }
1647    #[inline]
1648    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1649    where
1650        &'a Self: HasFetchNand,
1651    {
1652        unimplemented!()
1653    }
1654    #[inline]
1655    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1656    where
1657        &'a Self: HasFetchOr,
1658    {
1659        unimplemented!()
1660    }
1661    #[inline]
1662    pub fn or(&self, val: bool, order: Ordering)
1663    where
1664        &'a Self: HasOr,
1665    {
1666        unimplemented!()
1667    }
1668    #[inline]
1669    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1670    where
1671        &'a Self: HasFetchXor,
1672    {
1673        unimplemented!()
1674    }
1675    #[inline]
1676    pub fn xor(&self, val: bool, order: Ordering)
1677    where
1678        &'a Self: HasXor,
1679    {
1680        unimplemented!()
1681    }
1682    #[inline]
1683    pub fn fetch_not(&self, order: Ordering) -> bool
1684    where
1685        &'a Self: HasFetchNot,
1686    {
1687        unimplemented!()
1688    }
1689    #[inline]
1690    pub fn not(&self, order: Ordering)
1691    where
1692        &'a Self: HasNot,
1693    {
1694        unimplemented!()
1695    }
1696    #[inline]
1697    pub fn fetch_update<F>(
1698        &self,
1699        set_order: Ordering,
1700        fetch_order: Ordering,
1701        f: F,
1702    ) -> Result<bool, bool>
1703    where
1704        F: FnMut(bool) -> Option<bool>,
1705        &'a Self: HasFetchUpdate,
1706    {
1707        unimplemented!()
1708    }
1709    } // cfg_no_atomic_cas_or_amo32!
1710}
1711} // cfg_no_atomic_cas!
1712} // cfg_has_atomic_8!
1713
1714cfg_has_atomic_ptr! {
1715/// A raw pointer type which can be safely shared between threads.
1716///
1717/// This type has the same in-memory representation as a `*mut T`.
1718///
1719/// If the compiler and the platform support atomic loads and stores of pointers,
1720/// this type is a wrapper for the standard library's
1721/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1722/// but the compiler does not, atomic operations are implemented using inline
1723/// assembly.
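///
/// # Examples
///
/// A minimal usage sketch (store a new pointer, then read it back):
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let atomic_ptr = AtomicPtr::new(ptr);
///
/// let other_ptr = &mut 10;
/// atomic_ptr.store(other_ptr, Ordering::Relaxed);
/// let value = atomic_ptr.load(Ordering::Relaxed);
/// assert_eq!(unsafe { *value }, 10);
/// ```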
1724// We could use #[repr(transparent)] here, but #[repr(C, align(N))]
1725// gives clearer docs.
1726#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1727#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1728#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1729#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1730pub struct AtomicPtr<T> {
1731    inner: imp::AtomicPtr<T>,
1732}
1733
1734impl<T> Default for AtomicPtr<T> {
1735    /// Creates a null `AtomicPtr<T>`.
1736    #[inline]
1737    fn default() -> Self {
1738        Self::new(ptr::null_mut())
1739    }
1740}
1741
1742impl<T> From<*mut T> for AtomicPtr<T> {
1743    #[inline]
1744    fn from(p: *mut T) -> Self {
1745        Self::new(p)
1746    }
1747}
1748
1749impl<T> fmt::Debug for AtomicPtr<T> {
1750    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1751    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1752        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1753        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1754    }
1755}
1756
1757impl<T> fmt::Pointer for AtomicPtr<T> {
1758    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1759    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1760        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1761        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1762    }
1763}
1764
1765// UnwindSafe is implicitly implemented.
1766#[cfg(not(portable_atomic_no_core_unwind_safe))]
1767impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1768#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1769impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1770
1771impl<T> AtomicPtr<T> {
1772    /// Creates a new `AtomicPtr`.
1773    ///
1774    /// # Examples
1775    ///
1776    /// ```
1777    /// use portable_atomic::AtomicPtr;
1778    ///
1779    /// let ptr = &mut 5;
1780    /// let atomic_ptr = AtomicPtr::new(ptr);
1781    /// ```
1782    #[inline]
1783    #[must_use]
1784    pub const fn new(p: *mut T) -> Self {
1785        static_assert_layout!(AtomicPtr<()>, *mut ());
1786        Self { inner: imp::AtomicPtr::new(p) }
1787    }
1788
1789    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1790    const_fn! {
1791        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1792        /// Creates a new `AtomicPtr` from a pointer.
1793        ///
1794        /// This is `const fn` on Rust 1.83+.
1795        ///
1796        /// # Safety
1797        ///
1798        /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1799        ///   can be bigger than `align_of::<*mut T>()`).
1800        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1801        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1802        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1803        ///   value (or vice-versa).
1804        ///   * In other words, time periods where the value is accessed atomically may not overlap
1805        ///     with periods where the value is accessed non-atomically.
1806        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1807        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1808        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1809        ///     from the same thread.
1810        /// * If this atomic type is *not* lock-free:
1811        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1812        ///     with accesses via the returned value (or vice-versa).
1813        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1814        ///     be compatible with operations performed by this atomic type.
1815        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1816        ///   these are not supported by the memory model.
1817        ///
1818        /// [valid]: core::ptr#safety
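        ///
        /// # Examples
        ///
        /// A minimal sketch, assuming the location is suitably aligned for
        /// `AtomicPtr<T>` and is not accessed non-atomically while the returned
        /// reference is in use:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut data = 10;
        /// let mut ptr: *mut i32 = &mut data;
        /// // SAFETY: `&mut ptr` is valid, aligned, and only accessed through
        /// // the returned `AtomicPtr` below.
        /// let atomic = unsafe { AtomicPtr::from_ptr(&mut ptr) };
        /// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 10);
        /// ```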
1819        #[inline]
1820        #[must_use]
1821        pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1822            #[allow(clippy::cast_ptr_alignment)]
1823            // SAFETY: guaranteed by the caller
1824            unsafe { &*(ptr as *mut Self) }
1825        }
1826    }
1827
1828    /// Returns `true` if operations on values of this type are lock-free.
1829    ///
1830    /// If the compiler or the platform doesn't support the necessary
1831    /// atomic instructions, global locks for every potentially
1832    /// concurrent atomic operation will be used.
1833    ///
1834    /// # Examples
1835    ///
1836    /// ```
1837    /// use portable_atomic::AtomicPtr;
1838    ///
1839    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1840    /// ```
1841    #[inline]
1842    #[must_use]
1843    pub fn is_lock_free() -> bool {
1844        <imp::AtomicPtr<T>>::is_lock_free()
1845    }
1846
1847    /// Returns `true` if operations on values of this type are lock-free.
1848    ///
1849    /// If the compiler or the platform doesn't support the necessary
1850    /// atomic instructions, global locks for every potentially
1851    /// concurrent atomic operation will be used.
1852    ///
1853    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1854    /// this type may be lock-free even if the function returns false.
1855    ///
1856    /// # Examples
1857    ///
1858    /// ```
1859    /// use portable_atomic::AtomicPtr;
1860    ///
1861    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1862    /// ```
1863    #[inline]
1864    #[must_use]
1865    pub const fn is_always_lock_free() -> bool {
1866        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1867    }
1868    #[cfg(test)]
1869    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1870
1871    const_fn! {
1872        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1873        /// Returns a mutable reference to the underlying pointer.
1874        ///
1875        /// This is safe because the mutable reference guarantees that no other threads are
1876        /// concurrently accessing the atomic data.
1877        ///
1878        /// This is `const fn` on Rust 1.83+.
1879        ///
1880        /// # Examples
1881        ///
1882        /// ```
1883        /// use portable_atomic::{AtomicPtr, Ordering};
1884        ///
1885        /// let mut data = 10;
1886        /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1887        /// let mut other_data = 5;
1888        /// *atomic_ptr.get_mut() = &mut other_data;
1889        /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1890        /// ```
1891        #[inline]
1892        pub const fn get_mut(&mut self) -> &mut *mut T {
1893            // SAFETY: the mutable reference guarantees unique ownership.
1894            // (core::sync::atomic::Atomic*::get_mut is not const yet)
1895            unsafe { &mut *self.as_ptr() }
1896        }
1897    }
1898
1899    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1900    // https://github.com/rust-lang/rust/issues/76314
1901
1902    const_fn! {
1903        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1904        /// Consumes the atomic and returns the contained value.
1905        ///
1906        /// This is safe because passing `self` by value guarantees that no other threads are
1907        /// concurrently accessing the atomic data.
1908        ///
1909        /// This is `const fn` on Rust 1.56+.
1910        ///
1911        /// # Examples
1912        ///
1913        /// ```
1914        /// use portable_atomic::AtomicPtr;
1915        ///
1916        /// let mut data = 5;
1917        /// let atomic_ptr = AtomicPtr::new(&mut data);
1918        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1919        /// ```
1920        #[inline]
1921        pub const fn into_inner(self) -> *mut T {
1922            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1923            // so they can be safely transmuted.
1924            // (const UnsafeCell::into_inner is unstable)
1925            unsafe { core::mem::transmute(self) }
1926        }
1927    }
1928
1929    /// Loads a value from the pointer.
1930    ///
1931    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1932    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1933    ///
1934    /// # Panics
1935    ///
1936    /// Panics if `order` is [`Release`] or [`AcqRel`].
1937    ///
1938    /// # Examples
1939    ///
1940    /// ```
1941    /// use portable_atomic::{AtomicPtr, Ordering};
1942    ///
1943    /// let ptr = &mut 5;
1944    /// let some_ptr = AtomicPtr::new(ptr);
1945    ///
1946    /// let value = some_ptr.load(Ordering::Relaxed);
1947    /// ```
1948    #[inline]
1949    #[cfg_attr(
1950        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1951        track_caller
1952    )]
1953    pub fn load(&self, order: Ordering) -> *mut T {
1954        self.inner.load(order)
1955    }
1956
1957    /// Stores a value into the pointer.
1958    ///
1959    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1960    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1961    ///
1962    /// # Panics
1963    ///
1964    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1965    ///
1966    /// # Examples
1967    ///
1968    /// ```
1969    /// use portable_atomic::{AtomicPtr, Ordering};
1970    ///
1971    /// let ptr = &mut 5;
1972    /// let some_ptr = AtomicPtr::new(ptr);
1973    ///
1974    /// let other_ptr = &mut 10;
1975    ///
1976    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1977    /// ```
1978    #[inline]
1979    #[cfg_attr(
1980        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1981        track_caller
1982    )]
1983    pub fn store(&self, ptr: *mut T, order: Ordering) {
1984        self.inner.store(ptr, order);
1985    }
1986
1987    cfg_has_atomic_cas_or_amo32! {
1988    /// Stores a value into the pointer, returning the previous value.
1989    ///
1990    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1991    /// of this operation. All ordering modes are possible. Note that using
1992    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1993    /// using [`Release`] makes the load part [`Relaxed`].
1994    ///
1995    /// # Examples
1996    ///
1997    /// ```
1998    /// use portable_atomic::{AtomicPtr, Ordering};
1999    ///
2000    /// let ptr = &mut 5;
2001    /// let some_ptr = AtomicPtr::new(ptr);
2002    ///
2003    /// let other_ptr = &mut 10;
2004    ///
2005    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
2006    /// ```
2007    #[inline]
2008    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2009    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
2010        self.inner.swap(ptr, order)
2011    }
2012
2013    cfg_has_atomic_cas! {
2014    /// Stores a value into the pointer if the current value is the same as the `current` value.
2015    ///
2016    /// The return value is a result indicating whether the new value was written and containing
2017    /// the previous value. On success this value is guaranteed to be equal to `current`.
2018    ///
2019    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2020    /// ordering of this operation. `success` describes the required ordering for the
2021    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2022    /// `failure` describes the required ordering for the load operation that takes place when
2023    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2024    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2025    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2026    ///
2027    /// # Panics
2028    ///
2029    /// Panics if `failure` is [`Release`] or [`AcqRel`].
2030    ///
2031    /// # Examples
2032    ///
2033    /// ```
2034    /// use portable_atomic::{AtomicPtr, Ordering};
2035    ///
2036    /// let ptr = &mut 5;
2037    /// let some_ptr = AtomicPtr::new(ptr);
2038    ///
2039    /// let other_ptr = &mut 10;
2040    ///
2041    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
2042    /// ```
2043    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2044    #[inline]
2045    #[cfg_attr(
2046        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2047        track_caller
2048    )]
2049    pub fn compare_exchange(
2050        &self,
2051        current: *mut T,
2052        new: *mut T,
2053        success: Ordering,
2054        failure: Ordering,
2055    ) -> Result<*mut T, *mut T> {
2056        self.inner.compare_exchange(current, new, success, failure)
2057    }
2058
2059    /// Stores a value into the pointer if the current value is the same as the `current` value.
2060    ///
2061    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
2062    /// comparison succeeds, which can result in more efficient code on some platforms. The
2063    /// return value is a result indicating whether the new value was written and containing the
2064    /// previous value.
2065    ///
2066    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2067    /// ordering of this operation. `success` describes the required ordering for the
2068    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2069    /// `failure` describes the required ordering for the load operation that takes place when
2070    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2071    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2072    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2073    ///
2074    /// # Panics
2075    ///
2076    /// Panics if `failure` is [`Release`] or [`AcqRel`].
2077    ///
2078    /// # Examples
2079    ///
2080    /// ```
2081    /// use portable_atomic::{AtomicPtr, Ordering};
2082    ///
2083    /// let some_ptr = AtomicPtr::new(&mut 5);
2084    ///
2085    /// let new = &mut 10;
2086    /// let mut old = some_ptr.load(Ordering::Relaxed);
2087    /// loop {
2088    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2089    ///         Ok(_) => break,
2090    ///         Err(x) => old = x,
2091    ///     }
2092    /// }
2093    /// ```
2094    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2095    #[inline]
2096    #[cfg_attr(
2097        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2098        track_caller
2099    )]
2100    pub fn compare_exchange_weak(
2101        &self,
2102        current: *mut T,
2103        new: *mut T,
2104        success: Ordering,
2105        failure: Ordering,
2106    ) -> Result<*mut T, *mut T> {
2107        self.inner.compare_exchange_weak(current, new, success, failure)
2108    }
2109
2110    /// Fetches the value, and applies a function to it that returns an optional
2111    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2112    /// returned `Some(_)`, else `Err(previous_value)`.
2113    ///
2114    /// Note: This may call the function multiple times if the value has been
2115    /// changed from other threads in the meantime, as long as the function
2116    /// returns `Some(_)`, but the function will have been applied only once to
2117    /// the stored value.
2118    ///
2119    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2120    /// ordering of this operation. The first describes the required ordering for
2121    /// when the operation finally succeeds while the second describes the
2122    /// required ordering for loads. These correspond to the success and failure
2123    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
2124    ///
2125    /// Using [`Acquire`] as success ordering makes the store part of this
2126    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2127    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2128    /// [`Acquire`] or [`Relaxed`].
2129    ///
2130    /// # Panics
2131    ///
2132    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
2133    ///
2134    /// # Considerations
2135    ///
2136    /// This method is not magic; it is not provided by the hardware.
2137    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
2138    /// and suffers from the same drawbacks.
2139    /// In particular, this method will not circumvent the [ABA Problem].
2140    ///
2141    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2142    ///
2143    /// # Examples
2144    ///
2145    /// ```
2146    /// use portable_atomic::{AtomicPtr, Ordering};
2147    ///
2148    /// let ptr: *mut _ = &mut 5;
2149    /// let some_ptr = AtomicPtr::new(ptr);
2150    ///
2151    /// let new: *mut _ = &mut 10;
2152    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2153    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2154    ///     if x == ptr {
2155    ///         Some(new)
2156    ///     } else {
2157    ///         None
2158    ///     }
2159    /// });
2160    /// assert_eq!(result, Ok(ptr));
2161    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2162    /// ```
2163    #[inline]
2164    #[cfg_attr(
2165        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2166        track_caller
2167    )]
2168    pub fn fetch_update<F>(
2169        &self,
2170        set_order: Ordering,
2171        fetch_order: Ordering,
2172        mut f: F,
2173    ) -> Result<*mut T, *mut T>
2174    where
2175        F: FnMut(*mut T) -> Option<*mut T>,
2176    {
2177        let mut prev = self.load(fetch_order);
2178        while let Some(next) = f(prev) {
2179            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2180                x @ Ok(_) => return x,
2181                Err(next_prev) => prev = next_prev,
2182            }
2183        }
2184        Err(prev)
2185    }
2186    } // cfg_has_atomic_cas!
2187
2188    /// Offsets the pointer's address by adding `val` (in units of `T`),
2189    /// returning the previous pointer.
2190    ///
2191    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2192    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2193    ///
2194    /// This method operates in units of `T`, which means that it cannot be used
2195    /// to offset the pointer by an amount which is not a multiple of
2196    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2197    /// work with a deliberately misaligned pointer. In such cases, you may use
2198    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2199    ///
2200    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2201    /// memory ordering of this operation. All ordering modes are possible. Note
2202    /// that using [`Acquire`] makes the store part of this operation
2203    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2204    ///
2205    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2206    ///
2207    /// # Examples
2208    ///
2209    /// ```
2210    /// # #![allow(unstable_name_collisions)]
2211    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2212    /// use portable_atomic::{AtomicPtr, Ordering};
2213    ///
2214    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2215    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2216    /// // Note: units of `size_of::<i64>()`.
2217    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2218    /// ```
2219    #[inline]
2220    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2221    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2222        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2223    }
2224
2225    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2226    /// returning the previous pointer.
2227    ///
2228    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2229    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2230    ///
2231    /// This method operates in units of `T`, which means that it cannot be used
2232    /// to offset the pointer by an amount which is not a multiple of
2233    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2234    /// work with a deliberately misaligned pointer. In such cases, you may use
2235    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2236    ///
2237    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2238    /// ordering of this operation. All ordering modes are possible. Note that
2239    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2240    /// and using [`Release`] makes the load part [`Relaxed`].
2241    ///
2242    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2243    ///
2244    /// # Examples
2245    ///
2246    /// ```
2247    /// use portable_atomic::{AtomicPtr, Ordering};
2248    ///
2249    /// let array = [1i32, 2i32];
2250    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2251    ///
2252    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2253    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2254    /// ```
2255    #[inline]
2256    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2257    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2258        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2259    }
2260
2261    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2262    /// previous pointer.
2263    ///
2264    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2265    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2266    ///
2267    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2268    /// memory ordering of this operation. All ordering modes are possible. Note
2269    /// that using [`Acquire`] makes the store part of this operation
2270    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2271    ///
2272    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2273    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2274    ///
2275    /// # Examples
2276    ///
2277    /// ```
2278    /// # #![allow(unstable_name_collisions)]
2279    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2280    /// use portable_atomic::{AtomicPtr, Ordering};
2281    ///
2282    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2283    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2284    /// // Note: in units of bytes, not `size_of::<i64>()`.
2285    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2286    /// ```
2287    #[inline]
2288    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2289    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2290        self.inner.fetch_byte_add(val, order)
2291    }
2292
2293    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2294    /// previous pointer.
2295    ///
2296    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2297    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2298    ///
2299    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2300    /// memory ordering of this operation. All ordering modes are possible. Note
2301    /// that using [`Acquire`] makes the store part of this operation
2302    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2303    ///
2304    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2305    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2306    ///
2307    /// # Examples
2308    ///
2309    /// ```
2310    /// # #![allow(unstable_name_collisions)]
2311    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2312    /// use portable_atomic::{AtomicPtr, Ordering};
2313    ///
2314    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2315    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2316    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2317    /// ```
2318    #[inline]
2319    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2320    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2321        self.inner.fetch_byte_sub(val, order)
2322    }
2323
2324    /// Performs a bitwise "or" operation on the address of the current pointer,
2325    /// and the argument `val`, and stores a pointer with provenance of the
2326    /// current pointer and the resulting address.
2327    ///
2328    /// This is equivalent to using [`map_addr`] to atomically perform
2329    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2330    /// pointer schemes to atomically set tag bits.
2331    ///
2332    /// **Caveat**: This operation returns the previous value. To compute the
2333    /// stored value without losing provenance, you may use [`map_addr`]. For
2334    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2335    ///
2336    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2337    /// ordering of this operation. All ordering modes are possible. Note that
2338    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2339    /// and using [`Release`] makes the load part [`Relaxed`].
2340    ///
2341    /// This API and its claimed semantics are part of the Strict Provenance
2342    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2343    /// details.
2344    ///
2345    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2346    ///
2347    /// # Examples
2348    ///
2349    /// ```
2350    /// # #![allow(unstable_name_collisions)]
2351    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2352    /// use portable_atomic::{AtomicPtr, Ordering};
2353    ///
2354    /// let pointer = &mut 3i64 as *mut i64;
2355    ///
2356    /// let atom = AtomicPtr::<i64>::new(pointer);
2357    /// // Tag the bottom bit of the pointer.
2358    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2359    /// // Extract and untag.
2360    /// let tagged = atom.load(Ordering::Relaxed);
2361    /// assert_eq!(tagged.addr() & 1, 1);
2362    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2363    /// ```
2364    #[inline]
2365    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2366    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2367        self.inner.fetch_or(val, order)
2368    }
2369
2370    /// Performs a bitwise "and" operation on the address of the current
2371    /// pointer, and the argument `val`, and stores a pointer with provenance of
2372    /// the current pointer and the resulting address.
2373    ///
2374    /// This is equivalent to using [`map_addr`] to atomically perform
2375    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2376    /// pointer schemes to atomically unset tag bits.
2377    ///
2378    /// **Caveat**: This operation returns the previous value. To compute the
2379    /// stored value without losing provenance, you may use [`map_addr`]. For
2380    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2381    ///
2382    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2383    /// ordering of this operation. All ordering modes are possible. Note that
2384    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2385    /// and using [`Release`] makes the load part [`Relaxed`].
2386    ///
2387    /// This API and its claimed semantics are part of the Strict Provenance
2388    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2389    /// details.
2390    ///
2391    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2392    ///
2393    /// # Examples
2394    ///
2395    /// ```
2396    /// # #![allow(unstable_name_collisions)]
2397    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2398    /// use portable_atomic::{AtomicPtr, Ordering};
2399    ///
2400    /// let pointer = &mut 3i64 as *mut i64;
2401    /// // A tagged pointer
2402    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2403    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2404    /// // Untag, and extract the previously tagged pointer.
2405    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2406    /// assert_eq!(untagged, pointer);
2407    /// ```
2408    #[inline]
2409    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2410    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2411        self.inner.fetch_and(val, order)
2412    }
2413
2414    /// Performs a bitwise "xor" operation on the address of the current
2415    /// pointer, and the argument `val`, and stores a pointer with provenance of
2416    /// the current pointer and the resulting address.
2417    ///
2418    /// This is equivalent to using [`map_addr`] to atomically perform
2419    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2420    /// pointer schemes to atomically toggle tag bits.
2421    ///
2422    /// **Caveat**: This operation returns the previous value. To compute the
2423    /// stored value without losing provenance, you may use [`map_addr`]. For
2424    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2425    ///
2426    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2427    /// ordering of this operation. All ordering modes are possible. Note that
2428    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2429    /// and using [`Release`] makes the load part [`Relaxed`].
2430    ///
2431    /// This API and its claimed semantics are part of the Strict Provenance
2432    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2433    /// details.
2434    ///
2435    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2436    ///
2437    /// # Examples
2438    ///
2439    /// ```
2440    /// # #![allow(unstable_name_collisions)]
2441    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2442    /// use portable_atomic::{AtomicPtr, Ordering};
2443    ///
2444    /// let pointer = &mut 3i64 as *mut i64;
2445    /// let atom = AtomicPtr::<i64>::new(pointer);
2446    ///
2447    /// // Toggle a tag bit on the pointer.
2448    /// atom.fetch_xor(1, Ordering::Relaxed);
2449    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2450    /// ```
2451    #[inline]
2452    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2453    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2454        self.inner.fetch_xor(val, order)
2455    }
2456
2457    /// Sets the bit at the specified bit-position to 1.
2458    ///
2459    /// Returns `true` if the specified bit was previously set to 1.
2460    ///
2461    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2462    /// of this operation. All ordering modes are possible. Note that using
2463    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2464    /// using [`Release`] makes the load part [`Relaxed`].
2465    ///
2466    /// This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
2467    ///
2468    /// # Examples
2469    ///
2470    /// ```
2471    /// # #![allow(unstable_name_collisions)]
2472    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2473    /// use portable_atomic::{AtomicPtr, Ordering};
2474    ///
2475    /// let pointer = &mut 3i64 as *mut i64;
2476    ///
2477    /// let atom = AtomicPtr::<i64>::new(pointer);
2478    /// // Tag the bottom bit of the pointer.
2479    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2480    /// // Extract and untag.
2481    /// let tagged = atom.load(Ordering::Relaxed);
2482    /// assert_eq!(tagged.addr() & 1, 1);
2483    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2484    /// ```
2485    #[inline]
2486    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2487    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2488        self.inner.bit_set(bit, order)
2489    }
2490
2491    /// Clears the bit at the specified bit-position to 0.
2492    ///
2493    /// Returns `true` if the specified bit was previously set to 1.
2494    ///
2495    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2496    /// of this operation. All ordering modes are possible. Note that using
2497    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2498    /// using [`Release`] makes the load part [`Relaxed`].
2499    ///
2500    /// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2501    ///
2502    /// # Examples
2503    ///
2504    /// ```
2505    /// # #![allow(unstable_name_collisions)]
2506    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2507    /// use portable_atomic::{AtomicPtr, Ordering};
2508    ///
2509    /// let pointer = &mut 3i64 as *mut i64;
2510    /// // A tagged pointer
2511    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2512    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2513    /// // Untag
2514    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2515    /// ```
2516    #[inline]
2517    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2518    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2519        self.inner.bit_clear(bit, order)
2520    }
2521
2522    /// Toggles the bit at the specified bit-position.
2523    ///
2524    /// Returns `true` if the specified bit was previously set to 1.
2525    ///
2526    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2527    /// of this operation. All ordering modes are possible. Note that using
2528    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2529    /// using [`Release`] makes the load part [`Relaxed`].
2530    ///
2531    /// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2532    ///
2533    /// # Examples
2534    ///
2535    /// ```
2536    /// # #![allow(unstable_name_collisions)]
2537    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2538    /// use portable_atomic::{AtomicPtr, Ordering};
2539    ///
2540    /// let pointer = &mut 3i64 as *mut i64;
2541    /// let atom = AtomicPtr::<i64>::new(pointer);
2542    ///
2543    /// // Toggle a tag bit on the pointer.
2544    /// atom.bit_toggle(0, Ordering::Relaxed);
2545    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2546    /// ```
2547    #[inline]
2548    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2549    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2550        self.inner.bit_toggle(bit, order)
2551    }
2552    } // cfg_has_atomic_cas_or_amo32!
2553
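    // const_fn! emits the item as `const fn` when the cfg given in `const_if`
    // holds (i.e., on new enough rustc) and as a plain `fn` otherwise.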
2554    const_fn! {
2555        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2556        /// Returns a mutable pointer to the underlying pointer.
2557        ///
2558        /// Returning an `*mut` pointer from a shared reference to this atomic is
2559        /// safe because the atomic types work with interior mutability. Any use of
2560        /// the returned raw pointer requires an `unsafe` block and has to uphold
2561        /// the safety requirements. If there is concurrent access, note the following
2562        /// additional safety requirements:
2563        ///
2564        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2565        ///   operations on it must be atomic.
2566        /// - Otherwise, any concurrent operations on it must be compatible with
2567        ///   operations performed by this atomic type.
2568        ///
2569        /// This is `const fn` on Rust 1.58+.
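        ///
        /// # Examples
        ///
        /// A minimal sketch of inspecting the underlying slot directly while
        /// no other thread can access it:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut v = 5i32;
        /// let atom = AtomicPtr::new(&mut v as *mut i32);
        /// // SAFETY: nothing else is accessing `atom`, so a plain read of the
        /// // underlying slot is allowed.
        /// let p = unsafe { *atom.as_ptr() };
        /// assert_eq!(p, atom.load(Ordering::Relaxed));
        /// ```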
2570        #[inline]
2571        pub const fn as_ptr(&self) -> *mut *mut T {
2572            self.inner.as_ptr()
2573        }
2574    }
2575}
2576// See https://github.com/taiki-e/portable-atomic/issues/180
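// When the target lacks the needed atomics (and "require-cas" is not enabled),
// we still declare these methods, but each is bounded on a marker trait
// (`HasSwap`, `HasCompareExchange`, ...) that is never implemented, so a call
// fails with a tailored trait-bound diagnostic instead of a plain
// "method not found" error. (A sketch of the mechanism; see the issue above.)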
2577#[cfg(not(feature = "require-cas"))]
2578cfg_no_atomic_cas! {
2579#[doc(hidden)]
2580#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2581impl<'a, T: 'a> AtomicPtr<T> {
2582    cfg_no_atomic_cas_or_amo32! {
2583    #[inline]
2584    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2585    where
2586        &'a Self: HasSwap,
2587    {
2588        unimplemented!()
2589    }
2590    } // cfg_no_atomic_cas_or_amo32!
2591    #[inline]
2592    pub fn compare_exchange(
2593        &self,
2594        current: *mut T,
2595        new: *mut T,
2596        success: Ordering,
2597        failure: Ordering,
2598    ) -> Result<*mut T, *mut T>
2599    where
2600        &'a Self: HasCompareExchange,
2601    {
2602        unimplemented!()
2603    }
2604    #[inline]
2605    pub fn compare_exchange_weak(
2606        &self,
2607        current: *mut T,
2608        new: *mut T,
2609        success: Ordering,
2610        failure: Ordering,
2611    ) -> Result<*mut T, *mut T>
2612    where
2613        &'a Self: HasCompareExchangeWeak,
2614    {
2615        unimplemented!()
2616    }
2617    #[inline]
2618    pub fn fetch_update<F>(
2619        &self,
2620        set_order: Ordering,
2621        fetch_order: Ordering,
2622        f: F,
2623    ) -> Result<*mut T, *mut T>
2624    where
2625        F: FnMut(*mut T) -> Option<*mut T>,
2626        &'a Self: HasFetchUpdate,
2627    {
2628        unimplemented!()
2629    }
2630    cfg_no_atomic_cas_or_amo32! {
2631    #[inline]
2632    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2633    where
2634        &'a Self: HasFetchPtrAdd,
2635    {
2636        unimplemented!()
2637    }
2638    #[inline]
2639    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2640    where
2641        &'a Self: HasFetchPtrSub,
2642    {
2643        unimplemented!()
2644    }
2645    #[inline]
2646    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2647    where
2648        &'a Self: HasFetchByteAdd,
2649    {
2650        unimplemented!()
2651    }
2652    #[inline]
2653    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2654    where
2655        &'a Self: HasFetchByteSub,
2656    {
2657        unimplemented!()
2658    }
2659    #[inline]
2660    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2661    where
2662        &'a Self: HasFetchOr,
2663    {
2664        unimplemented!()
2665    }
2666    #[inline]
2667    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2668    where
2669        &'a Self: HasFetchAnd,
2670    {
2671        unimplemented!()
2672    }
2673    #[inline]
2674    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2675    where
2676        &'a Self: HasFetchXor,
2677    {
2678        unimplemented!()
2679    }
2680    #[inline]
2681    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2682    where
2683        &'a Self: HasBitSet,
2684    {
2685        unimplemented!()
2686    }
2687    #[inline]
2688    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2689    where
2690        &'a Self: HasBitClear,
2691    {
2692        unimplemented!()
2693    }
2694    #[inline]
2695    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2696    where
2697        &'a Self: HasBitToggle,
2698    {
2699        unimplemented!()
2700    }
2701    } // cfg_no_atomic_cas_or_amo32!
2702}
2703} // cfg_no_atomic_cas!
2704} // cfg_has_atomic_ptr!
2705
2706macro_rules! atomic_int {
2707    // Atomic{I,U}* impls
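    // Arguments:
    // - $atomic_type / $int_type: the generated atomic type and its underlying integer
    // - $align: the alignment passed to #[repr(C, align(N))] below
    // - $cfg_{has,no}_atomic_cas_or_amo32_or_8: cfg macros gating the RMW methods
    //   (and their no-CAS stubs) for this width
    // - optional trailing arguments: cfg plus the matching atomic float type, if any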
2708    ($atomic_type:ident, $int_type:ident, $align:literal,
2709        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2710        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2711    ) => {
2712        doc_comment! {
2713            concat!("An integer type which can be safely shared between threads.
2714
2715This type has the same in-memory representation as the underlying integer type,
2716[`", stringify!($int_type), "`].
2717
2718If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2719"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2720"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2721inline assembly. Otherwise, it synchronizes using global locks.
2722You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2723atomic instructions or locks will be used.
2724"
2725            ),
2726            // We could use #[repr(transparent)] here, but #[repr(C, align(N))]
2727            // shows clearer docs.
2728            #[repr(C, align($align))]
2729            pub struct $atomic_type {
2730                inner: imp::$atomic_type,
2731            }
2732        }
2733
2734        impl Default for $atomic_type {
2735            #[inline]
2736            fn default() -> Self {
2737                Self::new($int_type::default())
2738            }
2739        }
2740
2741        impl From<$int_type> for $atomic_type {
2742            #[inline]
2743            fn from(v: $int_type) -> Self {
2744                Self::new(v)
2745            }
2746        }
2747
2748        // UnwindSafe is implicitly implemented.
2749        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2750        impl core::panic::RefUnwindSafe for $atomic_type {}
2751        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2752        impl std::panic::RefUnwindSafe for $atomic_type {}
2753
2754        impl_debug_and_serde!($atomic_type);
2755
2756        impl $atomic_type {
2757            doc_comment! {
2758                concat!(
2759                    "Creates a new atomic integer.
2760
2761# Examples
2762
2763```
2764use portable_atomic::", stringify!($atomic_type), ";
2765
2766let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2767```"
2768                ),
2769                #[inline]
2770                #[must_use]
2771                pub const fn new(v: $int_type) -> Self {
2772                    static_assert_layout!($atomic_type, $int_type);
2773                    Self { inner: imp::$atomic_type::new(v) }
2774                }
2775            }
2776
2777            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2778            #[cfg(not(portable_atomic_no_const_mut_refs))]
2779            doc_comment! {
2780                concat!("Creates a new reference to an atomic integer from a pointer.
2781
2782This is `const fn` on Rust 1.83+.
2783
2784# Safety
2785
2786* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2787  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2788* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2789* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2790  behind `ptr` must have a happens-before relationship with atomic accesses via
2791  the returned value (or vice-versa).
2792  * In other words, time periods where the value is accessed atomically may not
2793    overlap with periods where the value is accessed non-atomically.
2794  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2795    for the duration of lifetime `'a`. Most use cases should be able to follow
2796    this guideline.
2797  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2798    done from the same thread.
2799* If this atomic type is *not* lock-free:
2800  * Any accesses to the value behind `ptr` must have a happens-before relationship
2801    with accesses via the returned value (or vice-versa).
2802  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2803    be compatible with operations performed by this atomic type.
2804* This method must not be used to create overlapping or mixed-size atomic
2805  accesses, as these are not supported by the memory model.
2806
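# Examples

A minimal sketch; round-tripping through [`as_ptr`](Self::as_ptr) guarantees
the (possibly bigger) alignment required above:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let storage = ", stringify!($atomic_type), "::new(1);
let ptr = storage.as_ptr();
// SAFETY: `ptr` is valid and properly aligned for as long as `storage` lives,
// and the location is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(ptr) };
a.store(2, Ordering::Relaxed);
assert_eq!(storage.load(Ordering::Relaxed), 2);
```
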
2807[valid]: core::ptr#safety"),
2808                #[inline]
2809                #[must_use]
2810                pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2811                    #[allow(clippy::cast_ptr_alignment)]
2812                    // SAFETY: guaranteed by the caller
2813                    unsafe { &*(ptr as *mut Self) }
2814                }
2815            }
2816            #[cfg(portable_atomic_no_const_mut_refs)]
2817            doc_comment! {
2818                concat!("Creates a new reference to an atomic integer from a pointer.
2819
2820This is `const fn` on Rust 1.83+.
2821
2822# Safety
2823
2824* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2825  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2826* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2827* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2828  behind `ptr` must have a happens-before relationship with atomic accesses via
2829  the returned value (or vice-versa).
2830  * In other words, time periods where the value is accessed atomically may not
2831    overlap with periods where the value is accessed non-atomically.
2832  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2833    for the duration of lifetime `'a`. Most use cases should be able to follow
2834    this guideline.
2835  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2836    done from the same thread.
2837* If this atomic type is *not* lock-free:
2838  * Any accesses to the value behind `ptr` must have a happens-before relationship
2839    with accesses via the returned value (or vice-versa).
2840  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2841    be compatible with operations performed by this atomic type.
2842* This method must not be used to create overlapping or mixed-size atomic
2843  accesses, as these are not supported by the memory model.
2844
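# Examples

A minimal sketch; round-tripping through [`as_ptr`](Self::as_ptr) guarantees
the (possibly bigger) alignment required above:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let storage = ", stringify!($atomic_type), "::new(1);
let ptr = storage.as_ptr();
// SAFETY: `ptr` is valid and properly aligned for as long as `storage` lives,
// and the location is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(ptr) };
a.store(2, Ordering::Relaxed);
assert_eq!(storage.load(Ordering::Relaxed), 2);
```
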
2845[valid]: core::ptr#safety"),
2846                #[inline]
2847                #[must_use]
2848                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2849                    #[allow(clippy::cast_ptr_alignment)]
2850                    // SAFETY: guaranteed by the caller
2851                    unsafe { &*(ptr as *mut Self) }
2852                }
2853            }
2854
2855            doc_comment! {
2856                concat!("Returns `true` if operations on values of this type are lock-free.
2857
2858If the compiler or the platform doesn't support the necessary
2859atomic instructions, global locks for every potentially
2860concurrent atomic operation will be used.
2861
2862# Examples
2863
2864```
2865use portable_atomic::", stringify!($atomic_type), ";
2866
2867let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2868```"),
2869                #[inline]
2870                #[must_use]
2871                pub fn is_lock_free() -> bool {
2872                    <imp::$atomic_type>::is_lock_free()
2873                }
2874            }
2875
2876            doc_comment! {
2877                concat!("Returns `true` if operations on values of this type are always lock-free.
2878
2879If the compiler or the platform doesn't support the necessary
2880atomic instructions, global locks for every potentially
2881concurrent atomic operation will be used.
2882
2883**Note:** If the atomic operation relies on dynamic CPU feature detection,
2884this type may be lock-free even if the function returns false.
2885
2886# Examples
2887
2888```
2889use portable_atomic::", stringify!($atomic_type), ";
2890
2891const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2892```"),
2893                #[inline]
2894                #[must_use]
2895                pub const fn is_always_lock_free() -> bool {
2896                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2897                }
2898            }
2899            #[cfg(test)]
2900            #[cfg_attr(all(valgrind, target_arch = "powerpc64"), allow(dead_code))] // TODO(powerpc64): Hang (as of Valgrind 3.26)
2901            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2902
2903            #[cfg(not(portable_atomic_no_const_mut_refs))]
2904            doc_comment! {
2905                concat!("Returns a mutable reference to the underlying integer.\n
2906This is safe because the mutable reference guarantees that no other threads are
2907concurrently accessing the atomic data.
2908
2909This is `const fn` on Rust 1.83+.
2910
2911# Examples
2912
2913```
2914use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2915
2916let mut some_var = ", stringify!($atomic_type), "::new(10);
2917assert_eq!(*some_var.get_mut(), 10);
2918*some_var.get_mut() = 5;
2919assert_eq!(some_var.load(Ordering::SeqCst), 5);
2920```"),
2921                #[inline]
2922                pub const fn get_mut(&mut self) -> &mut $int_type {
2923                    // SAFETY: the mutable reference guarantees unique ownership.
2924                    // (core::sync::atomic::Atomic*::get_mut is not const yet)
2925                    unsafe { &mut *self.as_ptr() }
2926                }
2927            }
2928            #[cfg(portable_atomic_no_const_mut_refs)]
2929            doc_comment! {
2930                concat!("Returns a mutable reference to the underlying integer.\n
2931This is safe because the mutable reference guarantees that no other threads are
2932concurrently accessing the atomic data.
2933
2934This is `const fn` on Rust 1.83+.
2935
2936# Examples
2937
2938```
2939use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2940
2941let mut some_var = ", stringify!($atomic_type), "::new(10);
2942assert_eq!(*some_var.get_mut(), 10);
2943*some_var.get_mut() = 5;
2944assert_eq!(some_var.load(Ordering::SeqCst), 5);
2945```"),
2946                #[inline]
2947                pub fn get_mut(&mut self) -> &mut $int_type {
2948                    // SAFETY: the mutable reference guarantees unique ownership.
2949                    unsafe { &mut *self.as_ptr() }
2950                }
2951            }
2952
2953            // TODO: Add from_mut/get_mut_slice/from_mut_slice once they are stable on std atomic types.
2954            // https://github.com/rust-lang/rust/issues/76314
2955
2956            #[cfg(not(portable_atomic_no_const_transmute))]
2957            doc_comment! {
2958                concat!("Consumes the atomic and returns the contained value.
2959
2960This is safe because passing `self` by value guarantees that no other threads are
2961concurrently accessing the atomic data.
2962
2963This is `const fn` on Rust 1.56+.
2964
2965# Examples
2966
2967```
2968use portable_atomic::", stringify!($atomic_type), ";
2969
2970let some_var = ", stringify!($atomic_type), "::new(5);
2971assert_eq!(some_var.into_inner(), 5);
2972```"),
2973                #[inline]
2974                pub const fn into_inner(self) -> $int_type {
2975                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2976                    // so they can be safely transmuted.
2977                    // (const UnsafeCell::into_inner is unstable)
2978                    unsafe { core::mem::transmute(self) }
2979                }
2980            }
2981            #[cfg(portable_atomic_no_const_transmute)]
2982            doc_comment! {
2983                concat!("Consumes the atomic and returns the contained value.
2984
2985This is safe because passing `self` by value guarantees that no other threads are
2986concurrently accessing the atomic data.
2987
2988This is `const fn` on Rust 1.56+.
2989
2990# Examples
2991
2992```
2993use portable_atomic::", stringify!($atomic_type), ";
2994
2995let some_var = ", stringify!($atomic_type), "::new(5);
2996assert_eq!(some_var.into_inner(), 5);
2997```"),
2998                #[inline]
2999                pub fn into_inner(self) -> $int_type {
3000                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
3001                    // so they can be safely transmuted.
3002                    // (const UnsafeCell::into_inner is unstable)
3003                    unsafe { core::mem::transmute(self) }
3004                }
3005            }
3006
3007            doc_comment! {
3008                concat!("Loads a value from the atomic integer.
3009
3010`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3011Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
3012
3013# Panics
3014
3015Panics if `order` is [`Release`] or [`AcqRel`].
3016
3017# Examples
3018
3019```
3020use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3021
3022let some_var = ", stringify!($atomic_type), "::new(5);
3023
3024assert_eq!(some_var.load(Ordering::Relaxed), 5);
3025```"),
3026                #[inline]
3027                #[cfg_attr(
3028                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3029                    track_caller
3030                )]
3031                pub fn load(&self, order: Ordering) -> $int_type {
3032                    self.inner.load(order)
3033                }
3034            }
3035
3036            doc_comment! {
3037                concat!("Stores a value into the atomic integer.
3038
3039`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3040Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3041
3042# Panics
3043
3044Panics if `order` is [`Acquire`] or [`AcqRel`].
3045
3046# Examples
3047
3048```
3049use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3050
3051let some_var = ", stringify!($atomic_type), "::new(5);
3052
3053some_var.store(10, Ordering::Relaxed);
3054assert_eq!(some_var.load(Ordering::Relaxed), 10);
3055```"),
3056                #[inline]
3057                #[cfg_attr(
3058                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3059                    track_caller
3060                )]
3061                pub fn store(&self, val: $int_type, order: Ordering) {
3062                    self.inner.store(val, order)
3063                }
3064            }
3065
3066            cfg_has_atomic_cas_or_amo32! {
3067            $cfg_has_atomic_cas_or_amo32_or_8! {
3068            doc_comment! {
3069                concat!("Stores a value into the atomic integer, returning the previous value.
3070
3071`swap` takes an [`Ordering`] argument which describes the memory ordering
3072of this operation. All ordering modes are possible. Note that using
3073[`Acquire`] makes the store part of this operation [`Relaxed`], and
3074using [`Release`] makes the load part [`Relaxed`].
3075
3076# Examples
3077
3078```
3079use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3080
3081let some_var = ", stringify!($atomic_type), "::new(5);
3082
3083assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3084```"),
3085                #[inline]
3086                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3087                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3088                    self.inner.swap(val, order)
3089                }
3090            }
3091            } // $cfg_has_atomic_cas_or_amo32_or_8!
3092
3093            cfg_has_atomic_cas! {
3094            doc_comment! {
3095                concat!("Stores a value into the atomic integer if the current value is the same as
3096the `current` value.
3097
3098The return value is a result indicating whether the new value was written and
3099containing the previous value. On success this value is guaranteed to be equal to
3100`current`.
3101
3102`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3103ordering of this operation. `success` describes the required ordering for the
3104read-modify-write operation that takes place if the comparison with `current` succeeds.
3105`failure` describes the required ordering for the load operation that takes place when
3106the comparison fails. Using [`Acquire`] as success ordering makes the store part
3107of this operation [`Relaxed`], and using [`Release`] makes the successful load
3108[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3109
3110# Panics
3111
3112Panics if `failure` is [`Release`] or [`AcqRel`].
3113
3114# Examples
3115
3116```
3117use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3118
3119let some_var = ", stringify!($atomic_type), "::new(5);
3120
3121assert_eq!(
3122    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3123    Ok(5),
3124);
3125assert_eq!(some_var.load(Ordering::Relaxed), 10);
3126
3127assert_eq!(
3128    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3129    Err(10),
3130);
3131assert_eq!(some_var.load(Ordering::Relaxed), 10);
3132```"),
3133                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3134                #[inline]
3135                #[cfg_attr(
3136                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3137                    track_caller
3138                )]
3139                pub fn compare_exchange(
3140                    &self,
3141                    current: $int_type,
3142                    new: $int_type,
3143                    success: Ordering,
3144                    failure: Ordering,
3145                ) -> Result<$int_type, $int_type> {
3146                    self.inner.compare_exchange(current, new, success, failure)
3147                }
3148            }
3149
3150            doc_comment! {
3151                concat!("Stores a value into the atomic integer if the current value is the same as
3152the `current` value.
3153Unlike [`compare_exchange`](Self::compare_exchange),
3154this function is allowed to spuriously fail even
3155when the comparison succeeds, which can result in more efficient code on some
3156platforms. The return value is a result indicating whether the new value was
3157written and containing the previous value.
3158
3159`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3160ordering of this operation. `success` describes the required ordering for the
3161read-modify-write operation that takes place if the comparison with `current` succeeds.
3162`failure` describes the required ordering for the load operation that takes place when
3163the comparison fails. Using [`Acquire`] as success ordering makes the store part
3164of this operation [`Relaxed`], and using [`Release`] makes the successful load
3165[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3166
3167# Panics
3168
3169Panics if `failure` is [`Release`] or [`AcqRel`].
3170
3171# Examples
3172
3173```
3174use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3175
3176let val = ", stringify!($atomic_type), "::new(4);
3177
3178let mut old = val.load(Ordering::Relaxed);
3179loop {
3180    let new = old * 2;
3181    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3182        Ok(_) => break,
3183        Err(x) => old = x,
3184    }
3185}
3186```"),
3187                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3188                #[inline]
3189                #[cfg_attr(
3190                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3191                    track_caller
3192                )]
3193                pub fn compare_exchange_weak(
3194                    &self,
3195                    current: $int_type,
3196                    new: $int_type,
3197                    success: Ordering,
3198                    failure: Ordering,
3199                ) -> Result<$int_type, $int_type> {
3200                    self.inner.compare_exchange_weak(current, new, success, failure)
3201                }
3202            }
3203            } // cfg_has_atomic_cas!
3204
3205            $cfg_has_atomic_cas_or_amo32_or_8! {
3206            doc_comment! {
3207                concat!("Adds to the current value, returning the previous value.
3208
3209This operation wraps around on overflow.
3210
3211`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3212of this operation. All ordering modes are possible. Note that using
3213[`Acquire`] makes the store part of this operation [`Relaxed`], and
3214using [`Release`] makes the load part [`Relaxed`].
3215
3216# Examples
3217
3218```
3219use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3220
3221let foo = ", stringify!($atomic_type), "::new(0);
3222assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3223assert_eq!(foo.load(Ordering::SeqCst), 10);
3224```"),
3225                #[inline]
3226                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3227                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3228                    self.inner.fetch_add(val, order)
3229                }
3230            }
3231
3232            doc_comment! {
3233                concat!("Adds to the current value.
3234
3235This operation wraps around on overflow.
3236
3237Unlike `fetch_add`, this does not return the previous value.
3238
3239`add` takes an [`Ordering`] argument which describes the memory ordering
3240of this operation. All ordering modes are possible. Note that using
3241[`Acquire`] makes the store part of this operation [`Relaxed`], and
3242using [`Release`] makes the load part [`Relaxed`].
3243
3244This function may generate more efficient code than `fetch_add` on some platforms.
3245
3246- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3247
3248# Examples
3249
3250```
3251use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3252
3253let foo = ", stringify!($atomic_type), "::new(0);
3254foo.add(10, Ordering::SeqCst);
3255assert_eq!(foo.load(Ordering::SeqCst), 10);
3256```"),
3257                #[inline]
3258                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3259                pub fn add(&self, val: $int_type, order: Ordering) {
3260                    self.inner.add(val, order);
3261                }
3262            }
3263
3264            doc_comment! {
3265                concat!("Subtracts from the current value, returning the previous value.
3266
3267This operation wraps around on overflow.
3268
3269`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3270of this operation. All ordering modes are possible. Note that using
3271[`Acquire`] makes the store part of this operation [`Relaxed`], and
3272using [`Release`] makes the load part [`Relaxed`].
3273
3274# Examples
3275
3276```
3277use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3278
3279let foo = ", stringify!($atomic_type), "::new(20);
3280assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3281assert_eq!(foo.load(Ordering::SeqCst), 10);
3282```"),
3283                #[inline]
3284                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3285                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3286                    self.inner.fetch_sub(val, order)
3287                }
3288            }
3289
3290            doc_comment! {
3291                concat!("Subtracts from the current value.
3292
3293This operation wraps around on overflow.
3294
3295Unlike `fetch_sub`, this does not return the previous value.
3296
3297`sub` takes an [`Ordering`] argument which describes the memory ordering
3298of this operation. All ordering modes are possible. Note that using
3299[`Acquire`] makes the store part of this operation [`Relaxed`], and
3300using [`Release`] makes the load part [`Relaxed`].
3301
3302This function may generate more efficient code than `fetch_sub` on some platforms.
3303
3304- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3305
3306# Examples
3307
3308```
3309use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3310
3311let foo = ", stringify!($atomic_type), "::new(20);
3312foo.sub(10, Ordering::SeqCst);
3313assert_eq!(foo.load(Ordering::SeqCst), 10);
3314```"),
3315                #[inline]
3316                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3317                pub fn sub(&self, val: $int_type, order: Ordering) {
3318                    self.inner.sub(val, order);
3319                }
3320            }
3321            } // $cfg_has_atomic_cas_or_amo32_or_8!
3322
3323            doc_comment! {
3324                concat!("Bitwise \"and\" with the current value.
3325
3326Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3327sets the new value to the result.
3328
3329Returns the previous value.
3330
3331`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3332of this operation. All ordering modes are possible. Note that using
3333[`Acquire`] makes the store part of this operation [`Relaxed`], and
3334using [`Release`] makes the load part [`Relaxed`].
3335
3336# Examples
3337
3338```
3339use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3340
3341let foo = ", stringify!($atomic_type), "::new(0b101101);
3342assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3343assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3344```"),
3345                #[inline]
3346                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3347                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3348                    self.inner.fetch_and(val, order)
3349                }
3350            }
3351
3352            doc_comment! {
3353                concat!("Bitwise \"and\" with the current value.
3354
3355Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3356sets the new value to the result.
3357
3358Unlike `fetch_and`, this does not return the previous value.
3359
3360`and` takes an [`Ordering`] argument which describes the memory ordering
3361of this operation. All ordering modes are possible. Note that using
3362[`Acquire`] makes the store part of this operation [`Relaxed`], and
3363using [`Release`] makes the load part [`Relaxed`].
3364
3365This function may generate more efficient code than `fetch_and` on some platforms.
3366
3367- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3368- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3369
3370Note: On x86/x86_64, the use of either function should not usually
3371affect the generated code, because LLVM can properly optimize the case
3372where the result is unused.
3373
3374# Examples
3375
3376```
3377use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3378
3379let foo = ", stringify!($atomic_type), "::new(0b101101);
3380foo.and(0b110011, Ordering::SeqCst);
3381assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3382```"),
3383                #[inline]
3384                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3385                pub fn and(&self, val: $int_type, order: Ordering) {
3386                    self.inner.and(val, order);
3387                }
3388            }
3389
3390            cfg_has_atomic_cas! {
3391            doc_comment! {
3392                concat!("Bitwise \"nand\" with the current value.
3393
3394Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3395sets the new value to the result.
3396
3397Returns the previous value.
3398
3399`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3400of this operation. All ordering modes are possible. Note that using
3401[`Acquire`] makes the store part of this operation [`Relaxed`], and
3402using [`Release`] makes the load part [`Relaxed`].
3403
3404# Examples
3405
3406```
3407use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3408
3409let foo = ", stringify!($atomic_type), "::new(0x13);
3410assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3411assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3412```"),
3413                #[inline]
3414                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3415                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3416                    self.inner.fetch_nand(val, order)
3417                }
3418            }
3419            } // cfg_has_atomic_cas!
3420
3421            doc_comment! {
3422                concat!("Bitwise \"or\" with the current value.
3423
3424Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3425sets the new value to the result.
3426
3427Returns the previous value.
3428
3429`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3430of this operation. All ordering modes are possible. Note that using
3431[`Acquire`] makes the store part of this operation [`Relaxed`], and
3432using [`Release`] makes the load part [`Relaxed`].
3433
3434# Examples
3435
3436```
3437use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3438
3439let foo = ", stringify!($atomic_type), "::new(0b101101);
3440assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3441assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3442```"),
3443                #[inline]
3444                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3445                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3446                    self.inner.fetch_or(val, order)
3447                }
3448            }
3449
3450            doc_comment! {
3451                concat!("Bitwise \"or\" with the current value.
3452
3453Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3454sets the new value to the result.
3455
3456Unlike `fetch_or`, this does not return the previous value.
3457
3458`or` takes an [`Ordering`] argument which describes the memory ordering
3459of this operation. All ordering modes are possible. Note that using
3460[`Acquire`] makes the store part of this operation [`Relaxed`], and
3461using [`Release`] makes the load part [`Relaxed`].
3462
3463This function may generate more efficient code than `fetch_or` on some platforms.
3464
3465- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3466- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3467
3468Note: On x86/x86_64, the use of either function should not usually
3469affect the generated code, because LLVM can properly optimize the case
3470where the result is unused.
3471
3472# Examples
3473
3474```
3475use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3476
3477let foo = ", stringify!($atomic_type), "::new(0b101101);
3478foo.or(0b110011, Ordering::SeqCst);
3479assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3480```"),
3481                #[inline]
3482                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3483                pub fn or(&self, val: $int_type, order: Ordering) {
3484                    self.inner.or(val, order);
3485                }
3486            }
3487
3488            doc_comment! {
3489                concat!("Bitwise \"xor\" with the current value.
3490
3491Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3492sets the new value to the result.
3493
3494Returns the previous value.
3495
3496`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3497of this operation. All ordering modes are possible. Note that using
3498[`Acquire`] makes the store part of this operation [`Relaxed`], and
3499using [`Release`] makes the load part [`Relaxed`].
3500
3501# Examples
3502
3503```
3504use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3505
3506let foo = ", stringify!($atomic_type), "::new(0b101101);
3507assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3508assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3509```"),
3510                #[inline]
3511                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3512                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3513                    self.inner.fetch_xor(val, order)
3514                }
3515            }
3516
3517            doc_comment! {
3518                concat!("Bitwise \"xor\" with the current value.
3519
3520Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3521sets the new value to the result.
3522
3523Unlike `fetch_xor`, this does not return the previous value.
3524
3525`xor` takes an [`Ordering`] argument which describes the memory ordering
3526of this operation. All ordering modes are possible. Note that using
3527[`Acquire`] makes the store part of this operation [`Relaxed`], and
3528using [`Release`] makes the load part [`Relaxed`].
3529
3530This function may generate more efficient code than `fetch_xor` on some platforms.
3531
3532- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3533- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3534
3535Note: On x86/x86_64, the use of either function should not usually
3536affect the generated code, because LLVM can properly optimize the case
3537where the result is unused.
3538
3539# Examples
3540
3541```
3542use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3543
3544let foo = ", stringify!($atomic_type), "::new(0b101101);
3545foo.xor(0b110011, Ordering::SeqCst);
3546assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3547```"),
3548                #[inline]
3549                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3550                pub fn xor(&self, val: $int_type, order: Ordering) {
3551                    self.inner.xor(val, order);
3552                }
3553            }
3554
3555            cfg_has_atomic_cas! {
3556            doc_comment! {
3557                concat!("Fetches the value, and applies a function to it that returns an optional
3558new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3559`Err(previous_value)`.
3560
3561Note: This may call the function multiple times if the value has been changed from other threads in
3562the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3563only once to the stored value.
3564
3565`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3566The first describes the required ordering for when the operation finally succeeds while the second
3567describes the required ordering for loads. These correspond to the success and failure orderings of
3568[`compare_exchange`](Self::compare_exchange) respectively.
3569
3570Using [`Acquire`] as success ordering makes the store part
3571of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3572[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3573
3574# Panics
3575
3576Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3577
3578# Considerations
3579
3580This method is not magic; it is not provided by the hardware.
3581It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3582and suffers from the same drawbacks.
3583In particular, this method will not circumvent the [ABA Problem].
3584
3585[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3586
3587# Examples
3588
3589```
3590use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3591
3592let x = ", stringify!($atomic_type), "::new(7);
3593assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3594assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3595assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3596assert_eq!(x.load(Ordering::SeqCst), 9);
3597```"),
3598                #[inline]
3599                #[cfg_attr(
3600                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3601                    track_caller
3602                )]
3603                pub fn fetch_update<F>(
3604                    &self,
3605                    set_order: Ordering,
3606                    fetch_order: Ordering,
3607                    mut f: F,
3608                ) -> Result<$int_type, $int_type>
3609                where
3610                    F: FnMut($int_type) -> Option<$int_type>,
3611                {
3612                    let mut prev = self.load(fetch_order);
3613                    while let Some(next) = f(prev) {
3614                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3615                            x @ Ok(_) => return x,
3616                            Err(next_prev) => prev = next_prev,
3617                        }
3618                    }
3619                    Err(prev)
3620                }
3621            }
3622            } // cfg_has_atomic_cas!
3623
3624            $cfg_has_atomic_cas_or_amo32_or_8! {
3625            doc_comment! {
3626                concat!("Maximum with the current value.
3627
3628Finds the maximum of the current value and the argument `val`, and
3629sets the new value to the result.
3630
3631Returns the previous value.
3632
3633`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3634of this operation. All ordering modes are possible. Note that using
3635[`Acquire`] makes the store part of this operation [`Relaxed`], and
3636using [`Release`] makes the load part [`Relaxed`].
3637
3638# Examples
3639
3640```
3641use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3642
3643let foo = ", stringify!($atomic_type), "::new(23);
3644assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3645assert_eq!(foo.load(Ordering::SeqCst), 42);
3646```
3647
3648If you want to obtain the maximum value in one step, you can use the following:
3649
3650```
3651use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3652
3653let foo = ", stringify!($atomic_type), "::new(23);
3654let bar = 42;
3655let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3656assert_eq!(max_foo, 42);
3657```"),
3658                #[inline]
3659                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3660                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3661                    self.inner.fetch_max(val, order)
3662                }
3663            }
3664
3665            doc_comment! {
3666                concat!("Minimum with the current value.
3667
3668Finds the minimum of the current value and the argument `val`, and
3669sets the new value to the result.
3670
3671Returns the previous value.
3672
3673`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3674of this operation. All ordering modes are possible. Note that using
3675[`Acquire`] makes the store part of this operation [`Relaxed`], and
3676using [`Release`] makes the load part [`Relaxed`].
3677
3678# Examples
3679
3680```
3681use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3682
3683let foo = ", stringify!($atomic_type), "::new(23);
3684assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3685assert_eq!(foo.load(Ordering::Relaxed), 23);
3686assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3687assert_eq!(foo.load(Ordering::Relaxed), 22);
3688```
3689
3690If you want to obtain the minimum value in one step, you can use the following:
3691
3692```
3693use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3694
3695let foo = ", stringify!($atomic_type), "::new(23);
3696let bar = 12;
3697let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3698assert_eq!(min_foo, 12);
3699```"),
3700                #[inline]
3701                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3702                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3703                    self.inner.fetch_min(val, order)
3704                }
3705            }
3706            } // $cfg_has_atomic_cas_or_amo32_or_8!
3707
3708            doc_comment! {
3709                concat!("Sets the bit at the specified bit-position to 1.
3710
3711Returns `true` if the specified bit was previously set to 1.
3712
3713`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3714of this operation. All ordering modes are possible. Note that using
3715[`Acquire`] makes the store part of this operation [`Relaxed`], and
3716using [`Release`] makes the load part [`Relaxed`].
3717
3718This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3719
3720# Examples
3721
3722```
3723use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3724
3725let foo = ", stringify!($atomic_type), "::new(0b0000);
3726assert!(!foo.bit_set(0, Ordering::Relaxed));
3727assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3728assert!(foo.bit_set(0, Ordering::Relaxed));
3729assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3730```"),
3731                #[inline]
3732                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3733                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3734                    self.inner.bit_set(bit, order)
3735                }
3736            }
3737
3738            doc_comment! {
3739                concat!("Clears the bit at the specified bit-position to 0.
3740
3741Returns `true` if the specified bit was previously set to 1.
3742
3743`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3744of this operation. All ordering modes are possible. Note that using
3745[`Acquire`] makes the store part of this operation [`Relaxed`], and
3746using [`Release`] makes the load part [`Relaxed`].
3747
3748This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3749
3750# Examples
3751
3752```
3753use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3754
3755let foo = ", stringify!($atomic_type), "::new(0b0001);
3756assert!(foo.bit_clear(0, Ordering::Relaxed));
3757assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3758```"),
3759                #[inline]
3760                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3761                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3762                    self.inner.bit_clear(bit, order)
3763                }
3764            }
3765
3766            doc_comment! {
3767                concat!("Toggles the bit at the specified bit-position.
3768
3769Returns `true` if the specified bit was previously set to 1.
3770
3771`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3772of this operation. All ordering modes are possible. Note that using
3773[`Acquire`] makes the store part of this operation [`Relaxed`], and
3774using [`Release`] makes the load part [`Relaxed`].
3775
3776This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3777
3778# Examples
3779
3780```
3781use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3782
3783let foo = ", stringify!($atomic_type), "::new(0b0000);
3784assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3785assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3786assert!(foo.bit_toggle(0, Ordering::Relaxed));
3787assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3788```"),
3789                #[inline]
3790                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3791                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3792                    self.inner.bit_toggle(bit, order)
3793                }
3794            }
3795
3796            doc_comment! {
3797                concat!("Logically negates the current value, and sets the new value to the result.
3798
3799Returns the previous value.
3800
3801`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3802of this operation. All ordering modes are possible. Note that using
3803[`Acquire`] makes the store part of this operation [`Relaxed`], and
3804using [`Release`] makes the load part [`Relaxed`].
3805
3806# Examples
3807
3808```
3809use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3810
3811let foo = ", stringify!($atomic_type), "::new(0);
3812assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3813assert_eq!(foo.load(Ordering::Relaxed), !0);
3814```"),
3815                #[inline]
3816                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3817                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3818                    self.inner.fetch_not(order)
3819                }
3820            }
3821
3822            doc_comment! {
3823                concat!("Logically negates the current value, and sets the new value to the result.
3824
3825Unlike `fetch_not`, this does not return the previous value.
3826
3827`not` takes an [`Ordering`] argument which describes the memory ordering
3828of this operation. All ordering modes are possible. Note that using
3829[`Acquire`] makes the store part of this operation [`Relaxed`], and
3830using [`Release`] makes the load part [`Relaxed`].
3831
3832This function may generate more efficient code than `fetch_not` on some platforms.
3833
3834- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3835- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3836
3837# Examples
3838
3839```
3840use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3841
3842let foo = ", stringify!($atomic_type), "::new(0);
3843foo.not(Ordering::Relaxed);
3844assert_eq!(foo.load(Ordering::Relaxed), !0);
3845```"),
3846                #[inline]
3847                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3848                pub fn not(&self, order: Ordering) {
3849                    self.inner.not(order);
3850                }
3851            }
3852
3853            cfg_has_atomic_cas! {
3854            doc_comment! {
3855                concat!("Negates the current value, and sets the new value to the result.
3856
3857Returns the previous value.
3858
3859`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3860of this operation. All ordering modes are possible. Note that using
3861[`Acquire`] makes the store part of this operation [`Relaxed`], and
3862using [`Release`] makes the load part [`Relaxed`].
3863
3864# Examples
3865
3866```
3867use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3868
3869let foo = ", stringify!($atomic_type), "::new(5);
3870assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3871assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3872assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3873assert_eq!(foo.load(Ordering::Relaxed), 5);
3874```"),
3875                #[inline]
3876                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3877                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3878                    self.inner.fetch_neg(order)
3879                }
3880            }
3881
3882            doc_comment! {
3883                concat!("Negates the current value, and sets the new value to the result.
3884
3885Unlike `fetch_neg`, this does not return the previous value.
3886
3887`neg` takes an [`Ordering`] argument which describes the memory ordering
3888of this operation. All ordering modes are possible. Note that using
3889[`Acquire`] makes the store part of this operation [`Relaxed`], and
3890using [`Release`] makes the load part [`Relaxed`].
3891
3892This function may generate more efficient code than `fetch_neg` on some platforms.
3893
3894- x86/x86_64: `lock neg` instead of a `cmpxchg` loop ({8,16,32}-bit atomics on x86; {8,16,32,64}-bit atomics on x86_64)
3895
3896# Examples
3897
3898```
3899use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3900
3901let foo = ", stringify!($atomic_type), "::new(5);
3902foo.neg(Ordering::Relaxed);
3903assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3904foo.neg(Ordering::Relaxed);
3905assert_eq!(foo.load(Ordering::Relaxed), 5);
3906```"),
3907                #[inline]
3908                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3909                pub fn neg(&self, order: Ordering) {
3910                    self.inner.neg(order);
3911                }
3912            }
3913            } // cfg_has_atomic_cas!
3914            } // cfg_has_atomic_cas_or_amo32!
3915
3916            const_fn! {
3917                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3918                /// Returns a mutable pointer to the underlying integer.
3919                ///
3920                /// Returning an `*mut` pointer from a shared reference to this atomic is
3921                /// safe because the atomic types work with interior mutability. Any use of
3922                /// the returned raw pointer requires an `unsafe` block and has to uphold
3923                /// the safety requirements. If there is concurrent access, note the following
3924                /// additional safety requirements:
3925                ///
3926                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3927                ///   operations on it must be atomic.
3928                /// - Otherwise, any concurrent operations on it must be compatible with
3929                ///   operations performed by this atomic type.
3930                ///
3931                /// This is `const fn` on Rust 1.58+.
3932                #[inline]
3933                pub const fn as_ptr(&self) -> *mut $int_type {
3934                    self.inner.as_ptr()
3935                }
3936            }
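            // A usage sketch for `as_ptr` (assuming the `AtomicI32` expansion of this
            // macro and a hypothetical C function that performs its own atomic access):
            //
            //     use portable_atomic::AtomicI32;
            //     extern "C" {
            //         fn my_atomic_op(arg: *mut i32); // hypothetical
            //     }
            //     let atomic = AtomicI32::new(1);
            //     // SAFETY: `my_atomic_op` is assumed to access the value only atomically.
            //     unsafe { my_atomic_op(atomic.as_ptr()) };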
3937        }
3938        // See https://github.com/taiki-e/portable-atomic/issues/180
3939        #[cfg(not(feature = "require-cas"))]
3940        cfg_no_atomic_cas! {
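        // These stubs are never callable: each is bounded by a `Has*` trait that has no
        // implementations, so any call fails to type-check with the tailored diagnostics
        // defined in the `diagnostic_helper` module below.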
3941        #[doc(hidden)]
3942        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
3943        impl<'a> $atomic_type {
3944            $cfg_no_atomic_cas_or_amo32_or_8! {
3945            #[inline]
3946            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
3947            where
3948                &'a Self: HasSwap,
3949            {
3950                unimplemented!()
3951            }
3952            } // $cfg_no_atomic_cas_or_amo32_or_8!
3953            #[inline]
3954            pub fn compare_exchange(
3955                &self,
3956                current: $int_type,
3957                new: $int_type,
3958                success: Ordering,
3959                failure: Ordering,
3960            ) -> Result<$int_type, $int_type>
3961            where
3962                &'a Self: HasCompareExchange,
3963            {
3964                unimplemented!()
3965            }
3966            #[inline]
3967            pub fn compare_exchange_weak(
3968                &self,
3969                current: $int_type,
3970                new: $int_type,
3971                success: Ordering,
3972                failure: Ordering,
3973            ) -> Result<$int_type, $int_type>
3974            where
3975                &'a Self: HasCompareExchangeWeak,
3976            {
3977                unimplemented!()
3978            }
3979            $cfg_no_atomic_cas_or_amo32_or_8! {
3980            #[inline]
3981            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
3982            where
3983                &'a Self: HasFetchAdd,
3984            {
3985                unimplemented!()
3986            }
3987            #[inline]
3988            pub fn add(&self, val: $int_type, order: Ordering)
3989            where
3990                &'a Self: HasAdd,
3991            {
3992                unimplemented!()
3993            }
3994            #[inline]
3995            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
3996            where
3997                &'a Self: HasFetchSub,
3998            {
3999                unimplemented!()
4000            }
4001            #[inline]
4002            pub fn sub(&self, val: $int_type, order: Ordering)
4003            where
4004                &'a Self: HasSub,
4005            {
4006                unimplemented!()
4007            }
4008            } // $cfg_no_atomic_cas_or_amo32_or_8!
4009            cfg_no_atomic_cas_or_amo32! {
4010            #[inline]
4011            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
4012            where
4013                &'a Self: HasFetchAnd,
4014            {
4015                unimplemented!()
4016            }
4017            #[inline]
4018            pub fn and(&self, val: $int_type, order: Ordering)
4019            where
4020                &'a Self: HasAnd,
4021            {
4022                unimplemented!()
4023            }
4024            } // cfg_no_atomic_cas_or_amo32!
4025            #[inline]
4026            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
4027            where
4028                &'a Self: HasFetchNand,
4029            {
4030                unimplemented!()
4031            }
4032            cfg_no_atomic_cas_or_amo32! {
4033            #[inline]
4034            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
4035            where
4036                &'a Self: HasFetchOr,
4037            {
4038                unimplemented!()
4039            }
4040            #[inline]
4041            pub fn or(&self, val: $int_type, order: Ordering)
4042            where
4043                &'a Self: HasOr,
4044            {
4045                unimplemented!()
4046            }
4047            #[inline]
4048            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
4049            where
4050                &'a Self: HasFetchXor,
4051            {
4052                unimplemented!()
4053            }
4054            #[inline]
4055            pub fn xor(&self, val: $int_type, order: Ordering)
4056            where
4057                &'a Self: HasXor,
4058            {
4059                unimplemented!()
4060            }
4061            } // cfg_no_atomic_cas_or_amo32!
4062            #[inline]
4063            pub fn fetch_update<F>(
4064                &self,
4065                set_order: Ordering,
4066                fetch_order: Ordering,
4067                f: F,
4068            ) -> Result<$int_type, $int_type>
4069            where
4070                F: FnMut($int_type) -> Option<$int_type>,
4071                &'a Self: HasFetchUpdate,
4072            {
4073                unimplemented!()
4074            }
4075            $cfg_no_atomic_cas_or_amo32_or_8! {
4076            #[inline]
4077            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
4078            where
4079                &'a Self: HasFetchMax,
4080            {
4081                unimplemented!()
4082            }
4083            #[inline]
4084            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
4085            where
4086                &'a Self: HasFetchMin,
4087            {
4088                unimplemented!()
4089            }
4090            } // $cfg_no_atomic_cas_or_amo32_or_8!
4091            cfg_no_atomic_cas_or_amo32! {
4092            #[inline]
4093            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
4094            where
4095                &'a Self: HasBitSet,
4096            {
4097                unimplemented!()
4098            }
4099            #[inline]
4100            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
4101            where
4102                &'a Self: HasBitClear,
4103            {
4104                unimplemented!()
4105            }
4106            #[inline]
4107            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
4108            where
4109                &'a Self: HasBitToggle,
4110            {
4111                unimplemented!()
4112            }
4113            #[inline]
4114            pub fn fetch_not(&self, order: Ordering) -> $int_type
4115            where
4116                &'a Self: HasFetchNot,
4117            {
4118                unimplemented!()
4119            }
4120            #[inline]
4121            pub fn not(&self, order: Ordering)
4122            where
4123                &'a Self: HasNot,
4124            {
4125                unimplemented!()
4126            }
4127            } // cfg_no_atomic_cas_or_amo32!
4128            #[inline]
4129            pub fn fetch_neg(&self, order: Ordering) -> $int_type
4130            where
4131                &'a Self: HasFetchNeg,
4132            {
4133                unimplemented!()
4134            }
4135            #[inline]
4136            pub fn neg(&self, order: Ordering)
4137            where
4138                &'a Self: HasNeg,
4139            {
4140                unimplemented!()
4141            }
4142        }
4143        } // cfg_no_atomic_cas!
4144        $(
4145            #[$cfg_float]
4146            atomic_int!(float,
4147                #[$cfg_float] $atomic_float_type, $float_type, $atomic_type, $int_type, $align
4148            );
4149        )?
4150    };
4151
4152    // AtomicF* impls
4153    (float,
4154        #[$cfg_float:meta]
4155        $atomic_type:ident,
4156        $float_type:ident,
4157        $atomic_int_type:ident,
4158        $int_type:ident,
4159        $align:literal
4160    ) => {
4161        doc_comment! {
4162            concat!("A floating point type which can be safely shared between threads.
4163
4164This type has the same in-memory representation as the underlying floating point type,
4165[`", stringify!($float_type), "`].
4166"
4167            ),
4168            #[cfg_attr(docsrs, doc($cfg_float))]
4169            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
4170            // will show clearer docs.
4171            #[repr(C, align($align))]
4172            pub struct $atomic_type {
4173                inner: imp::float::$atomic_type,
4174            }
4175        }
4176
4177        impl Default for $atomic_type {
4178            #[inline]
4179            fn default() -> Self {
4180                Self::new($float_type::default())
4181            }
4182        }
4183
4184        impl From<$float_type> for $atomic_type {
4185            #[inline]
4186            fn from(v: $float_type) -> Self {
4187                Self::new(v)
4188            }
4189        }
4190
4191        // UnwindSafe is implicitly implemented.
4192        #[cfg(not(portable_atomic_no_core_unwind_safe))]
4193        impl core::panic::RefUnwindSafe for $atomic_type {}
4194        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
4195        impl std::panic::RefUnwindSafe for $atomic_type {}
4196
4197        impl_debug_and_serde!($atomic_type);
4198
4199        impl $atomic_type {
4200            /// Creates a new atomic float.
4201            #[inline]
4202            #[must_use]
4203            pub const fn new(v: $float_type) -> Self {
4204                static_assert_layout!($atomic_type, $float_type);
4205                Self { inner: imp::float::$atomic_type::new(v) }
4206            }
4207
4208            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
4209            #[cfg(not(portable_atomic_no_const_mut_refs))]
4210            doc_comment! {
4211                concat!("Creates a new reference to an atomic float from a pointer.
4212
4213This is `const fn` on Rust 1.83+.
4214
4215# Safety
4216
4217* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4218  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4219* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4220* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4221  behind `ptr` must have a happens-before relationship with atomic accesses via
4222  the returned value (or vice-versa).
4223  * In other words, time periods where the value is accessed atomically may not
4224    overlap with periods where the value is accessed non-atomically.
4225  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4226    for the duration of lifetime `'a`. Most use cases should be able to follow
4227    this guideline.
4228  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4229    done from the same thread.
4230* If this atomic type is *not* lock-free:
4231  * Any accesses to the value behind `ptr` must have a happens-before relationship
4232    with accesses via the returned value (or vice-versa).
4233  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4234    be compatible with operations performed by this atomic type.
4235* This method must not be used to create overlapping or mixed-size atomic
4236  accesses, as these are not supported by the memory model.
4237
4238[valid]: core::ptr#safety"),
4239                #[inline]
4240                #[must_use]
4241                pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4242                    #[allow(clippy::cast_ptr_alignment)]
4243                    // SAFETY: guaranteed by the caller
4244                    unsafe { &*(ptr as *mut Self) }
4245                }
4246            }
4247            #[cfg(portable_atomic_no_const_mut_refs)]
4248            doc_comment! {
4249                concat!("Creates a new reference to an atomic float from a pointer.
4250
4251This is `const fn` on Rust 1.83+.
4252
4253# Safety
4254
4255* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4256  can be bigger than `align_of::<", stringify!($float_type), ">()`).
4257* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4258* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4259  behind `ptr` must have a happens-before relationship with atomic accesses via
4260  the returned value (or vice-versa).
4261  * In other words, time periods where the value is accessed atomically may not
4262    overlap with periods where the value is accessed non-atomically.
4263  * This requirement is trivially satisfied if `ptr` is never used non-atomically
4264    for the duration of lifetime `'a`. Most use cases should be able to follow
4265    this guideline.
4266  * This requirement is also trivially satisfied if all accesses (atomic or not) are
4267    done from the same thread.
4268* If this atomic type is *not* lock-free:
4269  * Any accesses to the value behind `ptr` must have a happens-before relationship
4270    with accesses via the returned value (or vice-versa).
4271  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4272    be compatible with operations performed by this atomic type.
4273* This method must not be used to create overlapping or mixed-size atomic
4274  accesses, as these are not supported by the memory model.
4275
4276[valid]: core::ptr#safety"),
4277                #[inline]
4278                #[must_use]
4279                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4280                    #[allow(clippy::cast_ptr_alignment)]
4281                    // SAFETY: guaranteed by the caller
4282                    unsafe { &*(ptr as *mut Self) }
4283                }
4284            }
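            // A usage sketch for `from_ptr` (assuming the `AtomicF32` expansion of this
            // macro and the `float` feature; `v` is never accessed non-atomically while
            // the returned reference is in use):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let mut v = 1.0_f32;
            //     let ptr: *mut f32 = &mut v;
            //     // SAFETY: `ptr` is valid, sufficiently aligned, and only accessed
            //     // through the atomic reference for its lifetime.
            //     let a = unsafe { AtomicF32::from_ptr(ptr) };
            //     a.store(2.0, Ordering::Relaxed);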
4285
4286            /// Returns `true` if operations on values of this type are lock-free.
4287            ///
4288            /// If the compiler or the platform doesn't support the necessary
4289            /// atomic instructions, global locks for every potentially
4290            /// concurrent atomic operation will be used.
4291            #[inline]
4292            #[must_use]
4293            pub fn is_lock_free() -> bool {
4294                <imp::float::$atomic_type>::is_lock_free()
4295            }
4296
4297            /// Returns `true` if operations on values of this type are lock-free.
4298            ///
4299            /// If the compiler or the platform doesn't support the necessary
4300            /// atomic instructions, global locks for every potentially
4301            /// concurrent atomic operation will be used.
4302            ///
4303            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
4304            /// this type may be lock-free even if the function returns false.
4305            #[inline]
4306            #[must_use]
4307            pub const fn is_always_lock_free() -> bool {
4308                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
4309            }
4310            #[cfg(test)]
4311            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
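            // A sanity sketch (assuming the `AtomicF32` expansion of this macro):
            // `is_always_lock_free()` returning true implies `is_lock_free()` returns
            // true; the converse need not hold when lock-freedom depends on dynamic
            // CPU feature detection.
            //
            //     use portable_atomic::AtomicF32;
            //     assert!(!AtomicF32::is_always_lock_free() || AtomicF32::is_lock_free());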
4312
4313            const_fn! {
4314                const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
4315                /// Returns a mutable reference to the underlying float.
4316                ///
4317                /// This is safe because the mutable reference guarantees that no other threads are
4318                /// concurrently accessing the atomic data.
4319                ///
4320                /// This is `const fn` on Rust 1.83+.
4321                #[inline]
4322                pub const fn get_mut(&mut self) -> &mut $float_type {
4323                    // SAFETY: the mutable reference guarantees unique ownership.
4324                    unsafe { &mut *self.as_ptr() }
4325                }
4326            }
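            // A usage sketch for `get_mut` (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let mut some_float = AtomicF32::new(1.0);
            //     *some_float.get_mut() = 2.0; // no synchronization needed: access is exclusive
            //     assert_eq!(some_float.load(Ordering::SeqCst), 2.0);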
4327
4328            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
4329            // https://github.com/rust-lang/rust/issues/76314
4330
4331            const_fn! {
4332                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
4333                /// Consumes the atomic and returns the contained value.
4334                ///
4335                /// This is safe because passing `self` by value guarantees that no other threads are
4336                /// concurrently accessing the atomic data.
4337                ///
4338                /// This is `const fn` on Rust 1.56+.
4339                #[inline]
4340                pub const fn into_inner(self) -> $float_type {
4341                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
4342                    // so they can be safely transmuted.
4343                    // (const UnsafeCell::into_inner is unstable)
4344                    unsafe { core::mem::transmute(self) }
4345                }
4346            }
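            // A usage sketch for `into_inner` (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::AtomicF32;
            //     let some_float = AtomicF32::new(5.0);
            //     assert_eq!(some_float.into_inner(), 5.0);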
4347
4348            /// Loads a value from the atomic float.
4349            ///
4350            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4351            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
4352            ///
4353            /// # Panics
4354            ///
4355            /// Panics if `order` is [`Release`] or [`AcqRel`].
4356            #[inline]
4357            #[cfg_attr(
4358                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4359                track_caller
4360            )]
4361            pub fn load(&self, order: Ordering) -> $float_type {
4362                self.inner.load(order)
4363            }
4364
4365            /// Stores a value into the atomic float.
4366            ///
4367            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4368            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
4369            ///
4370            /// # Panics
4371            ///
4372            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
4373            #[inline]
4374            #[cfg_attr(
4375                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4376                track_caller
4377            )]
4378            pub fn store(&self, val: $float_type, order: Ordering) {
4379                self.inner.store(val, order)
4380            }
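            // A usage sketch for `load`/`store` (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let some_float = AtomicF32::new(5.0);
            //     assert_eq!(some_float.load(Ordering::Relaxed), 5.0);
            //     some_float.store(10.0, Ordering::Relaxed);
            //     assert_eq!(some_float.load(Ordering::Relaxed), 10.0);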
4381
4382            cfg_has_atomic_cas_or_amo32! {
4383            /// Stores a value into the atomic float, returning the previous value.
4384            ///
4385            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
4386            /// of this operation. All ordering modes are possible. Note that using
4387            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4388            /// using [`Release`] makes the load part [`Relaxed`].
4389            #[inline]
4390            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4391            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
4392                self.inner.swap(val, order)
4393            }
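            // A usage sketch for `swap` (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let some_float = AtomicF32::new(5.0);
            //     assert_eq!(some_float.swap(10.0, Ordering::Relaxed), 5.0);
            //     assert_eq!(some_float.load(Ordering::Relaxed), 10.0);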
4394
4395            cfg_has_atomic_cas! {
4396            /// Stores a value into the atomic float if the current value is the same as
4397            /// the `current` value.
4398            ///
4399            /// The return value is a result indicating whether the new value was written and
4400            /// containing the previous value. On success this value is guaranteed to be equal to
4401            /// `current`.
4402            ///
4403            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
4404            /// ordering of this operation. `success` describes the required ordering for the
4405            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4406            /// `failure` describes the required ordering for the load operation that takes place when
4407            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4408            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4409            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4410            ///
4411            /// # Panics
4412            ///
4413            /// Panics if `failure` is [`Release`] or [`AcqRel`].
4414            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4415            #[inline]
4416            #[cfg_attr(
4417                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4418                track_caller
4419            )]
4420            pub fn compare_exchange(
4421                &self,
4422                current: $float_type,
4423                new: $float_type,
4424                success: Ordering,
4425                failure: Ordering,
4426            ) -> Result<$float_type, $float_type> {
4427                self.inner.compare_exchange(current, new, success, failure)
4428            }
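            // A usage sketch for `compare_exchange` (assuming the `AtomicF32` expansion
            // of this macro; the usual caveats about comparing floats for equality apply):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let some_float = AtomicF32::new(5.0);
            //     assert_eq!(
            //         some_float.compare_exchange(5.0, 10.0, Ordering::Acquire, Ordering::Relaxed),
            //         Ok(5.0),
            //     );
            //     assert_eq!(some_float.load(Ordering::Relaxed), 10.0);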
4429
4430            /// Stores a value into the atomic float if the current value is the same as
4431            /// the `current` value.
4432            /// Unlike [`compare_exchange`](Self::compare_exchange),
4433            /// this function is allowed to spuriously fail even
4434            /// when the comparison succeeds, which can result in more efficient code on some
4435            /// platforms. The return value is a result indicating whether the new value was
4436            /// written and containing the previous value.
4437            ///
4438            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
4439            /// ordering of this operation. `success` describes the required ordering for the
4440            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4441            /// `failure` describes the required ordering for the load operation that takes place when
4442            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4443            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4444            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4445            ///
4446            /// # Panics
4447            ///
4448            /// Panics if `failure` is [`Release`] or [`AcqRel`].
4449            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4450            #[inline]
4451            #[cfg_attr(
4452                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4453                track_caller
4454            )]
4455            pub fn compare_exchange_weak(
4456                &self,
4457                current: $float_type,
4458                new: $float_type,
4459                success: Ordering,
4460                failure: Ordering,
4461            ) -> Result<$float_type, $float_type> {
4462                self.inner.compare_exchange_weak(current, new, success, failure)
4463            }
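            // A usage sketch for `compare_exchange_weak` in the usual retry loop
            // (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let val = AtomicF32::new(4.0);
            //     let mut old = val.load(Ordering::Relaxed);
            //     loop {
            //         let new = old * 2.0;
            //         match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
            //             Ok(_) => break,
            //             Err(x) => old = x,
            //         }
            //     }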
4464
4465            /// Adds to the current value, returning the previous value.
4466            ///
4467            /// Following IEEE 754 arithmetic, overflow does not wrap; a result too large in magnitude to represent becomes infinite.
4468            ///
4469            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
4470            /// of this operation. All ordering modes are possible. Note that using
4471            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4472            /// using [`Release`] makes the load part [`Relaxed`].
4473            #[inline]
4474            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4475            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
4476                self.inner.fetch_add(val, order)
4477            }
4478
4479            /// Subtracts from the current value, returning the previous value.
4480            ///
4481            /// Following IEEE 754 arithmetic, overflow does not wrap; a result too large in magnitude to represent becomes infinite.
4482            ///
4483            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
4484            /// of this operation. All ordering modes are possible. Note that using
4485            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4486            /// using [`Release`] makes the load part [`Relaxed`].
4487            #[inline]
4488            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4489            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
4490                self.inner.fetch_sub(val, order)
4491            }
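            // A usage sketch for `fetch_add`/`fetch_sub` (assuming the `AtomicF32`
            // expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let foo = AtomicF32::new(0.0);
            //     assert_eq!(foo.fetch_add(10.0, Ordering::SeqCst), 0.0);
            //     assert_eq!(foo.fetch_sub(3.0, Ordering::SeqCst), 10.0);
            //     assert_eq!(foo.load(Ordering::SeqCst), 7.0);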
4492
4493            /// Fetches the value, and applies a function to it that returns an optional
4494            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
4495            /// `Err(previous_value)`.
4496            ///
4497            /// Note: This may call the function multiple times if the value has been changed by other threads in
4498            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
4499            /// only once to the stored value.
4500            ///
4501            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
4502            /// The first describes the required ordering for when the operation finally succeeds while the second
4503            /// describes the required ordering for loads. These correspond to the success and failure orderings of
4504            /// [`compare_exchange`](Self::compare_exchange) respectively.
4505            ///
4506            /// Using [`Acquire`] as success ordering makes the store part
4507            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
4508            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4509            ///
4510            /// # Panics
4511            ///
4512            /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
4513            ///
4514            /// # Considerations
4515            ///
4516            /// This method is not magic; it is not provided by the hardware.
4517            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
4518            /// and suffers from the same drawbacks.
4519            /// In particular, this method will not circumvent the [ABA Problem].
4520            ///
4521            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
4522            #[inline]
4523            #[cfg_attr(
4524                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4525                track_caller
4526            )]
4527            pub fn fetch_update<F>(
4528                &self,
4529                set_order: Ordering,
4530                fetch_order: Ordering,
4531                mut f: F,
4532            ) -> Result<$float_type, $float_type>
4533            where
4534                F: FnMut($float_type) -> Option<$float_type>,
4535            {
4536                let mut prev = self.load(fetch_order);
4537                while let Some(next) = f(prev) {
4538                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
4539                        x @ Ok(_) => return x,
4540                        Err(next_prev) => prev = next_prev,
4541                    }
4542                }
4543                Err(prev)
4544            }
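            // A usage sketch for `fetch_update` (assuming the `AtomicF32` expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let x = AtomicF32::new(7.0);
            //     assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7.0));
            //     assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |v| Some(v + 1.0)), Ok(7.0));
            //     assert_eq!(x.load(Ordering::SeqCst), 8.0);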
4545
4546            /// Maximum with the current value.
4547            ///
4548            /// Finds the maximum of the current value and the argument `val`, and
4549            /// sets the new value to the result.
4550            ///
4551            /// Returns the previous value.
4552            ///
4553            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
4554            /// of this operation. All ordering modes are possible. Note that using
4555            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4556            /// using [`Release`] makes the load part [`Relaxed`].
4557            #[inline]
4558            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4559            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
4560                self.inner.fetch_max(val, order)
4561            }
4562
4563            /// Minimum with the current value.
4564            ///
4565            /// Finds the minimum of the current value and the argument `val`, and
4566            /// sets the new value to the result.
4567            ///
4568            /// Returns the previous value.
4569            ///
4570            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
4571            /// of this operation. All ordering modes are possible. Note that using
4572            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4573            /// using [`Release`] makes the load part [`Relaxed`].
4574            #[inline]
4575            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4576            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
4577                self.inner.fetch_min(val, order)
4578            }
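            // A usage sketch for `fetch_max`/`fetch_min` (assuming the `AtomicF32`
            // expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let foo = AtomicF32::new(23.0);
            //     assert_eq!(foo.fetch_max(42.0, Ordering::SeqCst), 23.0);
            //     assert_eq!(foo.fetch_min(10.0, Ordering::SeqCst), 42.0);
            //     assert_eq!(foo.load(Ordering::SeqCst), 10.0);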
4579            } // cfg_has_atomic_cas!
4580
4581            /// Negates the current value, and sets the new value to the result.
4582            ///
4583            /// Returns the previous value.
4584            ///
4585            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
4586            /// of this operation. All ordering modes are possible. Note that using
4587            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4588            /// using [`Release`] makes the load part [`Relaxed`].
4589            #[inline]
4590            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4591            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
4592                self.inner.fetch_neg(order)
4593            }
4594
4595            /// Computes the absolute value of the current value, and sets the
4596            /// new value to the result.
4597            ///
4598            /// Returns the previous value.
4599            ///
4600            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
4601            /// of this operation. All ordering modes are possible. Note that using
4602            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4603            /// using [`Release`] makes the load part [`Relaxed`].
4604            #[inline]
4605            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4606            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
4607                self.inner.fetch_abs(order)
4608            }
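            // A usage sketch for `fetch_neg`/`fetch_abs` (assuming the `AtomicF32`
            // expansion of this macro):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let foo = AtomicF32::new(-5.0);
            //     assert_eq!(foo.fetch_neg(Ordering::SeqCst), -5.0); // stores 5.0
            //     assert_eq!(foo.fetch_abs(Ordering::SeqCst), 5.0); // already non-negative
            //     assert_eq!(foo.load(Ordering::SeqCst), 5.0);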
4609            } // cfg_has_atomic_cas_or_amo32!
4610
4611            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
4612            doc_comment! {
4613                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4614
4615See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4616portability of this operation (there are almost no issues).
4617
4618This is `const fn` on Rust 1.58+."),
4619                #[inline]
4620                pub const fn as_bits(&self) -> &$atomic_int_type {
4621                    self.inner.as_bits()
4622                }
4623            }
4624            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
4625            doc_comment! {
4626                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4627
4628See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4629portability of this operation (there are almost no issues).
4630
4631This is `const fn` on Rust 1.58+."),
4632                #[inline]
4633                pub fn as_bits(&self) -> &$atomic_int_type {
4634                    self.inner.as_bits()
4635                }
4636            }
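            // A usage sketch for `as_bits` (assuming the `AtomicF32` expansion of this
            // macro, whose bits view is `AtomicU32`):
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let x = AtomicF32::new(1.0);
            //     assert_eq!(x.as_bits().load(Ordering::Relaxed), 1.0_f32.to_bits());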
4637
4638            const_fn! {
4639                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
4640                /// Returns a mutable pointer to the underlying float.
4641                ///
4642                /// Returning an `*mut` pointer from a shared reference to this atomic is
4643                /// safe because the atomic types work with interior mutability. Any use of
4644                /// the returned raw pointer requires an `unsafe` block and has to uphold
4645                /// the safety requirements. If there is concurrent access, note the following
4646                /// additional safety requirements:
4647                ///
4648                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
4649                ///   operations on it must be atomic.
4650                /// - Otherwise, any concurrent operations on it must be compatible with
4651                ///   operations performed by this atomic type.
4652                ///
4653                /// This is `const fn` on Rust 1.58+.
4654                #[inline]
4655                pub const fn as_ptr(&self) -> *mut $float_type {
4656                    self.inner.as_ptr()
4657                }
4658            }
4659        }
4660        // See https://github.com/taiki-e/portable-atomic/issues/180
4661        #[cfg(not(feature = "require-cas"))]
4662        cfg_no_atomic_cas! {
4663        #[doc(hidden)]
4664        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
4665        impl<'a> $atomic_type {
4666            cfg_no_atomic_cas_or_amo32! {
4667            #[inline]
4668            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
4669            where
4670                &'a Self: HasSwap,
4671            {
4672                unimplemented!()
4673            }
4674            } // cfg_no_atomic_cas_or_amo32!
4675            #[inline]
4676            pub fn compare_exchange(
4677                &self,
4678                current: $float_type,
4679                new: $float_type,
4680                success: Ordering,
4681                failure: Ordering,
4682            ) -> Result<$float_type, $float_type>
4683            where
4684                &'a Self: HasCompareExchange,
4685            {
4686                unimplemented!()
4687            }
4688            #[inline]
4689            pub fn compare_exchange_weak(
4690                &self,
4691                current: $float_type,
4692                new: $float_type,
4693                success: Ordering,
4694                failure: Ordering,
4695            ) -> Result<$float_type, $float_type>
4696            where
4697                &'a Self: HasCompareExchangeWeak,
4698            {
4699                unimplemented!()
4700            }
4701            #[inline]
4702            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
4703            where
4704                &'a Self: HasFetchAdd,
4705            {
4706                unimplemented!()
4707            }
4708            #[inline]
4709            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
4710            where
4711                &'a Self: HasFetchSub,
4712            {
4713                unimplemented!()
4714            }
4715            #[inline]
4716            pub fn fetch_update<F>(
4717                &self,
4718                set_order: Ordering,
4719                fetch_order: Ordering,
4720                f: F,
4721            ) -> Result<$float_type, $float_type>
4722            where
4723                F: FnMut($float_type) -> Option<$float_type>,
4724                &'a Self: HasFetchUpdate,
4725            {
4726                unimplemented!()
4727            }
4728            #[inline]
4729            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
4730            where
4731                &'a Self: HasFetchMax,
4732            {
4733                unimplemented!()
4734            }
4735            #[inline]
4736            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
4737            where
4738                &'a Self: HasFetchMin,
4739            {
4740                unimplemented!()
4741            }
4742            cfg_no_atomic_cas_or_amo32! {
4743            #[inline]
4744            pub fn fetch_neg(&self, order: Ordering) -> $float_type
4745            where
4746                &'a Self: HasFetchNeg,
4747            {
4748                unimplemented!()
4749            }
4750            #[inline]
4751            pub fn fetch_abs(&self, order: Ordering) -> $float_type
4752            where
4753                &'a Self: HasFetchAbs,
4754            {
4755                unimplemented!()
4756            }
4757            } // cfg_no_atomic_cas_or_amo32!
4758        }
4759        } // cfg_no_atomic_cas!
4760    };
4761}
4762
4763cfg_has_atomic_ptr! {
4764    #[cfg(target_pointer_width = "16")]
4765    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4766    #[cfg(target_pointer_width = "16")]
4767    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4768    #[cfg(target_pointer_width = "32")]
4769    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4770    #[cfg(target_pointer_width = "32")]
4771    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4772    #[cfg(target_pointer_width = "64")]
4773    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4774    #[cfg(target_pointer_width = "64")]
4775    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4776    #[cfg(target_pointer_width = "128")]
4777    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4778    #[cfg(target_pointer_width = "128")]
4779    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4780}
4781
4782cfg_has_atomic_8! {
4783    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4784    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4785}
4786cfg_has_atomic_16! {
4787    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4788    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8,
4789        #[cfg(all(feature = "float", portable_atomic_unstable_f16))] AtomicF16, f16);
4790}
4791cfg_has_atomic_32! {
4792    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4793    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4794        #[cfg(feature = "float")] AtomicF32, f32);
4795}
4796cfg_has_atomic_64! {
4797    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4798    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4799        #[cfg(feature = "float")] AtomicF64, f64);
4800}
4801cfg_has_atomic_128! {
4802    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4803    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4804        #[cfg(all(feature = "float", portable_atomic_unstable_f128))] AtomicF128, f128);
4805}
4806
4807// See https://github.com/taiki-e/portable-atomic/issues/180
4808#[cfg(not(feature = "require-cas"))]
4809cfg_no_atomic_cas! {
4810cfg_no_atomic_cas_or_amo32! {
4811#[cfg(feature = "float")]
4812use self::diagnostic_helper::HasFetchAbs;
4813use self::diagnostic_helper::{
4814    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
4815    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
4816};
4817} // cfg_no_atomic_cas_or_amo32!
4818cfg_no_atomic_cas_or_amo8! {
4819use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
4820} // cfg_no_atomic_cas_or_amo8!
4821#[cfg_attr(not(feature = "float"), allow(unused_imports))]
4822use self::diagnostic_helper::{
4823    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
4824    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
4825};
4826#[cfg_attr(
4827    any(
4828        all(
4829            portable_atomic_no_atomic_load_store,
4830            not(any(
4831                target_arch = "avr",
4832                target_arch = "bpf",
4833                target_arch = "msp430",
4834                target_arch = "riscv32",
4835                target_arch = "riscv64",
4836                feature = "critical-section",
4837                portable_atomic_unsafe_assume_single_core,
4838            )),
4839        ),
4840        not(feature = "float"),
4841    ),
4842    allow(dead_code, unreachable_pub)
4843)]
4844#[allow(unknown_lints, unnameable_types)] // Not public API. unnameable_types is available on Rust 1.79+
4845mod diagnostic_helper {
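    // How these traits work: they are deliberately left without any implementations,
    // so a call to one of the stub methods above fails to type-check, and
    // `#[diagnostic::on_unimplemented]` (stable since Rust 1.78) replaces the generic
    // "trait bound not satisfied" error with the tailored messages below.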
4846    cfg_no_atomic_cas_or_amo8! {
4847    #[doc(hidden)]
4848    #[cfg_attr(
4849        not(portable_atomic_no_diagnostic_namespace),
4850        diagnostic::on_unimplemented(
4851            message = "`swap` requires atomic CAS, which is not available on this target by default",
4852            label = "this associated function is not available on this target by default",
4853            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4854            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4855        )
4856    )]
4857    pub trait HasSwap {}
4858    } // cfg_no_atomic_cas_or_amo8!
4859    #[doc(hidden)]
4860    #[cfg_attr(
4861        not(portable_atomic_no_diagnostic_namespace),
4862        diagnostic::on_unimplemented(
4863            message = "`compare_exchange` requires atomic CAS, which is not available on this target by default",
4864            label = "this associated function is not available on this target by default",
4865            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4866            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4867        )
4868    )]
4869    pub trait HasCompareExchange {}
4870    #[doc(hidden)]
4871    #[cfg_attr(
4872        not(portable_atomic_no_diagnostic_namespace),
4873        diagnostic::on_unimplemented(
4874            message = "`compare_exchange_weak` requires atomic CAS, which is not available on this target by default",
4875            label = "this associated function is not available on this target by default",
4876            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4877            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4878        )
4879    )]
4880    pub trait HasCompareExchangeWeak {}
4881    #[doc(hidden)]
4882    #[cfg_attr(
4883        not(portable_atomic_no_diagnostic_namespace),
4884        diagnostic::on_unimplemented(
4885            message = "`fetch_add` requires atomic CAS, which is not available on this target by default",
4886            label = "this associated function is not available on this target by default",
4887            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4888            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4889        )
4890    )]
4891    pub trait HasFetchAdd {}
4892    cfg_no_atomic_cas_or_amo8! {
4893    #[doc(hidden)]
4894    #[cfg_attr(
4895        not(portable_atomic_no_diagnostic_namespace),
4896        diagnostic::on_unimplemented(
4897            message = "`add` requires atomic CAS, which is not available on this target by default",
4898            label = "this associated function is not available on this target by default",
4899            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4900            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4901        )
4902    )]
4903    pub trait HasAdd {}
4904    } // cfg_no_atomic_cas_or_amo8!
4905    #[doc(hidden)]
4906    #[cfg_attr(
4907        not(portable_atomic_no_diagnostic_namespace),
4908        diagnostic::on_unimplemented(
4909            message = "`fetch_sub` requires atomic CAS, which is not available on this target by default",
4910            label = "this associated function is not available on this target by default",
4911            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4912            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4913        )
4914    )]
4915    pub trait HasFetchSub {}
4916    cfg_no_atomic_cas_or_amo8! {
4917    #[doc(hidden)]
4918    #[cfg_attr(
4919        not(portable_atomic_no_diagnostic_namespace),
4920        diagnostic::on_unimplemented(
4921            message = "`sub` requires atomic CAS, which is not available on this target by default",
4922            label = "this associated function is not available on this target by default",
4923            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4924            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4925        )
4926    )]
4927    pub trait HasSub {}
4928    } // cfg_no_atomic_cas_or_amo8!
4929    cfg_no_atomic_cas_or_amo32! {
4930    #[doc(hidden)]
4931    #[cfg_attr(
4932        not(portable_atomic_no_diagnostic_namespace),
4933        diagnostic::on_unimplemented(
4934            message = "`fetch_ptr_add` requires atomic CAS, which is not available on this target by default",
4935            label = "this associated function is not available on this target by default",
4936            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4937            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4938        )
4939    )]
4940    pub trait HasFetchPtrAdd {}
4941    #[doc(hidden)]
4942    #[cfg_attr(
4943        not(portable_atomic_no_diagnostic_namespace),
4944        diagnostic::on_unimplemented(
4945            message = "`fetch_ptr_sub` requires atomic CAS, which is not available on this target by default",
4946            label = "this associated function is not available on this target by default",
4947            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4948            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4949        )
4950    )]
4951    pub trait HasFetchPtrSub {}
4952    #[doc(hidden)]
4953    #[cfg_attr(
4954        not(portable_atomic_no_diagnostic_namespace),
4955        diagnostic::on_unimplemented(
4956            message = "`fetch_byte_add` requires atomic CAS, which is not available on this target by default",
4957            label = "this associated function is not available on this target by default",
4958            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4959            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4960        )
4961    )]
4962    pub trait HasFetchByteAdd {}
4963    #[doc(hidden)]
4964    #[cfg_attr(
4965        not(portable_atomic_no_diagnostic_namespace),
4966        diagnostic::on_unimplemented(
4967            message = "`fetch_byte_sub` requires atomic CAS, which is not available on this target by default",
4968            label = "this associated function is not available on this target by default",
4969            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4970            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4971        )
4972    )]
4973    pub trait HasFetchByteSub {}
4974    #[doc(hidden)]
4975    #[cfg_attr(
4976        not(portable_atomic_no_diagnostic_namespace),
4977        diagnostic::on_unimplemented(
4978            message = "`fetch_and` requires atomic CAS, which is not available on this target by default",
4979            label = "this associated function is not available on this target by default",
4980            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4981            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4982        )
4983    )]
4984    pub trait HasFetchAnd {}
4985    #[doc(hidden)]
4986    #[cfg_attr(
4987        not(portable_atomic_no_diagnostic_namespace),
4988        diagnostic::on_unimplemented(
4989            message = "`and` requires atomic CAS, which is not available on this target by default",
4990            label = "this associated function is not available on this target by default",
4991            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
4992            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4993        )
4994    )]
4995    pub trait HasAnd {}
4996    } // cfg_no_atomic_cas_or_amo32!
4997    #[doc(hidden)]
4998    #[cfg_attr(
4999        not(portable_atomic_no_diagnostic_namespace),
5000        diagnostic::on_unimplemented(
5001            message = "`fetch_nand` requires atomic CAS, which is not available on this target by default",
5002            label = "this associated function is not available on this target by default",
5003            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
5004            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5005        )
5006    )]
5007    pub trait HasFetchNand {}
5008    cfg_no_atomic_cas_or_amo32! {
5009    #[doc(hidden)]
5010    #[cfg_attr(
5011        not(portable_atomic_no_diagnostic_namespace),
5012        diagnostic::on_unimplemented(
5013            message = "`fetch_or` requires atomic CAS, which is not available on this target by default",
5014            label = "this associated function is not available on this target by default",
5015            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
5016            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5017        )
5018    )]
5019    pub trait HasFetchOr {}
5020    #[doc(hidden)]
5021    #[cfg_attr(
5022        not(portable_atomic_no_diagnostic_namespace),
5023        diagnostic::on_unimplemented(
5024            message = "`or` requires atomic CAS, which is not available on this target by default",
5025            label = "this associated function is not available on this target by default",
5026            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
5027            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5028        )
5029    )]
5030    pub trait HasOr {}
5031    #[doc(hidden)]
5032    #[cfg_attr(
5033        not(portable_atomic_no_diagnostic_namespace),
5034        diagnostic::on_unimplemented(
5035            message = "`fetch_xor` requires atomic CAS, which is not available on this target by default",
5036            label = "this associated function is not available on this target by default",
5037            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
5038            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5039        )
5040    )]
5041    pub trait HasFetchXor {}
5042    #[doc(hidden)]
5043    #[cfg_attr(
5044        not(portable_atomic_no_diagnostic_namespace),
5045        diagnostic::on_unimplemented(
5046            message = "`xor` requires atomic CAS but not available on this target by default",
5047            label = "this associated function is not available on this target by default",
5048            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5049            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5050        )
5051    )]
5052    pub trait HasXor {}
5053    #[doc(hidden)]
5054    #[cfg_attr(
5055        not(portable_atomic_no_diagnostic_namespace),
5056        diagnostic::on_unimplemented(
5057            message = "`fetch_not` requires atomic CAS but not available on this target by default",
5058            label = "this associated function is not available on this target by default",
5059            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5060            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5061        )
5062    )]
5063    pub trait HasFetchNot {}
5064    #[doc(hidden)]
5065    #[cfg_attr(
5066        not(portable_atomic_no_diagnostic_namespace),
5067        diagnostic::on_unimplemented(
5068            message = "`not` requires atomic CAS but not available on this target by default",
5069            label = "this associated function is not available on this target by default",
5070            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5071            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5072        )
5073    )]
5074    pub trait HasNot {}
5075    } // cfg_no_atomic_cas_or_amo32!
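    // Like `fetch_nand`, `fetch_neg`/`neg` have no AMO equivalent (there is no
    // atomic negate instruction), so these markers are defined whenever CAS is
    // unavailable, even if 32-bit AMO is present.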
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNeg {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasNeg {}
    cfg_no_atomic_cas_or_amo32! {
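    // `fetch_abs` exists only on the atomic float types, hence the `float`
    // feature gate. It can live in the AMO group because the absolute value of
    // an IEEE 754 float is obtained by clearing the sign bit, i.e. an atomic
    // AND, which a 32-bit AMO instruction can express.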
    #[cfg(feature = "float")]
    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_abs` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAbs {}
    } // cfg_no_atomic_cas_or_amo32!
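    // `fetch_min`/`fetch_max`/`fetch_update` are again gated by
    // `cfg_no_atomic_cas!` alone: `fetch_update` is a CAS loop by definition,
    // and min/max on the sub-word types cannot be composed from 32-bit bitwise
    // AMOs the way the operations above can, so all of these still require CAS.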
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_min` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMin {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_max` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMax {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_update` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchUpdate {}
    cfg_no_atomic_cas_or_amo32! {
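    // `bit_set`/`bit_clear`/`bit_toggle` reduce to `fetch_or`/`fetch_and`/
    // `fetch_xor` with a single-bit mask, so they share the
    // `cfg_no_atomic_cas_or_amo32!` gating of the bitwise operations above.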
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_set` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitSet {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_clear` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitClear {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_toggle` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling either the `critical-section` feature or the `unsafe-assume-single-core` feature (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitToggle {}
    } // cfg_no_atomic_cas_or_amo32!
}
} // cfg_no_atomic_cas!
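// How these markers reach users (an illustrative sketch, assuming the usual
// never-implemented-marker pattern; the method body and module path below are
// hypothetical): on targets without the required atomics, a method can be
// declared with a bound on the corresponding marker trait. The trait is never
// implemented on such targets, so any call fails to compile, and on Rust
// 1.78+, where the `diagnostic` attribute namespace is stable (hence the
// `portable_atomic_no_diagnostic_namespace` guards above),
// `diagnostic::on_unimplemented` replaces the generic "trait bound not
// satisfied" error with the targeted guidance from the attributes:
//
//     impl AtomicU32 {
//         #[inline]
//         pub fn fetch_or(&self, val: u32, order: Ordering) -> u32
//         where
//             Self: diagnostic_helper::HasFetchOr, // never satisfied here
//         {
//             unreachable!()
//         }
//     }
//
// A user calling `a.fetch_or(1, Ordering::SeqCst)` on such a target then sees
// "`fetch_or` requires atomic CAS, ..." with the notes pointing at the
// `critical-section` / `unsafe-assume-single-core` features, instead of an
// opaque unsatisfied-bound or missing-method error.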