// SPDX-License-Identifier: Apache-2.0 OR MIT

/*!
<!-- Note: Document from sync-markdown-to-rustdoc:start through sync-markdown-to-rustdoc:end
     is synchronized from README.md. Any changes to that range are not preserved. -->
<!-- tidy:sync-markdown-to-rustdoc:start -->

Portable atomic types including support for 128-bit atomics, atomic float, etc.

- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
- Provide `AtomicI128` and `AtomicU128`.
- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide `AtomicF16` and `AtomicF128` for [unstable `f16` and `f128`](https://github.com/rust-lang/rust/issues/116909). ([optional, requires the `float` feature and unstable cfgs](#optional-features-float))
- Provide atomic load/store for targets where atomics are not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108).
- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not), and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383), available on Rust 1.34+. (See the sketch below.)
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 that cause an LLVM error, etc.
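
A minimal sketch of the "stable equivalents" point above: `fetch_update` works through portable-atomic even on compilers where the standard library version is unavailable (the values here are illustrative):

```rust
use portable_atomic::{AtomicUsize, Ordering};

let x = AtomicUsize::new(7);
// fetch_update retries a CAS loop until the closure's result is stored.
let prev = x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |v| Some(v + 1));
assert_eq!(prev, Ok(7));
assert_eq!(x.load(Ordering::SeqCst), 8);
```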

<!-- TODO:
- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
- mention optimizations not available in the standard library's equivalents
-->

A portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
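
As a sketch (assuming `portable-atomic-util` is added as a dependency with its `alloc` feature enabled), it is used as a drop-in replacement:

```rust,ignore
// Requires `portable-atomic-util = { version = "...", features = ["alloc"] }` in Cargo.toml.
use portable_atomic_util::Arc;

let a = Arc::new(123_i32);
let b = Arc::clone(&a); // reference count is updated using portable-atomic's atomics
assert_eq!(*a, *b);
```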

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
portable-atomic = "1"
```

The default features are mainly for users who use atomics larger than the pointer width.
If you don't need them, disabling the default features may reduce code size and compile time slightly.

```toml
[dependencies]
portable-atomic = { version = "1", default-features = false }
```

If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow portable-atomic to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets requiring additional action on the user side to provide atomic CAS.

```toml
[dependencies]
portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
```

(Since 1.8, portable-atomic can display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/181) even without the `require-cas` feature when the rustc version is 1.78+. However, since the `require-cas` feature also allows rejecting builds at an earlier stage, we recommend enabling it unless doing so causes [problems](https://github.com/matklad/once_cell/pull/267).)

## 128-bit atomics support

Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (nightly only); otherwise the fallback implementation is used.

On x86_64, even if `cmpxchg16b` is not available at compile-time (Note: the `cmpxchg16b` target feature is enabled by default only on Apple, Windows (except Windows 7), and Fuchsia targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is available at neither compile-time nor run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.

128-bit atomics are usually implemented using inline assembly; under Miri or ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead of inline assembly if possible.

See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
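
As a minimal sketch, 128-bit atomics are used like any other atomic integer (on targets without native support, the default `fallback` feature provides the implementation):

```rust
use portable_atomic::{AtomicU128, Ordering};

let x = AtomicU128::new(1u128 << 100);
// Works whether the target has native 128-bit atomics or uses the fallback.
x.fetch_add(1, Ordering::SeqCst);
assert_eq!(x.load(Ordering::SeqCst), (1u128 << 100) + 1);
```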

## <a name="optional-features"></a><a name="optional-cfg"></a>Optional features/cfgs

portable-atomic provides features and cfgs to allow enabling specific APIs and customizing its behavior.

Some options have both a feature and a cfg. When both exist, it indicates that the feature does not follow Cargo's recommendation that [features should be additive](https://doc.rust-lang.org/nightly/cargo/reference/features.html#feature-unification). Therefore, the maintainer's recommendation is to use the cfg instead of the feature. However, in the embedded ecosystem it is very common to use features in such places, so these options provide both and you can choose based on your preference.

<details>
<summary>How to enable cfg (click to show)</summary>

One way to enable a cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):

```toml
# .cargo/config.toml
[target.<target>]
rustflags = ["--cfg", "portable_atomic_unsafe_assume_single_core"]
```

Or set the environment variable:

```sh
RUSTFLAGS="--cfg portable_atomic_unsafe_assume_single_core" cargo ...
```

</details>

- <a name="optional-features-fallback"></a>**`fallback` feature** *(enabled by default)*<br>
  Enable fallback implementations.

  This enables atomic types larger than the width supported by the atomic instructions available on the current target. If the current target supports 128-bit atomics, this is a no-op.

  This uses lock-based fallback implementations by default. The following features/cfgs change this behavior:
  - `unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg: Use fallback implementations that disable interrupts instead of using locks.

- <a name="optional-features-float"></a>**`float` feature**<br>
  Provide `AtomicF{32,64}`. (A usage sketch follows the note below.)

  If you want atomic types for the unstable float types ([`f16` and `f128`](https://github.com/rust-lang/rust/issues/116909)), enable the corresponding unstable cfg (`portable_atomic_unstable_f16` cfg for `AtomicF16`, `portable_atomic_unstable_f128` cfg for `AtomicF128`; [unstable options are provided only as cfgs, never as both a feature and a cfg](https://github.com/taiki-e/portable-atomic/pull/200#issuecomment-2682252991)).

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - Atomic float's `fetch_{add,sub,min,max}` are usually implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. As an exception, AArch64 with FEAT_LSFE and GPU targets have atomic float instructions, and we use them on AArch64 when the `lsfe` target feature is available at compile-time. We [plan to use atomic float instructions for GPU targets as well in the future.](https://github.com/taiki-e/portable-atomic/issues/34)
> - Unstable cfgs are outside of the normal semver guarantees, and minor or patch versions of portable-atomic may make breaking changes to them at any time.

</div>
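
As a sketch, with the `float` feature enabled (`features = ["float"]` in `Cargo.toml`):

```rust,ignore
use portable_atomic::{AtomicF32, Ordering};

let f = AtomicF32::new(1.5);
// Implemented as a CAS loop on most targets, per the note above.
f.fetch_add(0.5, Ordering::SeqCst);
assert_eq!(f.load(Ordering::SeqCst), 2.0);
```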

- <a name="optional-features-std"></a>**`std` feature**<br>
  Use `std`.

- <a name="optional-features-require-cas"></a>**`require-cas` feature**<br>
  Emit a compile error if atomic CAS is not available. See the [Usage](#usage) section for usage of this feature.

- <a name="optional-features-serde"></a>**`serde` feature**<br>
  Implement `serde::{Serialize,Deserialize}` for atomic types.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [serde].

- <a name="optional-features-critical-section"></a>**`critical-section` feature**<br>
  Use [critical-section] to provide atomic CAS for targets where atomic CAS is not available in the standard library.

  `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)](#optional-features-unsafe-assume-single-core) can't be used,
  such as multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts
  needs extra care due to e.g. real-time requirements.

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - When enabling this feature, you should provide a suitable critical section implementation for the current target; see the [critical-section] documentation for details on how to do so.
> - With this feature, critical sections are taken for all atomic operations, while with the `unsafe-assume-single-core` feature [some operations](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md#no-disable-interrupts) don't require disabling interrupts. Therefore, for better performance, if all the `critical-section` implementation for your target does is disable interrupts, prefer the `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) instead.
> - It is usually **discouraged** to always enable this feature in libraries that depend on `portable-atomic`.
>
>   Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) more efficient implementations (the implementations provided by the `unsafe-assume-single-core` feature mentioned above, the implementation proposed in [#60], etc.). Also, targets that are currently unsupported may be supported in the future.
>
>   The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)
>
>   See also [this discussion](https://github.com/matklad/once_cell/issues/264#issuecomment-2352654806).
>
>   As an example, the end user's `Cargo.toml` that uses a crate that provides a critical-section implementation and a crate that depends on portable-atomic as an option would be expected to look like this:
>
>   ```toml
>   [dependencies]
>   portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
>   crate-provides-critical-section-impl = "..."
>   crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
>   ```
>
> - Enabling both this feature and the `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) will result in a compile error.
> - The MSRV when this feature is enabled depends on the MSRV of [critical-section].

</div>

- <a name="optional-features-unsafe-assume-single-core"></a><a name="optional-cfg-unsafe-assume-single-core"></a>**`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg**<br>
  Assume that the target is single-core and that the privileged instructions required to disable interrupts are available.

  - When this feature/cfg is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.
  - When both this feature/cfg and the enabled-by-default `fallback` feature are enabled, this crate provides atomic types larger than the width supported by native instructions by disabling interrupts.

<div class="rustdoc-alert rustdoc-alert-warning">

> **⚠ Warning**
>
> This feature/cfg is `unsafe`; note the following safety requirements:
> - Enabling this feature/cfg for multi-core systems is always **unsound**.
> - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
>
>   Enabling this feature/cfg in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.
>
>   The following are known cases:
>   - On pre-v6 Arm, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you need to also enable the `disable-fiq` feature (or `portable_atomic_disable_fiq` cfg).
>   - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you also enable the `s-mode` feature (or `portable_atomic_s_mode` cfg), this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware.
>
>   Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature/cfg.
>
>   See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).

</div>

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - It is **very strongly discouraged** to enable this feature/cfg in libraries that depend on `portable-atomic`.
>
>   The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature/cfg. (However, it may make sense to enable this feature/cfg by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)
> - Enabling this feature/cfg for unsupported architectures will result in a compile error.
>   - Armv6-M (thumbv6m), pre-v6 Arm (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported. (Since all MSP430 and AVR chips are single-core, we always provide atomic CAS for them without this feature/cfg.)
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target is not supported yet.
> - Enabling this feature/cfg for targets where privileged instructions are obviously unavailable (e.g., Linux) will result in a compile error.
>   - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target supports privileged instructions but the build is rejected.
> - Enabling both this feature/cfg and the `critical-section` feature will result in a compile error.

</div>

- <a name="optional-cfg-no-outline-atomics"></a>**`portable_atomic_no_outline_atomics` cfg**<br>
  Disable dynamic dispatching by run-time CPU feature detection.

  Dynamic dispatching by run-time CPU feature detection allows maintaining support for older CPUs while using features that are not supported on older CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).

  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).

<div class="rustdoc-alert rustdoc-alert-note">

> **ⓘ Note**
>
> - If the required target features are enabled at compile-time, dynamic dispatching is automatically disabled and the atomic operations are inlined.
> - This is compatible with no-std (as with all features except `std`).
> - On some targets, run-time detection is disabled by default, mainly for compatibility with incomplete build environments or because support for it is experimental, and can be enabled by the `portable_atomic_outline_atomics` cfg. (When both cfgs are enabled, the `*_no_*` cfg is preferred.)
> - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (However, portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)
> - Dynamic detection is currently only supported on x86_64, AArch64, Arm, RISC-V, Arm64EC, and powerpc64. Enabling this cfg for unsupported architectures will result in a compile error.

</div>

## Related Projects

- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
- [atomic-memcpy]: Byte-wise atomic memcpy.

[#60]: https://github.com/taiki-e/portable-atomic/issues/60
[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
[critical-section]: https://github.com/rust-embedded/critical-section
[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
[serde]: https://github.com/serde-rs/serde

<!-- tidy:sync-markdown-to-rustdoc:end -->
*/

#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing a public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    // Code outside of cfg(feature = "float") shouldn't use float.
    clippy::float_arithmetic,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![cfg_attr(portable_atomic_no_strict_provenance, allow(unstable_name_collisions))]
#![allow(clippy::inline_always, clippy::used_underscore_items)]
// asm_experimental_arch
// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
// On tier 2 platforms (powerpc64), we use a cfg set by the build script to
// determine whether this feature is available or not.
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
        ),
    ),
    feature(asm_experimental_arch)
)]
// f16/f128
// cfg is unstable and explicitly enabled by the user
#![cfg_attr(portable_atomic_unstable_f16, feature(f16))]
#![cfg_attr(portable_atomic_unstable_f128, feature(f128))]
// Old nightly only
// These features are already stabilized or have already been removed from compilers,
// and can safely be enabled for old nightly as long as version detection works.
// - cfg(target_has_atomic)
// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x
// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
// This also helps us test that our assembly code works with the minimum external
// LLVM version of the first rustc version in which inline assembly was stabilized.
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm_experimental_arch,
        any(target_arch = "arm64ec", target_arch = "s390x"),
    ),
    feature(asm_experimental_arch)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(test, portable_atomic_unsafe_assume_single_core),
        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
        not(target_has_atomic = "ptr"),
    ),
    feature(isa_attribute)
)]
// Miri and/or ThreadSanitizer only
// They do not support inline assembly, so we need to use unstable features instead.
// Since they require nightly compilers anyway, we can use the unstable features.
// This is not an ideal situation, but it is still better than always using lock-based
// fallback and causing memory ordering problems to be missed by these checkers.
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    allow(internal_features)
)]
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
// docs.rs only (cfg is enabled by docs.rs, not build script)
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, doc(auto_cfg = false))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
            portable_atomic_unsafe_assume_single_core,
        )),
    ),
    allow(unused_imports, unused_macros, clippy::unused_trait_names)
)]

// There are currently no 128-bit or higher builtin targets.
// (Although some of our generic code is written with the future
// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

// Reject unsupported architectures.
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not supported yet on this architecture;\n\
     if you need unsafe-assume-single-core support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>"
);
// Reject targets where privileged instructions are obviously unavailable.
// TODO: Some embedded OSes should probably be accepted here.
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
))]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(all(not(portable_atomic_no_atomic_cas), not(target_os = "none")))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(all(target_has_atomic = "ptr", not(target_os = "none")))
)]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not compatible with targets where privileged instructions are obviously unavailable;\n\
     if you need unsafe-assume-single-core support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>\n\
     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
);

#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm",
    target_arch = "arm64ec",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "x86_64",
)))]
compile_error!("`portable_atomic_no_outline_atomics` cfg is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
)))]
compile_error!("`portable_atomic_outline_atomics` cfg is not compatible with this target");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on pre-v6 Arm"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);

#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
compile_error!(
    "you may not enable the `critical-section` feature and the `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) at the same time"
);

#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
    consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg).\n\
    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);

#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
#[cfg(target_pointer_width = "16")]
pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "128")]
pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};

#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

// LLVM doesn't support fence/compiler_fence for MSP430.
#[cfg(target_arch = "msp430")]
pub use self::imp::msp430::{compiler_fence, fence};
#[doc(no_inline)]
#[cfg(not(target_arch = "msp430"))]
pub use core::sync::atomic::{compiler_fence, fence};

mod imp;

pub mod hint {
    //! Re-export of the [`core::hint`] module.
    //!
    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available in all Rust versions that this crate supports.
    //!
    //! ```
    //! use portable_atomic::hint;
    //!
    //! hint::spin_loop();
    //! ```

    #[doc(no_inline)]
    pub use core::hint::*;

    /// Emits a machine instruction to signal the processor that it is running in
    /// a busy-wait spin-loop ("spin lock").
    ///
    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
    /// for example, saving power or switching hyper-threads.
    ///
    /// This function is different from [`thread::yield_now`] which directly
    /// yields to the system's scheduler, whereas `spin_loop` does not interact
    /// with the operating system.
    ///
    /// A common use case for `spin_loop` is implementing bounded optimistic
    /// spinning in a CAS loop in synchronization primitives. To avoid problems
    /// like priority inversion, it is strongly recommended that the spin loop is
    /// terminated after a finite amount of iterations and an appropriate blocking
    /// syscall is made.
    ///
    /// **Note:** On platforms that do not support receiving spin-loop hints this
    /// function does not do anything at all.
    ///
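    /// # Examples
    ///
    /// A sketch of bounded optimistic spinning before falling back to blocking
    /// (the function and iteration bound below are illustrative):
    ///
    /// ```
    /// use portable_atomic::{hint, AtomicBool, Ordering};
    ///
    /// fn try_lock_spinning(locked: &AtomicBool) -> bool {
    ///     // Keep the spin loop bounded, as recommended above.
    ///     for _ in 0..100 {
    ///         if locked
    ///             .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
    ///             .is_ok()
    ///         {
    ///             return true;
    ///         }
    ///         hint::spin_loop();
    ///     }
    ///     // The caller should now fall back to an appropriate blocking syscall.
    ///     false
    /// }
    ///
    /// let locked = AtomicBool::new(false);
    /// assert!(try_lock_spinning(&locked));
    /// ```
    ///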
    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
    #[inline]
    pub fn spin_loop() {
        #[allow(deprecated)]
        core::sync::atomic::spin_loop_hint();
    }
}

#[cfg(doc)]
use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use core::{fmt, ptr};

#[cfg(portable_atomic_no_strict_provenance)]
#[cfg(miri)]
use self::utils::ptr::PtrExt as _;

cfg_has_atomic_8! {
/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// If the compiler and the platform support atomic loads and stores of `u8`,
/// this type is a wrapper for the standard library's
/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
/// but the compiler does not, atomic operations are implemented using inline
/// assembly.
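///
/// # Examples
///
/// A minimal sketch of using the type as a shared flag (the variable name is
/// illustrative):
///
/// ```
/// use portable_atomic::{AtomicBool, Ordering};
///
/// let ready = AtomicBool::new(false);
/// ready.store(true, Ordering::Release);
/// assert!(ready.load(Ordering::Acquire));
/// ```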
#[repr(C, align(1))]
pub struct AtomicBool {
    v: core::cell::UnsafeCell<u8>,
}

impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

impl From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

// Send is implicitly implemented.
// SAFETY: any data races are prevented by disabling interrupts or
// atomic intrinsics (see module-level comments).
unsafe impl Sync for AtomicBool {}

// UnwindSafe is implicitly implemented.
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl core::panic::RefUnwindSafe for AtomicBool {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
impl std::panic::RefUnwindSafe for AtomicBool {}

impl_debug_and_serde!(AtomicBool);

impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[must_use]
    pub const fn new(v: bool) -> Self {
        static_assert_layout!(AtomicBool, bool);
        Self { v: core::cell::UnsafeCell::new(v as u8) }
    }

    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Creates a new `AtomicBool` from a pointer.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Safety
        ///
        /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
        ///   be bigger than `align_of::<bool>()`).
        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
        ///   value (or vice-versa).
        ///   * In other words, time periods where the value is accessed atomically may not overlap
        ///     with periods where the value is accessed non-atomically.
        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
        ///     from the same thread.
        /// * If this atomic type is *not* lock-free:
        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
        ///     with accesses via the returned value (or vice-versa).
        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
        ///     be compatible with operations performed by this atomic type.
        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
        ///   these are not supported by the memory model.
        ///
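        /// # Examples
        ///
        /// A minimal sketch, assuming exclusive access to the underlying `bool`
        /// while the returned reference is in use:
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut some_bool = true;
        /// let ptr: *mut bool = &mut some_bool;
        /// // SAFETY: `ptr` is aligned, valid for reads and writes, and not
        /// // accessed non-atomically while `atomic` is live.
        /// let atomic = unsafe { AtomicBool::from_ptr(ptr) };
        /// atomic.store(false, Ordering::Relaxed);
        /// assert!(!atomic.load(Ordering::Relaxed));
        /// ```
        ///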
        /// [valid]: core::ptr#safety
        #[inline]
        #[must_use]
        pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
            #[allow(clippy::cast_ptr_alignment)]
            // SAFETY: guaranteed by the caller
            unsafe { &*(ptr as *mut Self) }
        }
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let is_lock_free = AtomicBool::is_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub fn is_lock_free() -> bool {
        imp::AtomicU8::is_lock_free()
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
    /// this type may be lock-free even if the function returns false.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub const fn is_always_lock_free() -> bool {
        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
    }
    #[cfg(test)]
    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
        /// Returns a mutable reference to the underlying [`bool`].
        ///
        /// This is safe because the mutable reference guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.83+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let mut some_bool = AtomicBool::new(true);
        /// assert_eq!(*some_bool.get_mut(), true);
        /// *some_bool.get_mut() = false;
        /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
        /// ```
        #[inline]
        pub const fn get_mut(&mut self) -> &mut bool {
            // SAFETY: the mutable reference guarantees unique ownership.
            unsafe { &mut *self.as_ptr() }
        }
    }

    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
    // https://github.com/rust-lang/rust/issues/76314

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
        /// Consumes the atomic and returns the contained value.
        ///
        /// This is safe because passing `self` by value guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.56+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::AtomicBool;
        ///
        /// let some_bool = AtomicBool::new(true);
        /// assert_eq!(some_bool.into_inner(), true);
        /// ```
        #[inline]
        pub const fn into_inner(self) -> bool {
            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
            // so they can be safely transmuted.
            // (const UnsafeCell::into_inner is unstable)
            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
        }
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn load(&self, order: Ordering) -> bool {
        self.as_atomic_u8().load(order) != 0
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn store(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().store(val as u8, order);
    }

    cfg_has_atomic_cas_or_amo32! {
    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            self.as_atomic_u8().swap(val as u8, order) != 0
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
    ///     Ok(true)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
    ///     Err(false)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            crate::utils::assert_compare_exchange_ordering(success, failure);
            let order = crate::utils::upgrade_success_ordering(success, failure);
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        ))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
            // https://godbolt.org/z/ofbGGdx44
            self.compare_exchange(current, new, success, failure)
        }
        #[cfg(not(any(
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "loongarch32",
            target_arch = "loongarch64",
        )))]
        {
            match self
                .as_atomic_u8()
                .compare_exchange_weak(current as u8, new as u8, success, failure)
            {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_and(val as u8, order) != 0
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_and`, this does not return the previous value.
    ///
    /// `and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_and` on some platforms.
    ///
    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
    /// - MSP430: `and` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn and(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().and(val as u8, order);
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L973-L985
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_or(val as u8, order) != 0
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Unlike `fetch_or`, this does not return the previous value.
    ///
    /// `or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_or` on some platforms.
    ///
    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
    /// - MSP430: `bis` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn or(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().or(val as u8, order);
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
1282    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1283    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1284    ///
1285    /// let foo = AtomicBool::new(false);
1286    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1287    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1288    /// ```
1289    #[inline]
1290    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1291    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1292        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1293    }
1294
1295    /// Logical "xor" with a boolean value.
1296    ///
1297    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1298    /// the new value to the result.
1299    ///
1300    /// Unlike `fetch_xor`, this does not return the previous value.
1301    ///
1302    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1303    /// of this operation. All ordering modes are possible. Note that using
1304    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1305    /// using [`Release`] makes the load part [`Relaxed`].
1306    ///
1307    /// This function may generate more efficient code than `fetch_xor` on some platforms.
1308    ///
1309    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1310    /// - MSP430: `xor` instead of disabling interrupts
1311    ///
1312    /// Note: On x86/x86_64, the use of either function should not usually
1313    /// affect the generated code, because LLVM can properly optimize the case
1314    /// where the result is unused.
1315    ///
1316    /// # Examples
1317    ///
1318    /// ```
1319    /// use portable_atomic::{AtomicBool, Ordering};
1320    ///
1321    /// let foo = AtomicBool::new(true);
1322    /// foo.xor(false, Ordering::SeqCst);
1323    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1324    ///
1325    /// let foo = AtomicBool::new(true);
1326    /// foo.xor(true, Ordering::SeqCst);
1327    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1328    ///
1329    /// let foo = AtomicBool::new(false);
1330    /// foo.xor(false, Ordering::SeqCst);
1331    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1332    /// ```
1333    #[inline]
1334    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1335    pub fn xor(&self, val: bool, order: Ordering) {
1336        self.as_atomic_u8().xor(val as u8, order);
1337    }
1338
    /// Logical "not" on the current value.
1340    ///
1341    /// Performs a logical "not" operation on the current value, and sets
1342    /// the new value to the result.
1343    ///
1344    /// Returns the previous value.
1345    ///
1346    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1347    /// of this operation. All ordering modes are possible. Note that using
1348    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1349    /// using [`Release`] makes the load part [`Relaxed`].
1350    ///
1351    /// # Examples
1352    ///
1353    /// ```
1354    /// use portable_atomic::{AtomicBool, Ordering};
1355    ///
1356    /// let foo = AtomicBool::new(true);
1357    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1358    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1359    ///
1360    /// let foo = AtomicBool::new(false);
1361    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1362    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1363    /// ```
1364    #[inline]
1365    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1366    pub fn fetch_not(&self, order: Ordering) -> bool {
1367        self.fetch_xor(true, order)
1368    }
1369
    /// Logical "not" on the current value.
1371    ///
1372    /// Performs a logical "not" operation on the current value, and sets
1373    /// the new value to the result.
1374    ///
1375    /// Unlike `fetch_not`, this does not return the previous value.
1376    ///
1377    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1378    /// of this operation. All ordering modes are possible. Note that using
1379    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1380    /// using [`Release`] makes the load part [`Relaxed`].
1381    ///
1382    /// This function may generate more efficient code than `fetch_not` on some platforms.
1383    ///
1384    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1385    /// - MSP430: `xor` instead of disabling interrupts
1386    ///
1387    /// Note: On x86/x86_64, the use of either function should not usually
1388    /// affect the generated code, because LLVM can properly optimize the case
1389    /// where the result is unused.
1390    ///
1391    /// # Examples
1392    ///
1393    /// ```
1394    /// use portable_atomic::{AtomicBool, Ordering};
1395    ///
1396    /// let foo = AtomicBool::new(true);
1397    /// foo.not(Ordering::SeqCst);
1398    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1399    ///
1400    /// let foo = AtomicBool::new(false);
1401    /// foo.not(Ordering::SeqCst);
1402    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1403    /// ```
1404    #[inline]
1405    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1406    pub fn not(&self, order: Ordering) {
1407        self.xor(true, order);
1408    }
1409
1410    /// Fetches the value, and applies a function to it that returns an optional
1411    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1412    /// returned `Some(_)`, else `Err(previous_value)`.
1413    ///
1414    /// Note: This may call the function multiple times if the value has been
1415    /// changed from other threads in the meantime, as long as the function
1416    /// returns `Some(_)`, but the function will have been applied only once to
1417    /// the stored value.
1418    ///
1419    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1420    /// ordering of this operation. The first describes the required ordering for
1421    /// when the operation finally succeeds while the second describes the
1422    /// required ordering for loads. These correspond to the success and failure
1423    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1424    ///
1425    /// Using [`Acquire`] as success ordering makes the store part of this
1426    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1427    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1428    /// [`Acquire`] or [`Relaxed`].
1429    ///
1430    /// # Considerations
1431    ///
1432    /// This method is not magic; it is not provided by the hardware.
1433    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1434    /// and suffers from the same drawbacks.
1435    /// In particular, this method will not circumvent the [ABA Problem].
1436    ///
1437    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1438    ///
1439    /// # Panics
1440    ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1442    ///
1443    /// # Examples
1444    ///
1445    /// ```
1446    /// use portable_atomic::{AtomicBool, Ordering};
1447    ///
1448    /// let x = AtomicBool::new(false);
1449    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1450    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1451    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1452    /// assert_eq!(x.load(Ordering::SeqCst), false);
1453    /// ```
1454    #[inline]
1455    #[cfg_attr(
1456        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1457        track_caller
1458    )]
1459    pub fn fetch_update<F>(
1460        &self,
1461        set_order: Ordering,
1462        fetch_order: Ordering,
1463        mut f: F,
1464    ) -> Result<bool, bool>
1465    where
1466        F: FnMut(bool) -> Option<bool>,
1467    {
1468        let mut prev = self.load(fetch_order);
1469        while let Some(next) = f(prev) {
1470            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1471                x @ Ok(_) => return x,
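                // CAS failed, possibly spuriously; retry with the freshly observed value.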
1472                Err(next_prev) => prev = next_prev,
1473            }
1474        }
1475        Err(prev)
1476    }
1477    } // cfg_has_atomic_cas_or_amo32!
1478
1479    const_fn! {
        // This function is actually `const fn`-compatible on Rust 1.32+,
        // but is only made `const fn` on Rust 1.58+ to match the other atomic types.
1482        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1483        /// Returns a mutable pointer to the underlying [`bool`].
1484        ///
1485        /// Returning an `*mut` pointer from a shared reference to this atomic is
1486        /// safe because the atomic types work with interior mutability. Any use of
1487        /// the returned raw pointer requires an `unsafe` block and has to uphold
1488        /// the safety requirements. If there is concurrent access, note the following
1489        /// additional safety requirements:
1490        ///
1491        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1492        ///   operations on it must be atomic.
1493        /// - Otherwise, any concurrent operations on it must be compatible with
1494        ///   operations performed by this atomic type.
1495        ///
1496        /// This is `const fn` on Rust 1.58+.
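        ///
        /// # Examples
        ///
        /// A minimal single-threaded sketch; because there is no concurrent
        /// access here, the plain write through the returned pointer is sound:
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let a = AtomicBool::new(false);
        /// // SAFETY: no concurrent access to `a`, so a non-atomic write is allowed.
        /// unsafe { a.as_ptr().write(true) };
        /// assert_eq!(a.load(Ordering::SeqCst), true);
        /// ```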
1497        #[inline]
1498        pub const fn as_ptr(&self) -> *mut bool {
1499            self.v.get() as *mut bool
1500        }
1501    }
1502
1503    #[inline(always)]
1504    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1505        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1506        // and both access data in the same way.
1507        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1508    }
1509}
1510// See https://github.com/taiki-e/portable-atomic/issues/180
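// The stub methods below carry `where &'a Self: HasSwap`-style bounds that are
// never satisfiable, so calling a CAS-requiring method on a target without
// atomic CAS fails at compile time with an error naming the missing capability
// rather than a generic "method not found" (see the issue above for details).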
1511#[cfg(not(feature = "require-cas"))]
1512cfg_no_atomic_cas! {
1513#[doc(hidden)]
1514#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1515impl<'a> AtomicBool {
1516    cfg_no_atomic_cas_or_amo32! {
1517    #[inline]
1518    pub fn swap(&self, val: bool, order: Ordering) -> bool
1519    where
1520        &'a Self: HasSwap,
1521    {
1522        unimplemented!()
1523    }
1524    #[inline]
1525    pub fn compare_exchange(
1526        &self,
1527        current: bool,
1528        new: bool,
1529        success: Ordering,
1530        failure: Ordering,
1531    ) -> Result<bool, bool>
1532    where
1533        &'a Self: HasCompareExchange,
1534    {
1535        unimplemented!()
1536    }
1537    #[inline]
1538    pub fn compare_exchange_weak(
1539        &self,
1540        current: bool,
1541        new: bool,
1542        success: Ordering,
1543        failure: Ordering,
1544    ) -> Result<bool, bool>
1545    where
1546        &'a Self: HasCompareExchangeWeak,
1547    {
1548        unimplemented!()
1549    }
1550    #[inline]
1551    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1552    where
1553        &'a Self: HasFetchAnd,
1554    {
1555        unimplemented!()
1556    }
1557    #[inline]
1558    pub fn and(&self, val: bool, order: Ordering)
1559    where
1560        &'a Self: HasAnd,
1561    {
1562        unimplemented!()
1563    }
1564    #[inline]
1565    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1566    where
1567        &'a Self: HasFetchNand,
1568    {
1569        unimplemented!()
1570    }
1571    #[inline]
1572    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1573    where
1574        &'a Self: HasFetchOr,
1575    {
1576        unimplemented!()
1577    }
1578    #[inline]
1579    pub fn or(&self, val: bool, order: Ordering)
1580    where
1581        &'a Self: HasOr,
1582    {
1583        unimplemented!()
1584    }
1585    #[inline]
1586    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1587    where
1588        &'a Self: HasFetchXor,
1589    {
1590        unimplemented!()
1591    }
1592    #[inline]
1593    pub fn xor(&self, val: bool, order: Ordering)
1594    where
1595        &'a Self: HasXor,
1596    {
1597        unimplemented!()
1598    }
1599    #[inline]
1600    pub fn fetch_not(&self, order: Ordering) -> bool
1601    where
1602        &'a Self: HasFetchNot,
1603    {
1604        unimplemented!()
1605    }
1606    #[inline]
1607    pub fn not(&self, order: Ordering)
1608    where
1609        &'a Self: HasNot,
1610    {
1611        unimplemented!()
1612    }
1613    #[inline]
1614    pub fn fetch_update<F>(
1615        &self,
1616        set_order: Ordering,
1617        fetch_order: Ordering,
1618        f: F,
1619    ) -> Result<bool, bool>
1620    where
1621        F: FnMut(bool) -> Option<bool>,
1622        &'a Self: HasFetchUpdate,
1623    {
1624        unimplemented!()
1625    }
1626    } // cfg_no_atomic_cas_or_amo32!
1627}
1628} // cfg_no_atomic_cas!
1629} // cfg_has_atomic_8!
1630
1631cfg_has_atomic_ptr! {
1632/// A raw pointer type which can be safely shared between threads.
1633///
1634/// This type has the same in-memory representation as a `*mut T`.
1635///
1636/// If the compiler and the platform support atomic loads and stores of pointers,
1637/// this type is a wrapper for the standard library's
1638/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1639/// but the compiler does not, atomic operations are implemented using inline
1640/// assembly.
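///
/// # Examples
///
/// A minimal sketch of basic use (each operation is documented in more
/// detail on the individual methods):
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let mut value = 10;
/// let ptr = AtomicPtr::new(&mut value);
/// let loaded = ptr.load(Ordering::Relaxed);
/// assert_eq!(unsafe { *loaded }, 10);
/// ```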
// We could use #[repr(transparent)] here, but #[repr(C, align(N))]
// shows clearer docs.
1643#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1644#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1645#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1646#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1647pub struct AtomicPtr<T> {
1648    inner: imp::AtomicPtr<T>,
1649}
1650
1651impl<T> Default for AtomicPtr<T> {
1652    /// Creates a null `AtomicPtr<T>`.
1653    #[inline]
1654    fn default() -> Self {
1655        Self::new(ptr::null_mut())
1656    }
1657}
1658
1659impl<T> From<*mut T> for AtomicPtr<T> {
1660    #[inline]
1661    fn from(p: *mut T) -> Self {
1662        Self::new(p)
1663    }
1664}
1665
1666impl<T> fmt::Debug for AtomicPtr<T> {
1667    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1668    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1669        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1670        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1671    }
1672}
1673
1674impl<T> fmt::Pointer for AtomicPtr<T> {
1675    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1676    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1677        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1678        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1679    }
1680}
1681
1682// UnwindSafe is implicitly implemented.
1683#[cfg(not(portable_atomic_no_core_unwind_safe))]
1684impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1685#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1686impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1687
1688impl<T> AtomicPtr<T> {
1689    /// Creates a new `AtomicPtr`.
1690    ///
1691    /// # Examples
1692    ///
1693    /// ```
1694    /// use portable_atomic::AtomicPtr;
1695    ///
1696    /// let ptr = &mut 5;
1697    /// let atomic_ptr = AtomicPtr::new(ptr);
1698    /// ```
1699    #[inline]
1700    #[must_use]
1701    pub const fn new(p: *mut T) -> Self {
1702        static_assert_layout!(AtomicPtr<()>, *mut ());
1703        Self { inner: imp::AtomicPtr::new(p) }
1704    }
1705
1706    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1707    const_fn! {
1708        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1709        /// Creates a new `AtomicPtr` from a pointer.
1710        ///
1711        /// This is `const fn` on Rust 1.83+.
1712        ///
1713        /// # Safety
1714        ///
1715        /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1716        ///   can be bigger than `align_of::<*mut T>()`).
1717        /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1718        /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1719        ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1720        ///   value (or vice-versa).
1721        ///   * In other words, time periods where the value is accessed atomically may not overlap
1722        ///     with periods where the value is accessed non-atomically.
1723        ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1724        ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1725        ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1726        ///     from the same thread.
1727        /// * If this atomic type is *not* lock-free:
1728        ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1729        ///     with accesses via the returned value (or vice-versa).
1730        ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1731        ///     be compatible with operations performed by this atomic type.
1732        /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1733        ///   these are not supported by the memory model.
1734        ///
1735        /// [valid]: core::ptr#safety
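        ///
        /// # Examples
        ///
        /// A minimal sketch; the location is only ever accessed through the
        /// returned reference here, which trivially satisfies the safety
        /// requirements above:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut data = 10;
        /// let mut raw: *mut i32 = &mut data;
        /// let expected = raw;
        /// // SAFETY: `raw` is valid and sufficiently aligned, and is not accessed
        /// // non-atomically while `atomic` is in use.
        /// let atomic = unsafe { AtomicPtr::from_ptr(&mut raw) };
        /// assert_eq!(atomic.load(Ordering::Relaxed), expected);
        /// ```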
1736        #[inline]
1737        #[must_use]
1738        pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1739            #[allow(clippy::cast_ptr_alignment)]
1740            // SAFETY: guaranteed by the caller
1741            unsafe { &*(ptr as *mut Self) }
1742        }
1743    }
1744
1745    /// Returns `true` if operations on values of this type are lock-free.
1746    ///
1747    /// If the compiler or the platform doesn't support the necessary
1748    /// atomic instructions, global locks for every potentially
1749    /// concurrent atomic operation will be used.
1750    ///
1751    /// # Examples
1752    ///
1753    /// ```
1754    /// use portable_atomic::AtomicPtr;
1755    ///
1756    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1757    /// ```
1758    #[inline]
1759    #[must_use]
1760    pub fn is_lock_free() -> bool {
1761        <imp::AtomicPtr<T>>::is_lock_free()
1762    }
1763
    /// Returns `true` if operations on values of this type are always lock-free.
1765    ///
1766    /// If the compiler or the platform doesn't support the necessary
1767    /// atomic instructions, global locks for every potentially
1768    /// concurrent atomic operation will be used.
1769    ///
1770    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1771    /// this type may be lock-free even if the function returns false.
1772    ///
1773    /// # Examples
1774    ///
1775    /// ```
1776    /// use portable_atomic::AtomicPtr;
1777    ///
1778    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1779    /// ```
1780    #[inline]
1781    #[must_use]
1782    pub const fn is_always_lock_free() -> bool {
1783        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1784    }
1785    #[cfg(test)]
1786    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1787
1788    const_fn! {
1789        const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1790        /// Returns a mutable reference to the underlying pointer.
1791        ///
1792        /// This is safe because the mutable reference guarantees that no other threads are
1793        /// concurrently accessing the atomic data.
1794        ///
1795        /// This is `const fn` on Rust 1.83+.
1796        ///
1797        /// # Examples
1798        ///
1799        /// ```
1800        /// use portable_atomic::{AtomicPtr, Ordering};
1801        ///
1802        /// let mut data = 10;
1803        /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1804        /// let mut other_data = 5;
1805        /// *atomic_ptr.get_mut() = &mut other_data;
1806        /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1807        /// ```
1808        #[inline]
1809        pub const fn get_mut(&mut self) -> &mut *mut T {
1810            // SAFETY: the mutable reference guarantees unique ownership.
1811            // (core::sync::atomic::Atomic*::get_mut is not const yet)
1812            unsafe { &mut *self.as_ptr() }
1813        }
1814    }
1815
1816    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1817    // https://github.com/rust-lang/rust/issues/76314
1818
1819    const_fn! {
1820        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1821        /// Consumes the atomic and returns the contained value.
1822        ///
1823        /// This is safe because passing `self` by value guarantees that no other threads are
1824        /// concurrently accessing the atomic data.
1825        ///
1826        /// This is `const fn` on Rust 1.56+.
1827        ///
1828        /// # Examples
1829        ///
1830        /// ```
1831        /// use portable_atomic::AtomicPtr;
1832        ///
1833        /// let mut data = 5;
1834        /// let atomic_ptr = AtomicPtr::new(&mut data);
1835        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1836        /// ```
1837        #[inline]
1838        pub const fn into_inner(self) -> *mut T {
1839            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1840            // so they can be safely transmuted.
1841            // (const UnsafeCell::into_inner is unstable)
1842            unsafe { core::mem::transmute(self) }
1843        }
1844    }
1845
1846    /// Loads a value from the pointer.
1847    ///
1848    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1849    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1850    ///
1851    /// # Panics
1852    ///
1853    /// Panics if `order` is [`Release`] or [`AcqRel`].
1854    ///
1855    /// # Examples
1856    ///
1857    /// ```
1858    /// use portable_atomic::{AtomicPtr, Ordering};
1859    ///
1860    /// let ptr = &mut 5;
1861    /// let some_ptr = AtomicPtr::new(ptr);
1862    ///
1863    /// let value = some_ptr.load(Ordering::Relaxed);
1864    /// ```
1865    #[inline]
1866    #[cfg_attr(
1867        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1868        track_caller
1869    )]
1870    pub fn load(&self, order: Ordering) -> *mut T {
1871        self.inner.load(order)
1872    }
1873
1874    /// Stores a value into the pointer.
1875    ///
1876    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1877    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1878    ///
1879    /// # Panics
1880    ///
1881    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1882    ///
1883    /// # Examples
1884    ///
1885    /// ```
1886    /// use portable_atomic::{AtomicPtr, Ordering};
1887    ///
1888    /// let ptr = &mut 5;
1889    /// let some_ptr = AtomicPtr::new(ptr);
1890    ///
1891    /// let other_ptr = &mut 10;
1892    ///
1893    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1894    /// ```
1895    #[inline]
1896    #[cfg_attr(
1897        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1898        track_caller
1899    )]
1900    pub fn store(&self, ptr: *mut T, order: Ordering) {
1901        self.inner.store(ptr, order);
1902    }
1903
1904    cfg_has_atomic_cas_or_amo32! {
1905    /// Stores a value into the pointer, returning the previous value.
1906    ///
1907    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1908    /// of this operation. All ordering modes are possible. Note that using
1909    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1910    /// using [`Release`] makes the load part [`Relaxed`].
1911    ///
1912    /// # Examples
1913    ///
1914    /// ```
1915    /// use portable_atomic::{AtomicPtr, Ordering};
1916    ///
1917    /// let ptr = &mut 5;
1918    /// let some_ptr = AtomicPtr::new(ptr);
1919    ///
1920    /// let other_ptr = &mut 10;
1921    ///
1922    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1923    /// ```
1924    #[inline]
1925    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1926    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1927        self.inner.swap(ptr, order)
1928    }
1929
1930    cfg_has_atomic_cas! {
1931    /// Stores a value into the pointer if the current value is the same as the `current` value.
1932    ///
1933    /// The return value is a result indicating whether the new value was written and containing
1934    /// the previous value. On success this value is guaranteed to be equal to `current`.
1935    ///
1936    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1937    /// ordering of this operation. `success` describes the required ordering for the
1938    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1939    /// `failure` describes the required ordering for the load operation that takes place when
1940    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1941    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1942    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1943    ///
1944    /// # Panics
1945    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1947    ///
1948    /// # Examples
1949    ///
1950    /// ```
1951    /// use portable_atomic::{AtomicPtr, Ordering};
1952    ///
1953    /// let ptr = &mut 5;
1954    /// let some_ptr = AtomicPtr::new(ptr);
1955    ///
1956    /// let other_ptr = &mut 10;
1957    ///
1958    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1959    /// ```
1960    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1961    #[inline]
1962    #[cfg_attr(
1963        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1964        track_caller
1965    )]
1966    pub fn compare_exchange(
1967        &self,
1968        current: *mut T,
1969        new: *mut T,
1970        success: Ordering,
1971        failure: Ordering,
1972    ) -> Result<*mut T, *mut T> {
1973        self.inner.compare_exchange(current, new, success, failure)
1974    }
1975
1976    /// Stores a value into the pointer if the current value is the same as the `current` value.
1977    ///
1978    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1979    /// comparison succeeds, which can result in more efficient code on some platforms. The
1980    /// return value is a result indicating whether the new value was written and containing the
1981    /// previous value.
1982    ///
1983    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1984    /// ordering of this operation. `success` describes the required ordering for the
1985    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1986    /// `failure` describes the required ordering for the load operation that takes place when
1987    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1988    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1989    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1990    ///
1991    /// # Panics
1992    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1994    ///
1995    /// # Examples
1996    ///
1997    /// ```
1998    /// use portable_atomic::{AtomicPtr, Ordering};
1999    ///
2000    /// let some_ptr = AtomicPtr::new(&mut 5);
2001    ///
2002    /// let new = &mut 10;
2003    /// let mut old = some_ptr.load(Ordering::Relaxed);
2004    /// loop {
2005    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2006    ///         Ok(_) => break,
2007    ///         Err(x) => old = x,
2008    ///     }
2009    /// }
2010    /// ```
2011    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2012    #[inline]
2013    #[cfg_attr(
2014        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2015        track_caller
2016    )]
2017    pub fn compare_exchange_weak(
2018        &self,
2019        current: *mut T,
2020        new: *mut T,
2021        success: Ordering,
2022        failure: Ordering,
2023    ) -> Result<*mut T, *mut T> {
2024        self.inner.compare_exchange_weak(current, new, success, failure)
2025    }
2026
2027    /// Fetches the value, and applies a function to it that returns an optional
2028    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2029    /// returned `Some(_)`, else `Err(previous_value)`.
2030    ///
2031    /// Note: This may call the function multiple times if the value has been
2032    /// changed from other threads in the meantime, as long as the function
2033    /// returns `Some(_)`, but the function will have been applied only once to
2034    /// the stored value.
2035    ///
2036    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2037    /// ordering of this operation. The first describes the required ordering for
2038    /// when the operation finally succeeds while the second describes the
2039    /// required ordering for loads. These correspond to the success and failure
2040    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
2041    ///
2042    /// Using [`Acquire`] as success ordering makes the store part of this
2043    /// operation [`Relaxed`], and using [`Release`] makes the final successful
2044    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2045    /// [`Acquire`] or [`Relaxed`].
2046    ///
2047    /// # Panics
2048    ///
    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
2050    ///
2051    /// # Considerations
2052    ///
2053    /// This method is not magic; it is not provided by the hardware.
2054    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
2055    /// and suffers from the same drawbacks.
2056    /// In particular, this method will not circumvent the [ABA Problem].
2057    ///
2058    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2059    ///
2060    /// # Examples
2061    ///
2062    /// ```
2063    /// use portable_atomic::{AtomicPtr, Ordering};
2064    ///
2065    /// let ptr: *mut _ = &mut 5;
2066    /// let some_ptr = AtomicPtr::new(ptr);
2067    ///
2068    /// let new: *mut _ = &mut 10;
2069    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2070    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2071    ///     if x == ptr {
2072    ///         Some(new)
2073    ///     } else {
2074    ///         None
2075    ///     }
2076    /// });
2077    /// assert_eq!(result, Ok(ptr));
2078    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2079    /// ```
2080    #[inline]
2081    #[cfg_attr(
2082        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2083        track_caller
2084    )]
2085    pub fn fetch_update<F>(
2086        &self,
2087        set_order: Ordering,
2088        fetch_order: Ordering,
2089        mut f: F,
2090    ) -> Result<*mut T, *mut T>
2091    where
2092        F: FnMut(*mut T) -> Option<*mut T>,
2093    {
2094        let mut prev = self.load(fetch_order);
2095        while let Some(next) = f(prev) {
2096            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2097                x @ Ok(_) => return x,
2098                Err(next_prev) => prev = next_prev,
2099            }
2100        }
2101        Err(prev)
2102    }
2103
2104    #[cfg(miri)]
2105    #[inline]
2106    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2107    fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
2108    where
2109        F: FnMut(*mut T) -> *mut T,
2110    {
2111        // This is a private function and all instances of `f` only operate on the value
2112        // loaded, so there is no need to synchronize the first load/failed CAS.
2113        let mut prev = self.load(Ordering::Relaxed);
2114        loop {
2115            let next = f(prev);
2116            match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
2117                Ok(x) => return x,
2118                Err(next_prev) => prev = next_prev,
2119            }
2120        }
2121    }
2122    } // cfg_has_atomic_cas!
2123
2124    /// Offsets the pointer's address by adding `val` (in units of `T`),
2125    /// returning the previous pointer.
2126    ///
2127    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2128    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2129    ///
2130    /// This method operates in units of `T`, which means that it cannot be used
2131    /// to offset the pointer by an amount which is not a multiple of
2132    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2133    /// work with a deliberately misaligned pointer. In such cases, you may use
2134    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2135    ///
2136    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2137    /// memory ordering of this operation. All ordering modes are possible. Note
2138    /// that using [`Acquire`] makes the store part of this operation
2139    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2140    ///
2141    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2142    ///
2143    /// # Examples
2144    ///
2145    /// ```
2146    /// # #![allow(unstable_name_collisions)]
2147    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2148    /// use portable_atomic::{AtomicPtr, Ordering};
2149    ///
2150    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2151    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2152    /// // Note: units of `size_of::<i64>()`.
2153    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2154    /// ```
2155    #[inline]
2156    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2157    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2158        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2159    }
2160
2161    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2162    /// returning the previous pointer.
2163    ///
2164    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2165    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2166    ///
2167    /// This method operates in units of `T`, which means that it cannot be used
2168    /// to offset the pointer by an amount which is not a multiple of
2169    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2170    /// work with a deliberately misaligned pointer. In such cases, you may use
2171    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2172    ///
2173    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2174    /// ordering of this operation. All ordering modes are possible. Note that
2175    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2176    /// and using [`Release`] makes the load part [`Relaxed`].
2177    ///
2178    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2179    ///
2180    /// # Examples
2181    ///
2182    /// ```
2183    /// use portable_atomic::{AtomicPtr, Ordering};
2184    ///
2185    /// let array = [1i32, 2i32];
2186    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2187    ///
2188    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2189    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2190    /// ```
2191    #[inline]
2192    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2193    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2194        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2195    }
2196
2197    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2198    /// previous pointer.
2199    ///
2200    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2201    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2202    ///
2203    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2204    /// memory ordering of this operation. All ordering modes are possible. Note
2205    /// that using [`Acquire`] makes the store part of this operation
2206    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2207    ///
2208    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2209    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2210    ///
2211    /// # Examples
2212    ///
2213    /// ```
2214    /// # #![allow(unstable_name_collisions)]
2215    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2216    /// use portable_atomic::{AtomicPtr, Ordering};
2217    ///
2218    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2219    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2220    /// // Note: in units of bytes, not `size_of::<i64>()`.
2221    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2222    /// ```
2223    #[inline]
2224    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2225    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2226        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2227        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2228        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2229        // compatible and is sound.
2230        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2231        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2232        #[cfg(miri)]
2233        {
2234            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_add(val)))
2235        }
2236        #[cfg(not(miri))]
2237        {
2238            crate::utils::ptr::with_exposed_provenance_mut(
2239                self.as_atomic_usize().fetch_add(val, order)
2240            )
2241        }
2242    }
2243
2244    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2245    /// previous pointer.
2246    ///
2247    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2248    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2249    ///
2250    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2251    /// memory ordering of this operation. All ordering modes are possible. Note
2252    /// that using [`Acquire`] makes the store part of this operation
2253    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2254    ///
2255    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2256    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2257    ///
2258    /// # Examples
2259    ///
2260    /// ```
2261    /// # #![allow(unstable_name_collisions)]
2262    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2263    /// use portable_atomic::{AtomicPtr, Ordering};
2264    ///
2265    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2266    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2267    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2268    /// ```
2269    #[inline]
2270    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2271    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2272        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2273        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2274        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2275        // compatible and is sound.
2276        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2277        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2278        #[cfg(miri)]
2279        {
2280            self.fetch_update_(order, |x| x.with_addr(x.addr().wrapping_sub(val)))
2281        }
2282        #[cfg(not(miri))]
2283        {
2284            crate::utils::ptr::with_exposed_provenance_mut(
2285                self.as_atomic_usize().fetch_sub(val, order)
2286            )
2287        }
2288    }
2289
2290    /// Performs a bitwise "or" operation on the address of the current pointer,
2291    /// and the argument `val`, and stores a pointer with provenance of the
2292    /// current pointer and the resulting address.
2293    ///
2294    /// This is equivalent to using [`map_addr`] to atomically perform
2295    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2296    /// pointer schemes to atomically set tag bits.
2297    ///
2298    /// **Caveat**: This operation returns the previous value. To compute the
2299    /// stored value without losing provenance, you may use [`map_addr`]. For
2300    /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2301    ///
2302    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2303    /// ordering of this operation. All ordering modes are possible. Note that
2304    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2305    /// and using [`Release`] makes the load part [`Relaxed`].
2306    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment; see the [module documentation for `ptr`][core::ptr] for
    /// details.
2310    ///
2311    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2312    ///
2313    /// # Examples
2314    ///
2315    /// ```
2316    /// # #![allow(unstable_name_collisions)]
2317    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2318    /// use portable_atomic::{AtomicPtr, Ordering};
2319    ///
2320    /// let pointer = &mut 3i64 as *mut i64;
2321    ///
2322    /// let atom = AtomicPtr::<i64>::new(pointer);
2323    /// // Tag the bottom bit of the pointer.
2324    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2325    /// // Extract and untag.
2326    /// let tagged = atom.load(Ordering::Relaxed);
2327    /// assert_eq!(tagged.addr() & 1, 1);
2328    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2329    /// ```
2330    #[inline]
2331    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2332    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2333        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2334        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2335        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2336        // compatible and is sound.
2337        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2338        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2339        #[cfg(miri)]
2340        {
2341            self.fetch_update_(order, |x| x.with_addr(x.addr() | val))
2342        }
2343        #[cfg(not(miri))]
2344        {
2345            crate::utils::ptr::with_exposed_provenance_mut(
2346                self.as_atomic_usize().fetch_or(val, order)
2347            )
2348        }
2349    }
2350
2351    /// Performs a bitwise "and" operation on the address of the current
2352    /// pointer, and the argument `val`, and stores a pointer with provenance of
2353    /// the current pointer and the resulting address.
2354    ///
2355    /// This is equivalent to using [`map_addr`] to atomically perform
2356    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2357    /// pointer schemes to atomically unset tag bits.
2358    ///
2359    /// **Caveat**: This operation returns the previous value. To compute the
2360    /// stored value without losing provenance, you may use [`map_addr`]. For
2361    /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2362    ///
2363    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2364    /// ordering of this operation. All ordering modes are possible. Note that
2365    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2366    /// and using [`Release`] makes the load part [`Relaxed`].
2367    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment; see the [module documentation for `ptr`][core::ptr] for
    /// details.
2371    ///
2372    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2373    ///
2374    /// # Examples
2375    ///
2376    /// ```
2377    /// # #![allow(unstable_name_collisions)]
2378    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2379    /// use portable_atomic::{AtomicPtr, Ordering};
2380    ///
2381    /// let pointer = &mut 3i64 as *mut i64;
2382    /// // A tagged pointer
2383    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2384    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2385    /// // Untag, and extract the previously tagged pointer.
2386    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2387    /// assert_eq!(untagged, pointer);
2388    /// ```
2389    #[inline]
2390    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2391    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2392        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2393        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2394        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2395        // compatible and is sound.
2396        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2397        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2398        #[cfg(miri)]
2399        {
2400            self.fetch_update_(order, |x| x.with_addr(x.addr() & val))
2401        }
2402        #[cfg(not(miri))]
2403        {
2404            crate::utils::ptr::with_exposed_provenance_mut(
2405                self.as_atomic_usize().fetch_and(val, order)
2406            )
2407        }
2408    }
2409
2410    /// Performs a bitwise "xor" operation on the address of the current
2411    /// pointer, and the argument `val`, and stores a pointer with provenance of
2412    /// the current pointer and the resulting address.
2413    ///
2414    /// This is equivalent to using [`map_addr`] to atomically perform
2415    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2416    /// pointer schemes to atomically toggle tag bits.
2417    ///
2418    /// **Caveat**: This operation returns the previous value. To compute the
2419    /// stored value without losing provenance, you may use [`map_addr`]. For
2420    /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2421    ///
2422    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2423    /// ordering of this operation. All ordering modes are possible. Note that
2424    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2425    /// and using [`Release`] makes the load part [`Relaxed`].
2426    ///
    /// This API and its claimed semantics are part of the Strict Provenance
    /// experiment; see the [module documentation for `ptr`][core::ptr] for
    /// details.
2430    ///
2431    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2432    ///
2433    /// # Examples
2434    ///
2435    /// ```
2436    /// # #![allow(unstable_name_collisions)]
2437    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2438    /// use portable_atomic::{AtomicPtr, Ordering};
2439    ///
2440    /// let pointer = &mut 3i64 as *mut i64;
2441    /// let atom = AtomicPtr::<i64>::new(pointer);
2442    ///
2443    /// // Toggle a tag bit on the pointer.
2444    /// atom.fetch_xor(1, Ordering::Relaxed);
2445    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2446    /// ```
2447    #[inline]
2448    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2449    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2450        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2451        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2452        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2453        // compatible and is sound.
2454        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2455        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2456        #[cfg(miri)]
2457        {
2458            self.fetch_update_(order, |x| x.with_addr(x.addr() ^ val))
2459        }
2460        #[cfg(not(miri))]
2461        {
2462            crate::utils::ptr::with_exposed_provenance_mut(
2463                self.as_atomic_usize().fetch_xor(val, order)
2464            )
2465        }
2466    }
2467
2468    /// Sets the bit at the specified bit-position to 1.
2469    ///
2470    /// Returns `true` if the specified bit was previously set to 1.
2471    ///
2472    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2473    /// of this operation. All ordering modes are possible. Note that using
2474    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2475    /// using [`Release`] makes the load part [`Relaxed`].
2476    ///
    /// This corresponds to x86's `lock bts`, and the implementation calls it on x86/x86_64.
2478    ///
2479    /// # Examples
2480    ///
2481    /// ```
2482    /// # #![allow(unstable_name_collisions)]
2483    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2484    /// use portable_atomic::{AtomicPtr, Ordering};
2485    ///
2486    /// let pointer = &mut 3i64 as *mut i64;
2487    ///
2488    /// let atom = AtomicPtr::<i64>::new(pointer);
2489    /// // Tag the bottom bit of the pointer.
2490    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2491    /// // Extract and untag.
2492    /// let tagged = atom.load(Ordering::Relaxed);
2493    /// assert_eq!(tagged.addr() & 1, 1);
2494    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2495    /// ```
2496    #[inline]
2497    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2498    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2499        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2500        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2501        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2502        // compatible and is sound.
2503        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2504        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2505        #[cfg(miri)]
2506        {
2507            let mask = 1_usize.wrapping_shl(bit);
2508            self.fetch_or(mask, order).addr() & mask != 0
2509        }
2510        #[cfg(not(miri))]
2511        {
2512            self.as_atomic_usize().bit_set(bit, order)
2513        }
2514    }
2515
2516    /// Clears the bit at the specified bit-position to 0.
2517    ///
2518    /// Returns `true` if the specified bit was previously set to 1.
2519    ///
2520    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2521    /// of this operation. All ordering modes are possible. Note that using
2522    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2523    /// using [`Release`] makes the load part [`Relaxed`].
2524    ///
2525    /// This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
2526    ///
2527    /// # Examples
2528    ///
2529    /// ```
2530    /// # #![allow(unstable_name_collisions)]
2531    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2532    /// use portable_atomic::{AtomicPtr, Ordering};
2533    ///
2534    /// let pointer = &mut 3i64 as *mut i64;
2535    /// // A tagged pointer
2536    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2537    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2538    /// // Untag
2539    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2540    /// ```
2541    #[inline]
2542    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2543    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2544        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2545        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2546        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2547        // compatible and is sound.
2548        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2549        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2550        #[cfg(miri)]
2551        {
2552            let mask = 1_usize.wrapping_shl(bit);
2553            self.fetch_and(!mask, order).addr() & mask != 0
2554        }
2555        #[cfg(not(miri))]
2556        {
2557            self.as_atomic_usize().bit_clear(bit, order)
2558        }
2559    }
2560
2561    /// Toggles the bit at the specified bit-position.
2562    ///
2563    /// Returns `true` if the specified bit was previously set to 1.
2564    ///
2565    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2566    /// of this operation. All ordering modes are possible. Note that using
2567    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2568    /// using [`Release`] makes the load part [`Relaxed`].
2569    ///
2570    /// This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
2571    ///
2572    /// # Examples
2573    ///
2574    /// ```
2575    /// # #![allow(unstable_name_collisions)]
2576    /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2577    /// use portable_atomic::{AtomicPtr, Ordering};
2578    ///
2579    /// let pointer = &mut 3i64 as *mut i64;
2580    /// let atom = AtomicPtr::<i64>::new(pointer);
2581    ///
2582    /// // Toggle a tag bit on the pointer.
2583    /// atom.bit_toggle(0, Ordering::Relaxed);
2584    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2585    /// ```
2586    #[inline]
2587    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2588    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2589        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2590        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2591        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2592        // compatible and is sound.
2593        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2594        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2595        #[cfg(miri)]
2596        {
2597            let mask = 1_usize.wrapping_shl(bit);
2598            self.fetch_xor(mask, order).addr() & mask != 0
2599        }
2600        #[cfg(not(miri))]
2601        {
2602            self.as_atomic_usize().bit_toggle(bit, order)
2603        }
2604    }
2605
2606    #[cfg(not(miri))]
2607    #[inline(always)]
2608    fn as_atomic_usize(&self) -> &AtomicUsize {
2609        static_assert!(
2610            core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2611        );
2612        static_assert!(
2613            core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2614        );
2615        // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2616        // and both access data in the same way.
2617        unsafe { &*(self as *const Self as *const AtomicUsize) }
2618    }
2619    } // cfg_has_atomic_cas_or_amo32!
2620
2621    const_fn! {
2622        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2623        /// Returns a mutable pointer to the underlying pointer.
2624        ///
2625        /// Returning an `*mut` pointer from a shared reference to this atomic is
2626        /// safe because the atomic types work with interior mutability. Any use of
2627        /// the returned raw pointer requires an `unsafe` block and has to uphold
2628        /// the safety requirements. If there is concurrent access, note the following
2629        /// additional safety requirements:
2630        ///
2631        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2632        ///   operations on it must be atomic.
2633        /// - Otherwise, any concurrent operations on it must be compatible with
2634        ///   operations performed by this atomic type.
2635        ///
2636        /// This is `const fn` on Rust 1.58+.
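        ///
        /// # Examples
        ///
        /// A minimal sketch; with no concurrent access, writing through the
        /// returned raw pointer is sound:
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut v = 5_i32;
        /// let atom = AtomicPtr::new(&mut v as *mut i32);
        /// let p: *mut *mut i32 = atom.as_ptr();
        /// // SAFETY: no other threads are accessing `atom`.
        /// unsafe { *p = core::ptr::null_mut() };
        /// assert!(atom.load(Ordering::Relaxed).is_null());
        /// ```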
2637        #[inline]
2638        pub const fn as_ptr(&self) -> *mut *mut T {
2639            self.inner.as_ptr()
2640        }
2641    }
2642}
2643// See https://github.com/taiki-e/portable-atomic/issues/180
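// The `Has*` bounds on the stubs below are intentionally unsatisfiable: on
// targets without atomic CAS, calls to these methods then fail with a tailored
// trait-resolution diagnostic (on Rust 1.78+, presumably emitted via
// `#[diagnostic::on_unimplemented]` on those traits) rather than a generic
// "method not found" error.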
2644#[cfg(not(feature = "require-cas"))]
2645cfg_no_atomic_cas! {
2646#[doc(hidden)]
2647#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2648impl<'a, T: 'a> AtomicPtr<T> {
2649    cfg_no_atomic_cas_or_amo32! {
2650    #[inline]
2651    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2652    where
2653        &'a Self: HasSwap,
2654    {
2655        unimplemented!()
2656    }
2657    } // cfg_no_atomic_cas_or_amo32!
2658    #[inline]
2659    pub fn compare_exchange(
2660        &self,
2661        current: *mut T,
2662        new: *mut T,
2663        success: Ordering,
2664        failure: Ordering,
2665    ) -> Result<*mut T, *mut T>
2666    where
2667        &'a Self: HasCompareExchange,
2668    {
2669        unimplemented!()
2670    }
2671    #[inline]
2672    pub fn compare_exchange_weak(
2673        &self,
2674        current: *mut T,
2675        new: *mut T,
2676        success: Ordering,
2677        failure: Ordering,
2678    ) -> Result<*mut T, *mut T>
2679    where
2680        &'a Self: HasCompareExchangeWeak,
2681    {
2682        unimplemented!()
2683    }
2684    #[inline]
2685    pub fn fetch_update<F>(
2686        &self,
2687        set_order: Ordering,
2688        fetch_order: Ordering,
2689        f: F,
2690    ) -> Result<*mut T, *mut T>
2691    where
2692        F: FnMut(*mut T) -> Option<*mut T>,
2693        &'a Self: HasFetchUpdate,
2694    {
2695        unimplemented!()
2696    }
2697    cfg_no_atomic_cas_or_amo32! {
2698    #[inline]
2699    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2700    where
2701        &'a Self: HasFetchPtrAdd,
2702    {
2703        unimplemented!()
2704    }
2705    #[inline]
2706    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2707    where
2708        &'a Self: HasFetchPtrSub,
2709    {
2710        unimplemented!()
2711    }
2712    #[inline]
2713    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2714    where
2715        &'a Self: HasFetchByteAdd,
2716    {
2717        unimplemented!()
2718    }
2719    #[inline]
2720    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2721    where
2722        &'a Self: HasFetchByteSub,
2723    {
2724        unimplemented!()
2725    }
2726    #[inline]
2727    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2728    where
2729        &'a Self: HasFetchOr,
2730    {
2731        unimplemented!()
2732    }
2733    #[inline]
2734    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2735    where
2736        &'a Self: HasFetchAnd,
2737    {
2738        unimplemented!()
2739    }
2740    #[inline]
2741    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2742    where
2743        &'a Self: HasFetchXor,
2744    {
2745        unimplemented!()
2746    }
2747    #[inline]
2748    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2749    where
2750        &'a Self: HasBitSet,
2751    {
2752        unimplemented!()
2753    }
2754    #[inline]
2755    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2756    where
2757        &'a Self: HasBitClear,
2758    {
2759        unimplemented!()
2760    }
2761    #[inline]
2762    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2763    where
2764        &'a Self: HasBitToggle,
2765    {
2766        unimplemented!()
2767    }
2768    } // cfg_no_atomic_cas_or_amo32!
2769}
2770} // cfg_no_atomic_cas!
2771} // cfg_has_atomic_ptr!
2772
2773macro_rules! atomic_int {
2774    // Atomic{I,U}* impls
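    // An illustrative invocation (hypothetical, for exposition only; the real
    // call sites are at the bottom of this file and pass the matching cfg
    // macro names):
    // atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);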
2775    ($atomic_type:ident, $int_type:ident, $align:literal,
2776        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2777        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2778    ) => {
2779        doc_comment! {
2780            concat!("An integer type which can be safely shared between threads.
2781
2782This type has the same in-memory representation as the underlying integer type,
2783[`", stringify!($int_type), "`].
2784
2785If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2786"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2787"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2788inline assembly. Otherwise, it synchronizes using global locks.
2789You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2790atomic instructions or locks will be used.
2791"
2792            ),
2793            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2794            // will show clearer docs.
2795            #[repr(C, align($align))]
2796            pub struct $atomic_type {
2797                inner: imp::$atomic_type,
2798            }
2799        }
2800
2801        impl Default for $atomic_type {
2802            #[inline]
2803            fn default() -> Self {
2804                Self::new($int_type::default())
2805            }
2806        }
2807
2808        impl From<$int_type> for $atomic_type {
2809            #[inline]
2810            fn from(v: $int_type) -> Self {
2811                Self::new(v)
2812            }
2813        }
2814
2815        // UnwindSafe is implicitly implemented.
2816        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2817        impl core::panic::RefUnwindSafe for $atomic_type {}
2818        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2819        impl std::panic::RefUnwindSafe for $atomic_type {}
2820
2821        impl_debug_and_serde!($atomic_type);
2822
2823        impl $atomic_type {
2824            doc_comment! {
2825                concat!(
2826                    "Creates a new atomic integer.
2827
2828# Examples
2829
2830```
2831use portable_atomic::", stringify!($atomic_type), ";
2832
2833let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2834```"
2835                ),
2836                #[inline]
2837                #[must_use]
2838                pub const fn new(v: $int_type) -> Self {
2839                    static_assert_layout!($atomic_type, $int_type);
2840                    Self { inner: imp::$atomic_type::new(v) }
2841                }
2842            }
2843
2844            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2845            #[cfg(not(portable_atomic_no_const_mut_refs))]
2846            doc_comment! {
2847                concat!("Creates a new reference to an atomic integer from a pointer.
2848
2849This is `const fn` on Rust 1.83+.
2850
2851# Safety
2852
2853* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2854  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2855* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2856* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2857  behind `ptr` must have a happens-before relationship with atomic accesses via
2858  the returned value (or vice-versa).
2859  * In other words, time periods where the value is accessed atomically may not
2860    overlap with periods where the value is accessed non-atomically.
2861  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2862    for the duration of lifetime `'a`. Most use cases should be able to follow
2863    this guideline.
2864  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2865    done from the same thread.
2866* If this atomic type is *not* lock-free:
2867  * Any accesses to the value behind `ptr` must have a happens-before relationship
2868    with accesses via the returned value (or vice-versa).
2869  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2870    be compatible with operations performed by this atomic type.
2871* This method must not be used to create overlapping or mixed-size atomic
2872  accesses, as these are not supported by the memory model.
2873
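# Examples

A minimal sketch of sound usage: the pointer is obtained from a live atomic of
the same type via [`as_ptr`](Self::as_ptr), so it is properly aligned, valid,
and only ever accessed atomically.

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let a = ", stringify!($atomic_type), "::new(1);
// SAFETY: the pointer comes from a live `", stringify!($atomic_type), "`, so it is
// properly aligned and valid, and all accesses through it are atomic.
let r = unsafe { ", stringify!($atomic_type), "::from_ptr(a.as_ptr()) };
r.store(2, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 2);
```
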
2874[valid]: core::ptr#safety"),
2875                #[inline]
2876                #[must_use]
2877                pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2878                    #[allow(clippy::cast_ptr_alignment)]
2879                    // SAFETY: guaranteed by the caller
2880                    unsafe { &*(ptr as *mut Self) }
2881                }
2882            }
2883            #[cfg(portable_atomic_no_const_mut_refs)]
2884            doc_comment! {
2885                concat!("Creates a new reference to an atomic integer from a pointer.
2886
2887This is `const fn` on Rust 1.83+.
2888
2889# Safety
2890
2891* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2892  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2893* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2894* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2895  behind `ptr` must have a happens-before relationship with atomic accesses via
2896  the returned value (or vice-versa).
2897  * In other words, time periods where the value is accessed atomically may not
2898    overlap with periods where the value is accessed non-atomically.
2899  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2900    for the duration of lifetime `'a`. Most use cases should be able to follow
2901    this guideline.
2902  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2903    done from the same thread.
2904* If this atomic type is *not* lock-free:
2905  * Any accesses to the value behind `ptr` must have a happens-before relationship
2906    with accesses via the returned value (or vice-versa).
2907  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2908    be compatible with operations performed by this atomic type.
2909* This method must not be used to create overlapping or mixed-size atomic
2910  accesses, as these are not supported by the memory model.
2911
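# Examples

A minimal sketch of sound usage: the pointer is obtained from a live atomic of
the same type via [`as_ptr`](Self::as_ptr), so it is properly aligned, valid,
and only ever accessed atomically.

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let a = ", stringify!($atomic_type), "::new(1);
// SAFETY: the pointer comes from a live `", stringify!($atomic_type), "`, so it is
// properly aligned and valid, and all accesses through it are atomic.
let r = unsafe { ", stringify!($atomic_type), "::from_ptr(a.as_ptr()) };
r.store(2, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 2);
```
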
2912[valid]: core::ptr#safety"),
2913                #[inline]
2914                #[must_use]
2915                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2916                    #[allow(clippy::cast_ptr_alignment)]
2917                    // SAFETY: guaranteed by the caller
2918                    unsafe { &*(ptr as *mut Self) }
2919                }
2920            }
2921
2922            doc_comment! {
2923                concat!("Returns `true` if operations on values of this type are lock-free.
2924
2925If the compiler or the platform doesn't support the necessary
2926atomic instructions, global locks for every potentially
2927concurrent atomic operation will be used.
2928
2929# Examples
2930
2931```
2932use portable_atomic::", stringify!($atomic_type), ";
2933
2934let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2935```"),
2936                #[inline]
2937                #[must_use]
2938                pub fn is_lock_free() -> bool {
2939                    <imp::$atomic_type>::is_lock_free()
2940                }
2941            }
2942
2943            doc_comment! {
2944                concat!("Returns `true` if operations on values of this type are always lock-free.
2945
2946If the compiler or the platform doesn't support the necessary
2947atomic instructions, global locks for every potentially
2948concurrent atomic operation will be used.
2949
2950**Note:** If the atomic operation relies on dynamic CPU feature detection,
2951this type may be lock-free even if the function returns false.
2952
2953# Examples
2954
2955```
2956use portable_atomic::", stringify!($atomic_type), ";
2957
2958const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2959```"),
2960                #[inline]
2961                #[must_use]
2962                pub const fn is_always_lock_free() -> bool {
2963                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2964                }
2965            }
2966            #[cfg(test)]
2967            #[cfg_attr(all(valgrind, target_arch = "powerpc64"), allow(dead_code))] // TODO: Hang (as of Valgrind 3.25)
2968            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2969
2970            #[cfg(not(portable_atomic_no_const_mut_refs))]
2971            doc_comment! {
2972                concat!("Returns a mutable reference to the underlying integer.\n
2973This is safe because the mutable reference guarantees that no other threads are
2974concurrently accessing the atomic data.
2975
2976This is `const fn` on Rust 1.83+.
2977
2978# Examples
2979
2980```
2981use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2982
2983let mut some_var = ", stringify!($atomic_type), "::new(10);
2984assert_eq!(*some_var.get_mut(), 10);
2985*some_var.get_mut() = 5;
2986assert_eq!(some_var.load(Ordering::SeqCst), 5);
2987```"),
2988                #[inline]
2989                pub const fn get_mut(&mut self) -> &mut $int_type {
2990                    // SAFETY: the mutable reference guarantees unique ownership.
2991                    // (core::sync::atomic::Atomic*::get_mut is not const yet)
2992                    unsafe { &mut *self.as_ptr() }
2993                }
2994            }
2995            #[cfg(portable_atomic_no_const_mut_refs)]
2996            doc_comment! {
2997                concat!("Returns a mutable reference to the underlying integer.\n
2998This is safe because the mutable reference guarantees that no other threads are
2999concurrently accessing the atomic data.
3000
3001This is `const fn` on Rust 1.83+.
3002
3003# Examples
3004
3005```
3006use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3007
3008let mut some_var = ", stringify!($atomic_type), "::new(10);
3009assert_eq!(*some_var.get_mut(), 10);
3010*some_var.get_mut() = 5;
3011assert_eq!(some_var.load(Ordering::SeqCst), 5);
3012```"),
3013                #[inline]
3014                pub fn get_mut(&mut self) -> &mut $int_type {
3015                    // SAFETY: the mutable reference guarantees unique ownership.
3016                    unsafe { &mut *self.as_ptr() }
3017                }
3018            }
3019
3020            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
3021            // https://github.com/rust-lang/rust/issues/76314
3022
3023            #[cfg(not(portable_atomic_no_const_transmute))]
3024            doc_comment! {
3025                concat!("Consumes the atomic and returns the contained value.
3026
3027This is safe because passing `self` by value guarantees that no other threads are
3028concurrently accessing the atomic data.
3029
3030This is `const fn` on Rust 1.56+.
3031
3032# Examples
3033
3034```
3035use portable_atomic::", stringify!($atomic_type), ";
3036
3037let some_var = ", stringify!($atomic_type), "::new(5);
3038assert_eq!(some_var.into_inner(), 5);
3039```"),
3040                #[inline]
3041                pub const fn into_inner(self) -> $int_type {
3042                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
3043                    // so they can be safely transmuted.
3044                    // (const UnsafeCell::into_inner is unstable)
3045                    unsafe { core::mem::transmute(self) }
3046                }
3047            }
3048            #[cfg(portable_atomic_no_const_transmute)]
3049            doc_comment! {
3050                concat!("Consumes the atomic and returns the contained value.
3051
3052This is safe because passing `self` by value guarantees that no other threads are
3053concurrently accessing the atomic data.
3054
3055This is `const fn` on Rust 1.56+.
3056
3057# Examples
3058
3059```
3060use portable_atomic::", stringify!($atomic_type), ";
3061
3062let some_var = ", stringify!($atomic_type), "::new(5);
3063assert_eq!(some_var.into_inner(), 5);
3064```"),
3065                #[inline]
3066                pub fn into_inner(self) -> $int_type {
3067                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
3068                    // so they can be safely transmuted.
3069                    // (const UnsafeCell::into_inner is unstable)
3070                    unsafe { core::mem::transmute(self) }
3071                }
3072            }
3073
3074            doc_comment! {
3075                concat!("Loads a value from the atomic integer.
3076
3077`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3078Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
3079
3080# Panics
3081
3082Panics if `order` is [`Release`] or [`AcqRel`].
3083
3084# Examples
3085
3086```
3087use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3088
3089let some_var = ", stringify!($atomic_type), "::new(5);
3090
3091assert_eq!(some_var.load(Ordering::Relaxed), 5);
3092```"),
3093                #[inline]
3094                #[cfg_attr(
3095                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3096                    track_caller
3097                )]
3098                pub fn load(&self, order: Ordering) -> $int_type {
3099                    self.inner.load(order)
3100                }
3101            }
3102
3103            doc_comment! {
3104                concat!("Stores a value into the atomic integer.
3105
3106`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3107Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3108
3109# Panics
3110
3111Panics if `order` is [`Acquire`] or [`AcqRel`].
3112
3113# Examples
3114
3115```
3116use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3117
3118let some_var = ", stringify!($atomic_type), "::new(5);
3119
3120some_var.store(10, Ordering::Relaxed);
3121assert_eq!(some_var.load(Ordering::Relaxed), 10);
3122```"),
3123                #[inline]
3124                #[cfg_attr(
3125                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3126                    track_caller
3127                )]
3128                pub fn store(&self, val: $int_type, order: Ordering) {
3129                    self.inner.store(val, order)
3130                }
3131            }
3132
3133            cfg_has_atomic_cas_or_amo32! {
3134            $cfg_has_atomic_cas_or_amo32_or_8! {
3135            doc_comment! {
3136                concat!("Stores a value into the atomic integer, returning the previous value.
3137
3138`swap` takes an [`Ordering`] argument which describes the memory ordering
3139of this operation. All ordering modes are possible. Note that using
3140[`Acquire`] makes the store part of this operation [`Relaxed`], and
3141using [`Release`] makes the load part [`Relaxed`].
3142
3143# Examples
3144
3145```
3146use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3147
3148let some_var = ", stringify!($atomic_type), "::new(5);
3149
3150assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3151```"),
3152                #[inline]
3153                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3154                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3155                    self.inner.swap(val, order)
3156                }
3157            }
3158            } // $cfg_has_atomic_cas_or_amo32_or_8!
3159
3160            cfg_has_atomic_cas! {
3161            doc_comment! {
3162                concat!("Stores a value into the atomic integer if the current value is the same as
3163the `current` value.
3164
3165The return value is a result indicating whether the new value was written and
3166containing the previous value. On success this value is guaranteed to be equal to
3167`current`.
3168
3169`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3170ordering of this operation. `success` describes the required ordering for the
3171read-modify-write operation that takes place if the comparison with `current` succeeds.
3172`failure` describes the required ordering for the load operation that takes place when
3173the comparison fails. Using [`Acquire`] as success ordering makes the store part
3174of this operation [`Relaxed`], and using [`Release`] makes the successful load
3175[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3176
3177# Panics
3178
3179Panics if `failure` is [`Release`] or [`AcqRel`].
3180
3181# Examples
3182
3183```
3184use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3185
3186let some_var = ", stringify!($atomic_type), "::new(5);
3187
3188assert_eq!(
3189    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3190    Ok(5),
3191);
3192assert_eq!(some_var.load(Ordering::Relaxed), 10);
3193
3194assert_eq!(
3195    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3196    Err(10),
3197);
3198assert_eq!(some_var.load(Ordering::Relaxed), 10);
3199```"),
3200                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3201                #[inline]
3202                #[cfg_attr(
3203                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3204                    track_caller
3205                )]
3206                pub fn compare_exchange(
3207                    &self,
3208                    current: $int_type,
3209                    new: $int_type,
3210                    success: Ordering,
3211                    failure: Ordering,
3212                ) -> Result<$int_type, $int_type> {
3213                    self.inner.compare_exchange(current, new, success, failure)
3214                }
3215            }
3216
3217            doc_comment! {
3218                concat!("Stores a value into the atomic integer if the current value is the same as
3219the `current` value.
3220Unlike [`compare_exchange`](Self::compare_exchange),
3221this function is allowed to spuriously fail even
3222when the comparison succeeds, which can result in more efficient code on some
3223platforms. The return value is a result indicating whether the new value was
3224written and containing the previous value.
3225
3226`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3227ordering of this operation. `success` describes the required ordering for the
3228read-modify-write operation that takes place if the comparison with `current` succeeds.
3229`failure` describes the required ordering for the load operation that takes place when
3230the comparison fails. Using [`Acquire`] as success ordering makes the store part
3231of this operation [`Relaxed`], and using [`Release`] makes the successful load
3232[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3233
3234# Panics
3235
3236Panics if `failure` is [`Release`] or [`AcqRel`].
3237
3238# Examples
3239
3240```
3241use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3242
3243let val = ", stringify!($atomic_type), "::new(4);
3244
3245let mut old = val.load(Ordering::Relaxed);
3246loop {
3247    let new = old * 2;
3248    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3249        Ok(_) => break,
3250        Err(x) => old = x,
3251    }
3252}
3253```"),
3254                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3255                #[inline]
3256                #[cfg_attr(
3257                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3258                    track_caller
3259                )]
3260                pub fn compare_exchange_weak(
3261                    &self,
3262                    current: $int_type,
3263                    new: $int_type,
3264                    success: Ordering,
3265                    failure: Ordering,
3266                ) -> Result<$int_type, $int_type> {
3267                    self.inner.compare_exchange_weak(current, new, success, failure)
3268                }
3269            }
3270            } // cfg_has_atomic_cas!
3271
3272            $cfg_has_atomic_cas_or_amo32_or_8! {
3273            doc_comment! {
3274                concat!("Adds to the current value, returning the previous value.
3275
3276This operation wraps around on overflow.
3277
3278`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3279of this operation. All ordering modes are possible. Note that using
3280[`Acquire`] makes the store part of this operation [`Relaxed`], and
3281using [`Release`] makes the load part [`Relaxed`].
3282
3283# Examples
3284
3285```
3286use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3287
3288let foo = ", stringify!($atomic_type), "::new(0);
3289assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3290assert_eq!(foo.load(Ordering::SeqCst), 10);
3291```"),
3292                #[inline]
3293                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3294                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3295                    self.inner.fetch_add(val, order)
3296                }
3297            }
3298
3299            doc_comment! {
3300                concat!("Adds to the current value.
3301
3302This operation wraps around on overflow.
3303
3304Unlike `fetch_add`, this does not return the previous value.
3305
3306`add` takes an [`Ordering`] argument which describes the memory ordering
3307of this operation. All ordering modes are possible. Note that using
3308[`Acquire`] makes the store part of this operation [`Relaxed`], and
3309using [`Release`] makes the load part [`Relaxed`].
3310
3311This function may generate more efficient code than `fetch_add` on some platforms.
3312
3313- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3314
3315# Examples
3316
3317```
3318use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3319
3320let foo = ", stringify!($atomic_type), "::new(0);
3321foo.add(10, Ordering::SeqCst);
3322assert_eq!(foo.load(Ordering::SeqCst), 10);
3323```"),
3324                #[inline]
3325                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3326                pub fn add(&self, val: $int_type, order: Ordering) {
3327                    self.inner.add(val, order);
3328                }
3329            }
3330
3331            doc_comment! {
3332                concat!("Subtracts from the current value, returning the previous value.
3333
3334This operation wraps around on overflow.
3335
3336`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3337of this operation. All ordering modes are possible. Note that using
3338[`Acquire`] makes the store part of this operation [`Relaxed`], and
3339using [`Release`] makes the load part [`Relaxed`].
3340
3341# Examples
3342
3343```
3344use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3345
3346let foo = ", stringify!($atomic_type), "::new(20);
3347assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3348assert_eq!(foo.load(Ordering::SeqCst), 10);
3349```"),
3350                #[inline]
3351                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3352                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3353                    self.inner.fetch_sub(val, order)
3354                }
3355            }
3356
3357            doc_comment! {
3358                concat!("Subtracts from the current value.
3359
3360This operation wraps around on overflow.
3361
3362Unlike `fetch_sub`, this does not return the previous value.
3363
3364`sub` takes an [`Ordering`] argument which describes the memory ordering
3365of this operation. All ordering modes are possible. Note that using
3366[`Acquire`] makes the store part of this operation [`Relaxed`], and
3367using [`Release`] makes the load part [`Relaxed`].
3368
3369This function may generate more efficient code than `fetch_sub` on some platforms.
3370
3371- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3372
3373# Examples
3374
3375```
3376use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3377
3378let foo = ", stringify!($atomic_type), "::new(20);
3379foo.sub(10, Ordering::SeqCst);
3380assert_eq!(foo.load(Ordering::SeqCst), 10);
3381```"),
3382                #[inline]
3383                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3384                pub fn sub(&self, val: $int_type, order: Ordering) {
3385                    self.inner.sub(val, order);
3386                }
3387            }
3388            } // $cfg_has_atomic_cas_or_amo32_or_8!
3389
3390            doc_comment! {
3391                concat!("Bitwise \"and\" with the current value.
3392
3393Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3394sets the new value to the result.
3395
3396Returns the previous value.
3397
3398`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3399of this operation. All ordering modes are possible. Note that using
3400[`Acquire`] makes the store part of this operation [`Relaxed`], and
3401using [`Release`] makes the load part [`Relaxed`].
3402
3403# Examples
3404
3405```
3406use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3407
3408let foo = ", stringify!($atomic_type), "::new(0b101101);
3409assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3410assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3411```"),
3412                #[inline]
3413                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3414                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3415                    self.inner.fetch_and(val, order)
3416                }
3417            }
3418
3419            doc_comment! {
3420                concat!("Bitwise \"and\" with the current value.
3421
3422Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3423sets the new value to the result.
3424
3425Unlike `fetch_and`, this does not return the previous value.
3426
3427`and` takes an [`Ordering`] argument which describes the memory ordering
3428of this operation. All ordering modes are possible. Note that using
3429[`Acquire`] makes the store part of this operation [`Relaxed`], and
3430using [`Release`] makes the load part [`Relaxed`].
3431
3432This function may generate more efficient code than `fetch_and` on some platforms.
3433
3434- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3435- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3436
3437Note: On x86/x86_64, the use of either function should not usually
3438affect the generated code, because LLVM can properly optimize the case
3439where the result is unused.
3440
3441# Examples
3442
3443```
3444use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3445
3446let foo = ", stringify!($atomic_type), "::new(0b101101);
3447foo.and(0b110011, Ordering::SeqCst);
3448assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3449```"),
3450                #[inline]
3451                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3452                pub fn and(&self, val: $int_type, order: Ordering) {
3453                    self.inner.and(val, order);
3454                }
3455            }
3456
3457            cfg_has_atomic_cas! {
3458            doc_comment! {
3459                concat!("Bitwise \"nand\" with the current value.
3460
3461Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3462sets the new value to the result.
3463
3464Returns the previous value.
3465
3466`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3467of this operation. All ordering modes are possible. Note that using
3468[`Acquire`] makes the store part of this operation [`Relaxed`], and
3469using [`Release`] makes the load part [`Relaxed`].
3470
3471# Examples
3472
3473```
3474use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3475
3476let foo = ", stringify!($atomic_type), "::new(0x13);
3477assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3478assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3479```"),
3480                #[inline]
3481                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3482                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3483                    self.inner.fetch_nand(val, order)
3484                }
3485            }
3486            } // cfg_has_atomic_cas!
3487
3488            doc_comment! {
3489                concat!("Bitwise \"or\" with the current value.
3490
3491Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3492sets the new value to the result.
3493
3494Returns the previous value.
3495
3496`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3497of this operation. All ordering modes are possible. Note that using
3498[`Acquire`] makes the store part of this operation [`Relaxed`], and
3499using [`Release`] makes the load part [`Relaxed`].
3500
3501# Examples
3502
3503```
3504use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3505
3506let foo = ", stringify!($atomic_type), "::new(0b101101);
3507assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3508assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3509```"),
3510                #[inline]
3511                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3512                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3513                    self.inner.fetch_or(val, order)
3514                }
3515            }
3516
3517            doc_comment! {
3518                concat!("Bitwise \"or\" with the current value.
3519
3520Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3521sets the new value to the result.
3522
3523Unlike `fetch_or`, this does not return the previous value.
3524
3525`or` takes an [`Ordering`] argument which describes the memory ordering
3526of this operation. All ordering modes are possible. Note that using
3527[`Acquire`] makes the store part of this operation [`Relaxed`], and
3528using [`Release`] makes the load part [`Relaxed`].
3529
3530This function may generate more efficient code than `fetch_or` on some platforms.
3531
3532- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3533- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3534
3535Note: On x86/x86_64, the use of either function should not usually
3536affect the generated code, because LLVM can properly optimize the case
3537where the result is unused.
3538
3539# Examples
3540
3541```
3542use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3543
3544let foo = ", stringify!($atomic_type), "::new(0b101101);
3545foo.or(0b110011, Ordering::SeqCst);
3546assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3547```"),
3548                #[inline]
3549                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3550                pub fn or(&self, val: $int_type, order: Ordering) {
3551                    self.inner.or(val, order);
3552                }
3553            }
3554
3555            doc_comment! {
3556                concat!("Bitwise \"xor\" with the current value.
3557
3558Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3559sets the new value to the result.
3560
3561Returns the previous value.
3562
3563`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3564of this operation. All ordering modes are possible. Note that using
3565[`Acquire`] makes the store part of this operation [`Relaxed`], and
3566using [`Release`] makes the load part [`Relaxed`].
3567
3568# Examples
3569
3570```
3571use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3572
3573let foo = ", stringify!($atomic_type), "::new(0b101101);
3574assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3575assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3576```"),
3577                #[inline]
3578                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3579                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3580                    self.inner.fetch_xor(val, order)
3581                }
3582            }
3583
3584            doc_comment! {
3585                concat!("Bitwise \"xor\" with the current value.
3586
3587Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3588sets the new value to the result.
3589
3590Unlike `fetch_xor`, this does not return the previous value.
3591
3592`xor` takes an [`Ordering`] argument which describes the memory ordering
3593of this operation. All ordering modes are possible. Note that using
3594[`Acquire`] makes the store part of this operation [`Relaxed`], and
3595using [`Release`] makes the load part [`Relaxed`].
3596
3597This function may generate more efficient code than `fetch_xor` on some platforms.
3598
3599- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3600- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3601
3602Note: On x86/x86_64, the use of either function should not usually
3603affect the generated code, because LLVM can properly optimize the case
3604where the result is unused.
3605
3606# Examples
3607
3608```
3609use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3610
3611let foo = ", stringify!($atomic_type), "::new(0b101101);
3612foo.xor(0b110011, Ordering::SeqCst);
3613assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3614```"),
3615                #[inline]
3616                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3617                pub fn xor(&self, val: $int_type, order: Ordering) {
3618                    self.inner.xor(val, order);
3619                }
3620            }
3621
3622            cfg_has_atomic_cas! {
3623            doc_comment! {
3624                concat!("Fetches the value, and applies a function to it that returns an optional
3625new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3626`Err(previous_value)`.
3627
3628Note: This may call the function multiple times if the value has been changed from other threads in
3629the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3630only once to the stored value.
3631
3632`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3633The first describes the required ordering for when the operation finally succeeds while the second
3634describes the required ordering for loads. These correspond to the success and failure orderings of
3635[`compare_exchange`](Self::compare_exchange) respectively.
3636
3637Using [`Acquire`] as success ordering makes the store part
3638of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3639[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3640
3641# Panics
3642
3643Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3644
3645# Considerations
3646
3647This method is not magic; it is not provided by the hardware.
3648It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3649and suffers from the same drawbacks.
3650In particular, this method will not circumvent the [ABA Problem].
3651
3652[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3653
3654# Examples
3655
3656```
3657use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3658
3659let x = ", stringify!($atomic_type), "::new(7);
3660assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3661assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3662assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3663assert_eq!(x.load(Ordering::SeqCst), 9);
3664```"),
3665                #[inline]
3666                #[cfg_attr(
3667                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3668                    track_caller
3669                )]
3670                pub fn fetch_update<F>(
3671                    &self,
3672                    set_order: Ordering,
3673                    fetch_order: Ordering,
3674                    mut f: F,
3675                ) -> Result<$int_type, $int_type>
3676                where
3677                    F: FnMut($int_type) -> Option<$int_type>,
3678                {
3679                    let mut prev = self.load(fetch_order);
3680                    while let Some(next) = f(prev) {
3681                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3682                            x @ Ok(_) => return x,
3683                            Err(next_prev) => prev = next_prev,
3684                        }
3685                    }
3686                    Err(prev)
3687                }
3688            }
3689            } // cfg_has_atomic_cas!
3690
3691            $cfg_has_atomic_cas_or_amo32_or_8! {
3692            doc_comment! {
3693                concat!("Maximum with the current value.
3694
3695Finds the maximum of the current value and the argument `val`, and
3696sets the new value to the result.
3697
3698Returns the previous value.
3699
3700`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3701of this operation. All ordering modes are possible. Note that using
3702[`Acquire`] makes the store part of this operation [`Relaxed`], and
3703using [`Release`] makes the load part [`Relaxed`].
3704
3705# Examples
3706
3707```
3708use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3709
3710let foo = ", stringify!($atomic_type), "::new(23);
3711assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3712assert_eq!(foo.load(Ordering::SeqCst), 42);
3713```
3714
3715If you want to obtain the maximum value in one step, you can use the following:
3716
3717```
3718use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3719
3720let foo = ", stringify!($atomic_type), "::new(23);
3721let bar = 42;
3722let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3723assert_eq!(max_foo, 42);
3724```"),
3725                #[inline]
3726                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3727                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3728                    self.inner.fetch_max(val, order)
3729                }
3730            }
3731
3732            doc_comment! {
3733                concat!("Minimum with the current value.
3734
3735Finds the minimum of the current value and the argument `val`, and
3736sets the new value to the result.
3737
3738Returns the previous value.
3739
3740`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3741of this operation. All ordering modes are possible. Note that using
3742[`Acquire`] makes the store part of this operation [`Relaxed`], and
3743using [`Release`] makes the load part [`Relaxed`].
3744
3745# Examples
3746
3747```
3748use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3749
3750let foo = ", stringify!($atomic_type), "::new(23);
3751assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3752assert_eq!(foo.load(Ordering::Relaxed), 23);
3753assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3754assert_eq!(foo.load(Ordering::Relaxed), 22);
3755```
3756
3757If you want to obtain the minimum value in one step, you can use the following:
3758
3759```
3760use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3761
3762let foo = ", stringify!($atomic_type), "::new(23);
3763let bar = 12;
3764let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3765assert_eq!(min_foo, 12);
3766```"),
3767                #[inline]
3768                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3769                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3770                    self.inner.fetch_min(val, order)
3771                }
3772            }
3773            } // $cfg_has_atomic_cas_or_amo32_or_8!
3774
3775            doc_comment! {
3776                concat!("Sets the bit at the specified bit-position to 1.
3777
3778Returns `true` if the specified bit was previously set to 1.
3779
3780`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3781of this operation. All ordering modes are possible. Note that using
3782[`Acquire`] makes the store part of this operation [`Relaxed`], and
3783using [`Release`] makes the load part [`Relaxed`].
3784
3785This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3786
3787# Examples
3788
3789```
3790use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3791
3792let foo = ", stringify!($atomic_type), "::new(0b0000);
3793assert!(!foo.bit_set(0, Ordering::Relaxed));
3794assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3795assert!(foo.bit_set(0, Ordering::Relaxed));
3796assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3797```"),
3798                #[inline]
3799                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3800                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3801                    self.inner.bit_set(bit, order)
3802                }
3803            }
3804
3805            doc_comment! {
3806                concat!("Clears the bit at the specified bit-position to 0.
3807
3808Returns `true` if the specified bit was previously set to 1.
3809
3810`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3811of this operation. All ordering modes are possible. Note that using
3812[`Acquire`] makes the store part of this operation [`Relaxed`], and
3813using [`Release`] makes the load part [`Relaxed`].
3814
3815This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3816
3817# Examples
3818
3819```
3820use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3821
3822let foo = ", stringify!($atomic_type), "::new(0b0001);
3823assert!(foo.bit_clear(0, Ordering::Relaxed));
3824assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3825```"),
3826                #[inline]
3827                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3828                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3829                    self.inner.bit_clear(bit, order)
3830                }
3831            }
3832
3833            doc_comment! {
3834                concat!("Toggles the bit at the specified bit-position.
3835
3836Returns `true` if the specified bit was previously set to 1.
3837
3838`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3839of this operation. All ordering modes are possible. Note that using
3840[`Acquire`] makes the store part of this operation [`Relaxed`], and
3841using [`Release`] makes the load part [`Relaxed`].
3842
3843This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3844
3845# Examples
3846
3847```
3848use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3849
3850let foo = ", stringify!($atomic_type), "::new(0b0000);
3851assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3852assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3853assert!(foo.bit_toggle(0, Ordering::Relaxed));
3854assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3855```"),
3856                #[inline]
3857                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3858                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3859                    self.inner.bit_toggle(bit, order)
3860                }
3861            }
3862
3863            doc_comment! {
3864                concat!("Logically negates the current value, and sets the new value to the result.
3865
3866Returns the previous value.
3867
3868`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3869of this operation. All ordering modes are possible. Note that using
3870[`Acquire`] makes the store part of this operation [`Relaxed`], and
3871using [`Release`] makes the load part [`Relaxed`].
3872
3873# Examples
3874
3875```
3876use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3877
3878let foo = ", stringify!($atomic_type), "::new(0);
3879assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3880assert_eq!(foo.load(Ordering::Relaxed), !0);
3881```"),
3882                #[inline]
3883                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3884                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3885                    self.inner.fetch_not(order)
3886                }
3887            }
3888
3889            doc_comment! {
3890                concat!("Logically negates the current value, and sets the new value to the result.
3891
3892Unlike `fetch_not`, this does not return the previous value.
3893
3894`not` takes an [`Ordering`] argument which describes the memory ordering
3895of this operation. All ordering modes are possible. Note that using
3896[`Acquire`] makes the store part of this operation [`Relaxed`], and
3897using [`Release`] makes the load part [`Relaxed`].
3898
3899This function may generate more efficient code than `fetch_not` on some platforms.
3900
3901- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3902- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3903
3904# Examples
3905
3906```
3907use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3908
3909let foo = ", stringify!($atomic_type), "::new(0);
3910foo.not(Ordering::Relaxed);
3911assert_eq!(foo.load(Ordering::Relaxed), !0);
3912```"),
3913                #[inline]
3914                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3915                pub fn not(&self, order: Ordering) {
3916                    self.inner.not(order);
3917                }
3918            }

            cfg_has_atomic_cas! {
            doc_comment! {
                concat!("Negates the current value, and sets the new value to the result.

Returns the previous value.

`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[`Acquire`] makes the store part of this operation [`Relaxed`], and
using [`Release`] makes the load part [`Relaxed`].

# Examples

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let foo = ", stringify!($atomic_type), "::new(5);
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
assert_eq!(foo.load(Ordering::Relaxed), 5);
```"),
                #[inline]
                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
                    self.inner.fetch_neg(order)
                }
            }

            doc_comment! {
                concat!("Negates the current value, and sets the new value to the result.

Unlike `fetch_neg`, this does not return the previous value.

`neg` takes an [`Ordering`] argument which describes the memory ordering
of this operation. All ordering modes are possible. Note that using
[`Acquire`] makes the store part of this operation [`Relaxed`], and
using [`Release`] makes the load part [`Relaxed`].

This function may generate more efficient code than `fetch_neg` on some platforms.

- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, plus 64-bit atomics on x86_64)

# Examples

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let foo = ", stringify!($atomic_type), "::new(5);
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
foo.neg(Ordering::Relaxed);
assert_eq!(foo.load(Ordering::Relaxed), 5);
```"),
                #[inline]
                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
                pub fn neg(&self, order: Ordering) {
                    self.inner.neg(order);
                }
            }
            } // cfg_has_atomic_cas!
            } // cfg_has_atomic_cas_or_amo32!

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
                /// Returns a mutable pointer to the underlying integer.
                ///
                /// Returning an `*mut` pointer from a shared reference to this atomic is
                /// safe because the atomic types work with interior mutability. Any use of
                /// the returned raw pointer requires an `unsafe` block and has to uphold
                /// the safety requirements. If there is concurrent access, note the following
                /// additional safety requirements:
                ///
                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
                ///   operations on it must be atomic.
                /// - Otherwise, any concurrent operations on it must be compatible with
                ///   operations performed by this atomic type.
                ///
                /// This is `const fn` on Rust 1.58+.
                #[inline]
                pub const fn as_ptr(&self) -> *mut $int_type {
                    self.inner.as_ptr()
                }
            }
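
            // Illustrative sketch (not part of the macro): a common use of
            // `as_ptr` is handing the underlying integer to foreign code.
            // Assuming `AtomicU32` as the concrete type and a hypothetical
            // C function `increment_counter`:
            //
            //     use portable_atomic::AtomicU32;
            //     extern "C" { fn increment_counter(ptr: *mut u32); }
            //     let counter = AtomicU32::new(0);
            //     // SAFETY: the foreign code must access the value atomically
            //     // and uphold the concurrent-access rules documented above.
            //     unsafe { increment_counter(counter.as_ptr()) }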
        }
        // See https://github.com/taiki-e/portable-atomic/issues/180
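        // How this works, in brief: each stub below is a real method whose
        // `where &'a Self: Has*` bound is never satisfiable, so any call
        // fails to type-check. The `#[diagnostic::on_unimplemented]`
        // attributes on the `Has*` marker traits (see the `diagnostic_helper`
        // module below) then turn that failure into an error message pointing
        // users at the `critical-section` / `unsafe-assume-single-core`
        // features, instead of a generic "method not found" error.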
        #[cfg(not(feature = "require-cas"))]
        cfg_no_atomic_cas! {
        #[doc(hidden)]
        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
        impl<'a> $atomic_type {
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasSwap,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            #[inline]
            pub fn compare_exchange(
                &self,
                current: $int_type,
                new: $int_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$int_type, $int_type>
            where
                &'a Self: HasCompareExchange,
            {
                unimplemented!()
            }
            #[inline]
            pub fn compare_exchange_weak(
                &self,
                current: $int_type,
                new: $int_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$int_type, $int_type>
            where
                &'a Self: HasCompareExchangeWeak,
            {
                unimplemented!()
            }
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn add(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchSub,
            {
                unimplemented!()
            }
            #[inline]
            pub fn sub(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasSub,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchAnd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn and(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasAnd,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNand,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn or(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchXor,
            {
                unimplemented!()
            }
            #[inline]
            pub fn xor(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasXor,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$int_type, $int_type>
            where
                F: FnMut($int_type) -> Option<$int_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitSet,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitClear,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitToggle,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_not(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNot,
            {
                unimplemented!()
            }
            #[inline]
            pub fn not(&self, order: Ordering)
            where
                &'a Self: HasNot,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn neg(&self, order: Ordering)
            where
                &'a Self: HasNeg,
            {
                unimplemented!()
            }
        }
        } // cfg_no_atomic_cas!
        $(
            #[$cfg_float]
            atomic_int!(float,
                #[$cfg_float] $atomic_float_type, $float_type, $atomic_type, $int_type, $align
            );
        )?
    };

    // AtomicF* impls
    (float,
        #[$cfg_float:meta]
        $atomic_type:ident,
        $float_type:ident,
        $atomic_int_type:ident,
        $int_type:ident,
        $align:literal
    ) => {
        doc_comment! {
            concat!("A floating point type which can be safely shared between threads.

This type has the same in-memory representation as the underlying floating point type,
[`", stringify!($float_type), "`].
"
            ),
            #[cfg_attr(docsrs, doc($cfg_float))]
            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
            // will show clearer docs.
            #[repr(C, align($align))]
            pub struct $atomic_type {
                inner: imp::float::$atomic_type,
            }
        }

        impl Default for $atomic_type {
            #[inline]
            fn default() -> Self {
                Self::new($float_type::default())
            }
        }

        impl From<$float_type> for $atomic_type {
            #[inline]
            fn from(v: $float_type) -> Self {
                Self::new(v)
            }
        }

        // UnwindSafe is implicitly implemented.
        #[cfg(not(portable_atomic_no_core_unwind_safe))]
        impl core::panic::RefUnwindSafe for $atomic_type {}
        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
        impl std::panic::RefUnwindSafe for $atomic_type {}

        impl_debug_and_serde!($atomic_type);

        impl $atomic_type {
            /// Creates a new atomic float.
            #[inline]
            #[must_use]
            pub const fn new(v: $float_type) -> Self {
                static_assert_layout!($atomic_type, $float_type);
                Self { inner: imp::float::$atomic_type::new(v) }
            }

            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
            #[cfg(not(portable_atomic_no_const_mut_refs))]
            doc_comment! {
                concat!("Creates a new reference to an atomic float from a pointer.

This is `const fn` on Rust 1.83+.

# Safety

* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
  can be bigger than `align_of::<", stringify!($float_type), ">()`).
* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
  behind `ptr` must have a happens-before relationship with atomic accesses via
  the returned value (or vice-versa).
  * In other words, time periods where the value is accessed atomically may not
    overlap with periods where the value is accessed non-atomically.
  * This requirement is trivially satisfied if `ptr` is never used non-atomically
    for the duration of lifetime `'a`. Most use cases should be able to follow
    this guideline.
  * This requirement is also trivially satisfied if all accesses (atomic or not) are
    done from the same thread.
* If this atomic type is *not* lock-free:
  * Any accesses to the value behind `ptr` must have a happens-before relationship
    with accesses via the returned value (or vice-versa).
  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    be compatible with operations performed by this atomic type.
* This method must not be used to create overlapping or mixed-size atomic
  accesses, as these are not supported by the memory model.

[valid]: core::ptr#safety"),
                #[inline]
                #[must_use]
                pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
                    #[allow(clippy::cast_ptr_alignment)]
                    // SAFETY: guaranteed by the caller
                    unsafe { &*(ptr as *mut Self) }
                }
            }
            #[cfg(portable_atomic_no_const_mut_refs)]
            doc_comment! {
                concat!("Creates a new reference to an atomic float from a pointer.

This is `const fn` on Rust 1.83+.

# Safety

* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
  can be bigger than `align_of::<", stringify!($float_type), ">()`).
* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
  behind `ptr` must have a happens-before relationship with atomic accesses via
  the returned value (or vice-versa).
  * In other words, time periods where the value is accessed atomically may not
    overlap with periods where the value is accessed non-atomically.
  * This requirement is trivially satisfied if `ptr` is never used non-atomically
    for the duration of lifetime `'a`. Most use cases should be able to follow
    this guideline.
  * This requirement is also trivially satisfied if all accesses (atomic or not) are
    done from the same thread.
* If this atomic type is *not* lock-free:
  * Any accesses to the value behind `ptr` must have a happens-before relationship
    with accesses via the returned value (or vice-versa).
  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    be compatible with operations performed by this atomic type.
* This method must not be used to create overlapping or mixed-size atomic
  accesses, as these are not supported by the memory model.

[valid]: core::ptr#safety"),
                #[inline]
                #[must_use]
                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
                    #[allow(clippy::cast_ptr_alignment)]
                    // SAFETY: guaranteed by the caller
                    unsafe { &*(ptr as *mut Self) }
                }
            }
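
            // Illustrative sketch (not part of the macro): with `AtomicF32`
            // as the concrete type, `from_ptr` can view existing storage as
            // an atomic, e.g.:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let ptr = Box::into_raw(Box::new(1.0_f32));
            //     // SAFETY: `ptr` is valid, sufficiently aligned, and only
            //     // accessed atomically while the reference is in use.
            //     let a = unsafe { AtomicF32::from_ptr(ptr) };
            //     a.store(2.0, Ordering::Relaxed);
            //     // SAFETY: the atomic reference is no longer used.
            //     drop(unsafe { Box::from_raw(ptr) });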

            /// Returns `true` if operations on values of this type are lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            #[inline]
            #[must_use]
            pub fn is_lock_free() -> bool {
                <imp::float::$atomic_type>::is_lock_free()
            }

            /// Returns `true` if operations on values of this type are always lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            ///
            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
            /// this type may be lock-free even if the function returns false.
            #[inline]
            #[must_use]
            pub const fn is_always_lock_free() -> bool {
                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
            }
            #[cfg(test)]
            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
                /// Returns a mutable reference to the underlying float.
                ///
                /// This is safe because the mutable reference guarantees that no other threads are
                /// concurrently accessing the atomic data.
                ///
                /// This is `const fn` on Rust 1.83+.
                #[inline]
                pub const fn get_mut(&mut self) -> &mut $float_type {
                    // SAFETY: the mutable reference guarantees unique ownership.
                    unsafe { &mut *self.as_ptr() }
                }
            }

            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
            // https://github.com/rust-lang/rust/issues/76314

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
                /// Consumes the atomic and returns the contained value.
                ///
                /// This is safe because passing `self` by value guarantees that no other threads are
                /// concurrently accessing the atomic data.
                ///
                /// This is `const fn` on Rust 1.56+.
                #[inline]
                pub const fn into_inner(self) -> $float_type {
                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
                    // so they can be safely transmuted.
                    // (const UnsafeCell::into_inner is unstable)
                    unsafe { core::mem::transmute(self) }
                }
            }

            /// Loads a value from the atomic float.
            ///
            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Release`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn load(&self, order: Ordering) -> $float_type {
                self.inner.load(order)
            }

            /// Stores a value into the atomic float.
            ///
            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn store(&self, val: $float_type, order: Ordering) {
                self.inner.store(val, order)
            }

            cfg_has_atomic_cas_or_amo32! {
            /// Stores a value into the atomic float, returning the previous value.
            ///
            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.swap(val, order)
            }

            cfg_has_atomic_cas! {
            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            ///
            /// The return value is a result indicating whether the new value was written and
            /// containing the previous value. On success this value is guaranteed to be equal to
            /// `current`.
            ///
            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange(current, new, success, failure)
            }

            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            ///
            /// Unlike [`compare_exchange`](Self::compare_exchange),
            /// this function is allowed to spuriously fail even
            /// when the comparison succeeds, which can result in more efficient code on some
            /// platforms. The return value is a result indicating whether the new value was
            /// written and containing the previous value.
            ///
            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange_weak(current, new, success, failure)
            }

            /// Adds to the current value, returning the previous value.
            ///
            /// Unlike the integer `fetch_add`, this operation does not wrap around on overflow;
            /// the addition follows IEEE 754 floating-point semantics (overflow produces an infinity).
            ///
            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_add(val, order)
            }

            /// Subtracts from the current value, returning the previous value.
            ///
            /// Unlike the integer `fetch_sub`, this operation does not wrap around on overflow;
            /// the subtraction follows IEEE 754 floating-point semantics (overflow produces an infinity).
            ///
            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_sub(val, order)
            }
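
            // Illustrative sketch (not part of the macro): with `AtomicF32`
            // as the concrete type, the arithmetic RMW methods behave like:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(1.5);
            //     assert_eq!(a.fetch_add(0.5, Ordering::Relaxed), 1.5);
            //     assert_eq!(a.fetch_sub(1.0, Ordering::Relaxed), 2.0);
            //     assert_eq!(a.load(Ordering::Relaxed), 1.0);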

            /// Fetches the value, and applies a function to it that returns an optional
            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
            /// `Err(previous_value)`.
            ///
            /// Note: This may call the function multiple times if the value has been changed from other threads in
            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
            /// only once to the stored value.
            ///
            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
            /// The first describes the required ordering for when the operation finally succeeds while the second
            /// describes the required ordering for loads. These correspond to the success and failure orderings of
            /// [`compare_exchange`](Self::compare_exchange) respectively.
            ///
            /// Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
            ///
            /// # Considerations
            ///
            /// This method is not magic; it is not provided by the hardware.
            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
            /// and suffers from the same drawbacks.
            /// In particular, this method will not circumvent the [ABA Problem].
            ///
            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                mut f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
            {
                let mut prev = self.load(fetch_order);
                while let Some(next) = f(prev) {
                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                        x @ Ok(_) => return x,
                        Err(next_prev) => prev = next_prev,
                    }
                }
                Err(prev)
            }
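
            // Illustrative sketch (not part of the macro): with `AtomicF64`
            // as the concrete type, `fetch_update` can implement a clamped
            // add that no single `fetch_*` method provides:
            //
            //     use portable_atomic::{AtomicF64, Ordering};
            //     let a = AtomicF64::new(7.0);
            //     let _ = a.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |v| {
            //         Some((v + 5.0).min(10.0))
            //     });
            //     assert_eq!(a.load(Ordering::Relaxed), 10.0);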

            /// Maximum with the current value.
            ///
            /// Finds the maximum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_max(val, order)
            }

            /// Minimum with the current value.
            ///
            /// Finds the minimum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_min(val, order)
            }
            } // cfg_has_atomic_cas!

            /// Negates the current value, and sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
                self.inner.fetch_neg(order)
            }

            /// Computes the absolute value of the current value, and sets the
            /// new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
                self.inner.fetch_abs(order)
            }
            } // cfg_has_atomic_cas_or_amo32!
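
            // Illustrative sketch (not part of the macro): with `AtomicF32`
            // as the concrete type, the sign-manipulation methods behave like:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(-3.0);
            //     assert_eq!(a.fetch_abs(Ordering::Relaxed), -3.0);
            //     assert_eq!(a.fetch_neg(Ordering::Relaxed), 3.0);
            //     assert_eq!(a.load(Ordering::Relaxed), -3.0);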

            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub const fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }
            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }
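
            // Illustrative sketch (not part of the macro): with `AtomicF32`
            // as the concrete type, `as_bits` exposes the raw representation
            // through the corresponding integer atomic, e.g. to inspect the
            // sign bit without a CAS:
            //
            //     use portable_atomic::{AtomicF32, Ordering};
            //     let a = AtomicF32::new(-1.5);
            //     let bits = a.as_bits().load(Ordering::Relaxed);
            //     assert!(f32::from_bits(bits).is_sign_negative());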

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
                /// Returns a mutable pointer to the underlying float.
                ///
                /// Returning an `*mut` pointer from a shared reference to this atomic is
                /// safe because the atomic types work with interior mutability. Any use of
                /// the returned raw pointer requires an `unsafe` block and has to uphold
                /// the safety requirements. If there is concurrent access, note the following
                /// additional safety requirements:
                ///
                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
                ///   operations on it must be atomic.
                /// - Otherwise, any concurrent operations on it must be compatible with
                ///   operations performed by this atomic type.
                ///
                /// This is `const fn` on Rust 1.58+.
                #[inline]
                pub const fn as_ptr(&self) -> *mut $float_type {
                    self.inner.as_ptr()
                }
            }
        }
        // See https://github.com/taiki-e/portable-atomic/issues/180
        #[cfg(not(feature = "require-cas"))]
        cfg_no_atomic_cas! {
        #[doc(hidden)]
        #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
        impl<'a> $atomic_type {
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasSwap,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchange,
            {
                unimplemented!()
            }
            #[inline]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchangeWeak,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchSub,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_abs(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAbs,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
        }
        } // cfg_no_atomic_cas!
    };
}

cfg_has_atomic_ptr! {
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
}

cfg_has_atomic_8! {
    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
}
cfg_has_atomic_16! {
    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8,
        #[cfg(all(feature = "float", portable_atomic_unstable_f16))] AtomicF16, f16);
}
cfg_has_atomic_32! {
    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF32, f32);
}
cfg_has_atomic_64! {
    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF64, f64);
}
cfg_has_atomic_128! {
    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(all(feature = "float", portable_atomic_unstable_f128))] AtomicF128, f128);
}

// See https://github.com/taiki-e/portable-atomic/issues/180
#[cfg(not(feature = "require-cas"))]
cfg_no_atomic_cas! {
cfg_no_atomic_cas_or_amo32! {
#[cfg(feature = "float")]
use self::diagnostic_helper::HasFetchAbs;
use self::diagnostic_helper::{
    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
};
} // cfg_no_atomic_cas_or_amo32!
cfg_no_atomic_cas_or_amo8! {
use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
} // cfg_no_atomic_cas_or_amo8!
#[cfg_attr(not(feature = "float"), allow(unused_imports))]
use self::diagnostic_helper::{
    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
};
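
// A minimal sketch of what the `diagnostic_helper` traits below produce
// (assuming a rustc new enough to support the `diagnostic` namespace, i.e.
// `portable_atomic_no_diagnostic_namespace` is not set): calling a gated
// method such as `AtomicUsize::swap` on a target without atomic CAS fails
// the `&Self: HasSwap` bound, and rustc reports the `message`/`label`/`note`
// strings attached to the trait instead of a bare unsatisfied-bound error,
// roughly:
//
//     error[E0277]: `swap` requires atomic CAS but is not available on this target by default
//         = note: consider enabling either the `critical-section` feature or ...
//
// (rendering is illustrative; exact output depends on the rustc version)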
#[cfg_attr(
    any(
        all(
            portable_atomic_no_atomic_load_store,
            not(any(
                target_arch = "avr",
                target_arch = "bpf",
                target_arch = "msp430",
                target_arch = "riscv32",
                target_arch = "riscv64",
                feature = "critical-section",
                portable_atomic_unsafe_assume_single_core,
            )),
        ),
        not(feature = "float"),
    ),
    allow(dead_code, unreachable_pub)
)]
#[allow(unknown_lints, unnameable_types)] // Not public API. unnameable_types is available on Rust 1.79+
4912mod diagnostic_helper {
4913    cfg_no_atomic_cas_or_amo8! {
4914    #[doc(hidden)]
4915    #[cfg_attr(
4916        not(portable_atomic_no_diagnostic_namespace),
4917        diagnostic::on_unimplemented(
4918            message = "`swap` requires atomic CAS but not available on this target by default",
4919            label = "this associated function is not available on this target by default",
4920            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4921            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4922        )
4923    )]
4924    pub trait HasSwap {}
4925    } // cfg_no_atomic_cas_or_amo8!
4926    #[doc(hidden)]
4927    #[cfg_attr(
4928        not(portable_atomic_no_diagnostic_namespace),
4929        diagnostic::on_unimplemented(
4930            message = "`compare_exchange` requires atomic CAS but not available on this target by default",
4931            label = "this associated function is not available on this target by default",
4932            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4933            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4934        )
4935    )]
4936    pub trait HasCompareExchange {}
4937    #[doc(hidden)]
4938    #[cfg_attr(
4939        not(portable_atomic_no_diagnostic_namespace),
4940        diagnostic::on_unimplemented(
4941            message = "`compare_exchange_weak` requires atomic CAS but not available on this target by default",
4942            label = "this associated function is not available on this target by default",
4943            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4944            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4945        )
4946    )]
4947    pub trait HasCompareExchangeWeak {}
4948    #[doc(hidden)]
4949    #[cfg_attr(
4950        not(portable_atomic_no_diagnostic_namespace),
4951        diagnostic::on_unimplemented(
4952            message = "`fetch_add` requires atomic CAS but not available on this target by default",
4953            label = "this associated function is not available on this target by default",
4954            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4955            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4956        )
4957    )]
4958    pub trait HasFetchAdd {}
4959    cfg_no_atomic_cas_or_amo8! {
4960    #[doc(hidden)]
4961    #[cfg_attr(
4962        not(portable_atomic_no_diagnostic_namespace),
4963        diagnostic::on_unimplemented(
4964            message = "`add` requires atomic CAS but not available on this target by default",
4965            label = "this associated function is not available on this target by default",
4966            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4967            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4968        )
4969    )]
4970    pub trait HasAdd {}
4971    } // cfg_no_atomic_cas_or_amo8!
4972    #[doc(hidden)]
4973    #[cfg_attr(
4974        not(portable_atomic_no_diagnostic_namespace),
4975        diagnostic::on_unimplemented(
4976            message = "`fetch_sub` requires atomic CAS but not available on this target by default",
4977            label = "this associated function is not available on this target by default",
4978            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4979            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4980        )
4981    )]
4982    pub trait HasFetchSub {}
4983    cfg_no_atomic_cas_or_amo8! {
4984    #[doc(hidden)]
4985    #[cfg_attr(
4986        not(portable_atomic_no_diagnostic_namespace),
4987        diagnostic::on_unimplemented(
4988            message = "`sub` requires atomic CAS but not available on this target by default",
4989            label = "this associated function is not available on this target by default",
4990            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4991            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4992        )
4993    )]
4994    pub trait HasSub {}
4995    } // cfg_no_atomic_cas_or_amo8!
4996    cfg_no_atomic_cas_or_amo32! {
4997    #[doc(hidden)]
4998    #[cfg_attr(
4999        not(portable_atomic_no_diagnostic_namespace),
5000        diagnostic::on_unimplemented(
5001            message = "`fetch_ptr_add` requires atomic CAS but not available on this target by default",
5002            label = "this associated function is not available on this target by default",
5003            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5004            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5005        )
5006    )]
5007    pub trait HasFetchPtrAdd {}
5008    #[doc(hidden)]
5009    #[cfg_attr(
5010        not(portable_atomic_no_diagnostic_namespace),
5011        diagnostic::on_unimplemented(
5012            message = "`fetch_ptr_sub` requires atomic CAS but not available on this target by default",
5013            label = "this associated function is not available on this target by default",
5014            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5015            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5016        )
5017    )]
5018    pub trait HasFetchPtrSub {}
5019    #[doc(hidden)]
5020    #[cfg_attr(
5021        not(portable_atomic_no_diagnostic_namespace),
5022        diagnostic::on_unimplemented(
5023            message = "`fetch_byte_add` requires atomic CAS but not available on this target by default",
5024            label = "this associated function is not available on this target by default",
5025            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5026            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5027        )
5028    )]
5029    pub trait HasFetchByteAdd {}
5030    #[doc(hidden)]
5031    #[cfg_attr(
5032        not(portable_atomic_no_diagnostic_namespace),
5033        diagnostic::on_unimplemented(
5034            message = "`fetch_byte_sub` requires atomic CAS but not available on this target by default",
5035            label = "this associated function is not available on this target by default",
5036            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5037            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5038        )
5039    )]
5040    pub trait HasFetchByteSub {}
5041    #[doc(hidden)]
5042    #[cfg_attr(
5043        not(portable_atomic_no_diagnostic_namespace),
5044        diagnostic::on_unimplemented(
5045            message = "`fetch_and` requires atomic CAS but not available on this target by default",
5046            label = "this associated function is not available on this target by default",
5047            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5048            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5049        )
5050    )]
5051    pub trait HasFetchAnd {}
5052    #[doc(hidden)]
5053    #[cfg_attr(
5054        not(portable_atomic_no_diagnostic_namespace),
5055        diagnostic::on_unimplemented(
5056            message = "`and` requires atomic CAS but not available on this target by default",
5057            label = "this associated function is not available on this target by default",
5058            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5059            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5060        )
5061    )]
5062    pub trait HasAnd {}
5063    } // cfg_no_atomic_cas_or_amo32!
5064    #[doc(hidden)]
5065    #[cfg_attr(
5066        not(portable_atomic_no_diagnostic_namespace),
5067        diagnostic::on_unimplemented(
5068            message = "`fetch_nand` requires atomic CAS but not available on this target by default",
5069            label = "this associated function is not available on this target by default",
5070            note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5071            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5072        )
5073    )]
5074    pub trait HasFetchNand {}
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_or` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchOr {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`or` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasOr {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_xor` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchXor {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`xor` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasXor {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_not` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNot {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`not` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasNot {}
    } // cfg_no_atomic_cas_or_amo32!
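    // `fetch_neg`/`neg` likewise require full CAS (negation has no AMO form),
    // so their marker traits are not additionally gated on 32-bit AMO support.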
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNeg {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasNeg {}
    cfg_no_atomic_cas_or_amo32! {
    #[cfg(feature = "float")]
    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_abs` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAbs {}
    } // cfg_no_atomic_cas_or_amo32!
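    // `fetch_min`, `fetch_max`, and `fetch_update` below are gated only by the
    // outer `cfg_no_atomic_cas!`: their diagnostics apply whenever atomic CAS
    // is unavailable, regardless of AMO support.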
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_min` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMin {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_max` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMax {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_update` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchUpdate {}
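    // The bit operations below can each be implemented with a single AND/OR/XOR
    // AMO, so, like `fetch_and`/`fetch_or`/`fetch_xor`, they are only missing
    // when neither CAS nor 32-bit AMO is available.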
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_set` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitSet {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_clear` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitClear {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_toggle` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `critical-section` or `unsafe-assume-single-core` features (or the `portable_atomic_unsafe_assume_single_core` cfg)",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitToggle {}
    } // cfg_no_atomic_cas_or_amo32!
}
} // cfg_no_atomic_cas!
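// Illustrative sketch (not the actual implementation; the bound placement and
// signatures here are assumptions for exposition): on no-CAS targets, the
// `Has*` traits above are meant to be used as bounds on the corresponding
// methods, so that a call site hits the custom diagnostic instead of a generic
// "trait bound not satisfied" error:
//
//     impl<'a> AtomicU32 {
//         pub fn fetch_and(&self, val: u32, order: Ordering) -> u32
//         where
//             &'a Self: HasFetchAnd, // never satisfied on no-CAS targets
//         {
//             unimplemented!()
//         }
//     }
//
// The `&'a Self` bound keeps the predicate non-trivial, deferring the check to
// each call site; on targets with CAS the real method (without the bound) is
// compiled instead, so the trait and its diagnostic never surface.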