portable_atomic/lib.rs
1// SPDX-License-Identifier: Apache-2.0 OR MIT
2
3/*!
4<!-- Note: Document from sync-markdown-to-rustdoc:start through sync-markdown-to-rustdoc:end
5 is synchronized from README.md. Any changes to that range are not preserved. -->
6<!-- tidy:sync-markdown-to-rustdoc:start -->
7
8Portable atomic types including support for 128-bit atomics, atomic float, etc.
9
10- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
11- Provide `AtomicI128` and `AtomicU128`.
12- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
13- Provide `AtomicF16` and `AtomicF128` for [unstable `f16` and `f128`](https://github.com/rust-lang/rust/issues/116909). ([optional, requires the `float` feature and unstable cfgs](#optional-features-float))
14- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
15- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
16- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not), [`AtomicPtr::fetch_*`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicPtr.html#method.fetch_and), and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
17- Provide workaround for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 that cause LLVM error, etc.
18
19<!-- TODO:
20- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
21- mention optimizations not available in the standard library's equivalents
22-->
23
24portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
25
26## Usage
27
28Add this to your `Cargo.toml`:
29
30```toml
31[dependencies]
32portable-atomic = "1"
33```
34
35The default features are mainly for users who use atomics larger than the pointer width.
36If you don't need them, disabling the default features may reduce code size and compile time slightly.
37
38```toml
39[dependencies]
40portable-atomic = { version = "1", default-features = false }
41```
42
43If your crate supports no-std environment and requires atomic CAS, enabling the `require-cas` feature will allow the portable-atomic to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets requiring additional action on the user side to provide atomic CAS.
44
45```toml
46[dependencies]
47portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
48```
49
50(Since 1.8, portable-atomic can display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/181) even without the `require-cas` feature when the rustc version is 1.78+. However, the `require-cas` feature also allows rejecting builds at an earlier stage, we recommend enabling it unless enabling it causes [problems](https://github.com/matklad/once_cell/pull/267).)
51
52## 128-bit atomics support
53
54Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.59+), Arm64EC (Rust 1.84+), s390x (Rust 1.84+), and powerpc64 (nightly only), otherwise the fallback implementation is used.
55
56On x86_64, even if `cmpxchg16b` is not available at compile-time (Note: `cmpxchg16b` target feature is enabled by default only on Apple, Windows (except Windows 7), and Fuchsia targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is not available at either compile-time or run-time detection, the fallback implementation is used. See also [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.
57
58They are usually implemented using inline assembly, and when using Miri or ThreadSanitizer that do not support inline assembly, core intrinsics are used instead of inline assembly if possible.
59
60See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.
61
62## <a name="optional-features"></a><a name="optional-cfg"></a>Optional features/cfgs
63
64portable-atomic provides features and cfgs to allow enabling specific APIs and customizing its behavior.
65
66Some options have both a feature and a cfg. When both exist, it indicates that the feature does not follow Cargo's recommendation that [features should be additive](https://doc.rust-lang.org/nightly/cargo/reference/features.html#feature-unification). Therefore, the maintainer's recommendation is to use cfg instead of feature. However, in the embedded ecosystem, it is very common to use features in such places, so these options provide both so you can choose based on your preference.
67
68<details>
69<summary>How to enable cfg (click to show)</summary>
70
71One of the ways to enable cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):
72
73```toml
74# .cargo/config.toml
75[target.<target>]
76rustflags = ["--cfg", "portable_atomic_unsafe_assume_single_core"]
77```
78
79Or set environment variable:
80
81```sh
82RUSTFLAGS="--cfg portable_atomic_unsafe_assume_single_core" cargo ...
83```
84
85</details>
86
87- <a name="optional-features-fallback"></a>**`fallback` feature** *(enabled by default)*<br>
88 Enable fallback implementations.
89
90 This enables atomic types with larger than the width supported by atomic instructions available on the current target. If the current target supports 128-bit atomics, this is no-op.
91
92 This uses fallback implementation that using global locks by default. The following features/cfgs change this behavior:
93 - [`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg](#optional-features-unsafe-assume-single-core): Use fallback implementations that disabling interrupts instead of using global locks.
94 - If your target is single-core and calling interrupt disable instructions is safe, this is a safer and more efficient option.
95 - [`unsafe-assume-privileged` feature / `portable_atomic_unsafe_assume_privileged` cfg](#optional-features-unsafe-assume-privileged): Use fallback implementations that using global locks with disabling interrupts.
96 - If your target is multi-core and calling interrupt disable instructions is safe, this is a safer option.
97
98- <a name="optional-features-float"></a>**`float` feature**<br>
99 Provide `AtomicF{32,64}`.
100
101 If you want atomic types for unstable float types ([`f16` and `f128`](https://github.com/rust-lang/rust/issues/116909)), enable unstable cfg (`portable_atomic_unstable_f16` cfg for `AtomicF16`, `portable_atomic_unstable_f128` cfg for `AtomicF128`, [there is no possibility that both feature and cfg will be provided for unstable options.](https://github.com/taiki-e/portable-atomic/pull/200#issuecomment-2682252991)).
102
103<div class="rustdoc-alert rustdoc-alert-note">
104
105> **ⓘ Note**
106>
107> - Atomic float's `fetch_{add,sub,min,max}` are usually implemented using CAS loops, which can be slower than equivalent operations of atomic integers. As an exception, AArch64 with FEAT_LSFE and GPU targets have atomic float instructions and we use them on AArch64 when `lsfe` target feature is available at compile-time. We [plan to use atomic float instructions for GPU targets as well in the future.](https://github.com/taiki-e/portable-atomic/issues/34)
108> - Unstable cfgs are outside of the normal semver guarantees and minor or patch versions of portable-atomic may make breaking changes to them at any time.
109
110</div>
111
112- <a name="optional-features-std"></a>**`std` feature**<br>
113 Use `std`.
114
115- <a name="optional-features-require-cas"></a>**`require-cas` feature**<br>
116 Emit compile error if atomic CAS is not available. See [Usage](#usage) section for usage of this feature.
117
118- <a name="optional-features-serde"></a>**`serde` feature**<br>
119 Implement `serde::{Serialize,Deserialize}` for atomic types.
120
121 Note:
122 - The MSRV when this feature is enabled depends on the MSRV of [serde].
123
124- <a name="optional-features-critical-section"></a>**`critical-section` feature**<br>
125 Use [critical-section] to provide atomic CAS for targets where atomic CAS is not available in the standard library.
126
127 `critical-section` support is useful to get atomic CAS when the [`unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)](#optional-features-unsafe-assume-single-core) can't be used,
128 such as multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts
129 needs extra care due to e.g. real-time requirements.
130
131<div class="rustdoc-alert rustdoc-alert-note">
132
133> **ⓘ Note**
134>
135> - When enabling this feature, you should provide a suitable critical section implementation for the current target, see the [critical-section] documentation for details on how to do so.
136> - With this feature, critical sections are taken for all atomic operations, while with `unsafe-assume-single-core` feature [some operations](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md#no-disable-interrupts) don't require disabling interrupts. Therefore, for better performance, if all the `critical-section` implementation for your target does is disable interrupts, prefer using `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) instead.
137> - It is usually **discouraged** to always enable this feature in libraries that depend on `portable-atomic`.
138>
139> Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) efficient implementations (implementations provided by `unsafe-assume-single-core` feature mentioned above, implementation proposed in [#60], etc.). Also, targets that are currently unsupported may be supported in the future.
140>
141> The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)
142>
143> See also [](https://github.com/matklad/once_cell/issues/264#issuecomment-2352654806).
144>
145> As an example, the end-user's `Cargo.toml` that uses a crate that provides a critical-section implementation and a crate that depends on portable-atomic as an option would be expected to look like this:
146>
147> ```toml
148> [dependencies]
149> portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
150> crate-provides-critical-section-impl = "..."
151> crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
152> ```
153>
154> - Enabling both this feature and `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) will result in a compile error.
155> - Enabling both this feature and `unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg) will result in a compile error.
156> - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
157
158</div>
159
160- <a name="optional-features-unsafe-assume-single-core"></a><a name="optional-cfg-unsafe-assume-single-core"></a>**`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg**<br>
161 Assume that the target is single-core and privileged instructions required to disable interrupts are available.
162
163 - When this feature/cfg is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.
164 - When both this feature/cfg and enabled-by-default `fallback` feature is enabled, this crate provides atomic types with larger than the width supported by native instructions by disabling interrupts.
165
166<div class="rustdoc-alert rustdoc-alert-warning">
167
168> **⚠ Warning**
169>
170> This feature/cfg is `unsafe`, and note the following safety requirements:
171> - Enabling this feature/cfg for multi-core systems is always **unsound**.
172>
173> - This uses privileged instructions to disable interrupts, so it usually doesn't work on unprivileged mode.
174>
175> Enabling this feature/cfg in an environment where privileged instructions are not available, or if the instructions used are not sufficient to disable interrupts in the system, it is also usually considered **unsound**, although the details are system-dependent.
176>
177> The following are known cases:
178> - On Arm (except for M-Profile architectures), this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system need to disable both IRQs and FIQs, you need to enable the `disable-fiq` feature (or `portable_atomic_disable_fiq` cfg) together.
179> - On RISC-V, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` feature (or `portable_atomic_s_mode` cfg) together, this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware.
180
181</div>
182
183Consider using the [`unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg)](#optional-features-unsafe-assume-privileged) for multi-core systems with atomic CAS.
184
185Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature/cfg.
186
187See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).
188
189<div class="rustdoc-alert rustdoc-alert-note">
190
191> **ⓘ Note**
192>
193> - It is **very strongly discouraged** to enable this feature/cfg in libraries that depend on `portable-atomic`.
194>
195> The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature/cfg. (However, it may make sense to enable this feature/cfg by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)
196> - Enabling this feature/cfg for unsupported architectures will result in a compile error.
197> - Arm, RISC-V, and Xtensa are currently supported. (Since all MSP430 and AVR are single-core, we always provide atomic CAS for them without this feature/cfg.)
198> - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target is not supported yet.
199> - Enabling this feature/cfg for targets where privileged instructions are obviously unavailable (e.g., Linux) will result in a compile error.
200> - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target supports privileged instructions but the build rejected.
201> - Enabling both this feature/cfg and `critical-section` feature will result in a compile error.
202> - When both this feature/cfg and `unsafe-assume-privileged` feature (or `portable_atomic_unsafe_assume_privileged` cfg) are enabled, this feature/cfg is preferred.
203
204</div>
205
206- <a name="optional-features-unsafe-assume-privileged"></a><a name="optional-cfg-unsafe-assume-privileged"></a>**`unsafe-assume-privileged` feature / `portable_atomic_unsafe_assume_privileged` cfg**<br>
207 Similar to `unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg, but only assumes about availability of privileged instructions required to disable interrupts.
208
209 - When both this feature/cfg and enabled-by-default `fallback` feature is enabled, this crate provides atomic types with larger than the width supported by native instructions by using global locks with disabling interrupts.
210
211<div class="rustdoc-alert rustdoc-alert-warning">
212
213> **⚠ Warning**
214>
215> This feature/cfg is `unsafe`, and except for being sound in multi-core systems, this has the same safety requirements as [`unsafe-assume-single-core` feature / `portable_atomic_unsafe_assume_single_core` cfg](#optional-features-unsafe-assume-single-core).
216
217</div>
218
219<div class="rustdoc-alert rustdoc-alert-note">
220
221> **ⓘ Note**
222>
223> - It is **very strongly discouraged** to enable this feature/cfg in libraries that depend on `portable-atomic`.
224>
225> The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature/cfg. (However, it may make sense to enable this feature/cfg by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer.)
226> - Enabling this feature/cfg for unsupported targets will result in a compile error.
227> - This requires atomic CAS (`cfg(target_has_atomic = "ptr")` or `cfg_no_atomic_cas!`).
228> - Arm, RISC-V, and Xtensa are currently supported.
229> - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target is not supported yet.
230> - Enabling this feature/cfg for targets where privileged instructions are obviously unavailable (e.g., Linux) will result in a compile error.
231> - Feel free to [submit an issue](https://github.com/taiki-e/portable-atomic/issues/new) if your target supports privileged instructions but the build rejected.
232> - Enabling both this feature/cfg and `critical-section` feature will result in a compile error.
233> - When both this feature/cfg and `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg) are enabled, `unsafe-assume-single-core` is preferred.
234
235</div>
236
237- <a name="optional-cfg-no-outline-atomics"></a>**`portable_atomic_no_outline_atomics` cfg**<br>
238 Disable dynamic dispatching by run-time CPU feature detection.
239
240 Dynamic dispatching by run-time CPU feature detection allows maintaining support for older CPUs while using features that are not supported on older CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).
241
242 See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).
243
244<div class="rustdoc-alert rustdoc-alert-note">
245
246> **ⓘ Note**
247>
248> - If the required target features are enabled at compile-time, dynamic dispatching is automatically disabled and the atomic operations are inlined.
249> - This is compatible with no-std (as with all features except `std`).
250> - On some targets, run-time detection is disabled by default mainly for compatibility with incomplete build environments or support for it is experimental, and can be enabled by `portable_atomic_outline_atomics` cfg. (When both cfg are enabled, `*_no_*` cfg is preferred.)
251> - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (However, portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)
252> - Dynamic detection is currently only supported in x86_64, AArch64, Arm, RISC-V, Arm64EC, and powerpc64. Enabling this cfg for unsupported architectures will result in a compile error.
253
254</div>
255
256## Related Projects
257
258- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
259- [atomic-memcpy]: Byte-wise atomic memcpy.
260
261[#60]: https://github.com/taiki-e/portable-atomic/issues/60
262[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
263[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
264[critical-section]: https://github.com/rust-embedded/critical-section
265[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
266[serde]: https://github.com/serde-rs/serde
267
268<!-- tidy:sync-markdown-to-rustdoc:end -->
269*/
270
271#![no_std]
272#![doc(test(
273 no_crate_inject,
274 attr(
275 deny(warnings, rust_2018_idioms, single_use_lifetimes),
276 allow(dead_code, unused_variables)
277 )
278))]
279#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
280#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
281#![warn(
282 // Lints that may help when writing public library.
283 missing_debug_implementations,
284 // missing_docs,
285 clippy::alloc_instead_of_core,
286 clippy::exhaustive_enums,
287 clippy::exhaustive_structs,
288 clippy::impl_trait_in_params,
289 clippy::missing_inline_in_public_items,
290 clippy::std_instead_of_alloc,
291 clippy::std_instead_of_core,
292 // Code outside of cfg(feature = "float") shouldn't use float.
293 clippy::float_arithmetic,
294)]
295#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
296#![cfg_attr(portable_atomic_no_strict_provenance, allow(unstable_name_collisions))]
297#![allow(clippy::inline_always, clippy::used_underscore_items)]
298// asm_experimental_arch
299// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
300// On tier 2 platforms (powerpc64), we use cfg set by build script to
301// determine whether this feature is available or not.
302#![cfg_attr(
303 all(
304 not(portable_atomic_no_asm),
305 any(
306 target_arch = "avr",
307 target_arch = "msp430",
308 all(
309 target_arch = "xtensa",
310 any(
311 portable_atomic_unsafe_assume_single_core,
312 portable_atomic_unsafe_assume_privileged,
313 ),
314 ),
315 all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
316 ),
317 ),
318 feature(asm_experimental_arch)
319)]
320// f16/f128
321// cfg is unstable and explicitly enabled by the user
322#![cfg_attr(portable_atomic_unstable_f16, feature(f16))]
323#![cfg_attr(portable_atomic_unstable_f128, feature(f128))]
324// Old nightly only
325// These features are already stabilized or have already been removed from compilers,
326// and can safely be enabled for old nightly as long as version detection works.
327// - cfg(target_has_atomic)
328// - asm! on AArch64, Arm, RISC-V, x86, x86_64, Arm64EC, s390x
329// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
330// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
331// This also helps us test that our assembly code works with the minimum external
332// LLVM version of the first rustc version that inline assembly stabilized.
333#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
334#![cfg_attr(
335 all(
336 portable_atomic_unstable_asm,
337 any(
338 target_arch = "aarch64",
339 target_arch = "arm",
340 target_arch = "riscv32",
341 target_arch = "riscv64",
342 target_arch = "x86",
343 target_arch = "x86_64",
344 ),
345 ),
346 feature(asm)
347)]
348#![cfg_attr(
349 all(
350 portable_atomic_unstable_asm_experimental_arch,
351 any(target_arch = "arm64ec", target_arch = "s390x"),
352 ),
353 feature(asm_experimental_arch)
354)]
355#![cfg_attr(
356 all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
357 feature(llvm_asm)
358)]
359#![cfg_attr(
360 all(
361 target_arch = "arm",
362 portable_atomic_unstable_isa_attribute,
363 any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged),
364 not(any(target_feature = "v7", portable_atomic_target_feature = "v7")),
365 not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
366 ),
367 feature(isa_attribute)
368)]
369// Miri and/or ThreadSanitizer only
370// They do not support inline assembly, so we need to use unstable features instead.
371// Since they require nightly compilers anyway, we can use the unstable features.
372// This is not an ideal situation, but it is still better than always using lock-based
373// fallback and causing memory ordering problems to be missed by these checkers.
374#![cfg_attr(
375 all(
376 any(
377 target_arch = "aarch64",
378 target_arch = "arm64ec",
379 target_arch = "powerpc64",
380 target_arch = "s390x",
381 ),
382 any(miri, portable_atomic_sanitize_thread),
383 ),
384 allow(internal_features)
385)]
386#![cfg_attr(
387 all(
388 any(
389 target_arch = "aarch64",
390 target_arch = "arm64ec",
391 target_arch = "powerpc64",
392 target_arch = "s390x",
393 ),
394 any(miri, portable_atomic_sanitize_thread),
395 ),
396 feature(core_intrinsics)
397)]
398// docs.rs only (cfg is enabled by docs.rs, not build script)
399#![cfg_attr(docsrs, feature(doc_cfg))]
400#![cfg_attr(docsrs, doc(auto_cfg = false))]
401#![cfg_attr(
402 all(
403 portable_atomic_no_atomic_load_store,
404 not(any(
405 target_arch = "avr",
406 target_arch = "bpf",
407 target_arch = "msp430",
408 target_arch = "riscv32",
409 target_arch = "riscv64",
410 feature = "critical-section",
411 portable_atomic_unsafe_assume_single_core,
412 )),
413 ),
414 allow(unused_imports, unused_macros, clippy::unused_trait_names)
415)]
416
417#[cfg(any(test, feature = "std"))]
418extern crate std;
419
420#[macro_use]
421mod cfgs;
422#[cfg(target_pointer_width = "16")]
423pub use self::{cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
424#[cfg(target_pointer_width = "32")]
425pub use self::{cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
426#[cfg(target_pointer_width = "64")]
427pub use self::{cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};
428#[cfg(target_pointer_width = "128")]
429pub use self::{cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
430
431// There are currently no 128-bit or higher builtin targets.
432// (Although some of our generic code is written with the future
433// addition of 128-bit targets in mind.)
434// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
435#[cfg(not(any(
436 target_pointer_width = "16",
437 target_pointer_width = "32",
438 target_pointer_width = "64",
439)))]
440compile_error!(
441 "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
442 if you need support for others, \
443 please submit an issue at <https://github.com/taiki-e/portable-atomic>"
444);
445
446// Reject unsupported architectures.
447#[cfg(portable_atomic_unsafe_assume_single_core)]
448#[cfg(not(any(
449 target_arch = "arm",
450 target_arch = "avr",
451 target_arch = "msp430",
452 target_arch = "riscv32",
453 target_arch = "riscv64",
454 target_arch = "xtensa",
455)))]
456compile_error!(
457 "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
458 is not supported yet on this architecture;\n\
459 if you need unsafe-assume-{single-core,privileged} support for this target,\n\
460 please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>"
461);
462// unsafe-assume-single-core is accepted on AVR/MSP430, but
463// unsafe-assume-privileged on them is really useless on them since they are
464// always single-core, so rejected here.
465#[cfg(portable_atomic_unsafe_assume_privileged)]
466#[cfg(not(any(
467 target_arch = "arm",
468 target_arch = "riscv32",
469 target_arch = "riscv64",
470 target_arch = "xtensa",
471)))]
472compile_error!(
473 "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) \
474 is not supported yet on this architecture;\n\
475 if you need unsafe-assume-{single-core,privileged} support for this target,\n\
476 please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>"
477);
478// unsafe-assume-privileged requires CAS.
479#[cfg(portable_atomic_unsafe_assume_privileged)]
480cfg_no_atomic_cas! {
481 compile_error!(
482 "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) \
483 requires atomic CAS"
484 );
485}
486// Reject targets where privileged instructions are obviously unavailable.
487// TODO: Some embedded OSes should probably be accepted here.
488#[cfg(any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged))]
489#[cfg(any(
490 target_arch = "arm",
491 target_arch = "avr",
492 target_arch = "msp430",
493 target_arch = "riscv32",
494 target_arch = "riscv64",
495 target_arch = "xtensa",
496))]
497#[cfg_attr(
498 portable_atomic_no_cfg_target_has_atomic,
499 cfg(all(not(portable_atomic_no_atomic_cas), not(target_os = "none")))
500)]
501#[cfg_attr(
502 not(portable_atomic_no_cfg_target_has_atomic),
503 cfg(all(target_has_atomic = "ptr", not(target_os = "none")))
504)]
505compile_error!(
506 "`portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature) \
507 is not compatible with target where privileged instructions are obviously unavailable;\n\
508 if you need unsafe-assume-{single-core,privileged} support for this target,\n\
509 please submit an issue at <https://github.com/taiki-e/portable-atomic/issues/new>\n\
510 see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
511);
512
513#[cfg(portable_atomic_no_outline_atomics)]
514#[cfg(not(any(
515 target_arch = "aarch64",
516 target_arch = "arm",
517 target_arch = "arm64ec",
518 target_arch = "powerpc64",
519 target_arch = "riscv32",
520 target_arch = "riscv64",
521 target_arch = "x86_64",
522)))]
523compile_error!("`portable_atomic_no_outline_atomics` cfg does not compatible with this target");
524#[cfg(portable_atomic_outline_atomics)]
525#[cfg(not(any(
526 target_arch = "aarch64",
527 target_arch = "powerpc64",
528 target_arch = "riscv32",
529 target_arch = "riscv64",
530)))]
531compile_error!("`portable_atomic_outline_atomics` cfg does not compatible with this target");
532
533#[cfg(portable_atomic_disable_fiq)]
534#[cfg(not(all(
535 target_arch = "arm",
536 not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
537)))]
538compile_error!(
539 "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on Arm (except for M-Profile architectures)"
540);
541#[cfg(portable_atomic_s_mode)]
542#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
543compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
544#[cfg(portable_atomic_force_amo)]
545#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
546compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");
547
548#[cfg(portable_atomic_disable_fiq)]
549#[cfg(not(any(
550 portable_atomic_unsafe_assume_single_core,
551 portable_atomic_unsafe_assume_privileged,
552)))]
553compile_error!(
554 "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature)"
555);
556#[cfg(portable_atomic_s_mode)]
557#[cfg(not(any(
558 portable_atomic_unsafe_assume_single_core,
559 portable_atomic_unsafe_assume_privileged,
560)))]
561compile_error!(
562 "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature)"
563);
564#[cfg(portable_atomic_force_amo)]
565#[cfg(not(portable_atomic_unsafe_assume_single_core))]
566compile_error!(
567 "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
568);
569#[cfg(portable_atomic_unsafe_assume_privileged)]
570#[cfg(not(feature = "fallback"))]
571compile_error!(
572 "`portable_atomic_unsafe_assume_privileged` cfg (`unsafe-assume-privileged` feature) may only be used together with `fallback` feature"
573);
574
575#[cfg(all(
576 any(portable_atomic_unsafe_assume_single_core, portable_atomic_unsafe_assume_privileged),
577 feature = "critical-section"
578))]
579compile_error!(
580 "you may not enable `critical-section` feature and `portable_atomic_unsafe_assume_{single_core,privileged}` cfg (`unsafe-assume-{single-core,privileged}` feature) at the same time"
581);
582
583#[cfg(feature = "require-cas")]
584#[cfg_attr(
585 portable_atomic_no_cfg_target_has_atomic,
586 cfg(not(any(
587 not(portable_atomic_no_atomic_cas),
588 target_arch = "avr",
589 target_arch = "msp430",
590 feature = "critical-section",
591 portable_atomic_unsafe_assume_single_core,
592 )))
593)]
594#[cfg_attr(
595 not(portable_atomic_no_cfg_target_has_atomic),
596 cfg(not(any(
597 target_has_atomic = "ptr",
598 target_arch = "avr",
599 target_arch = "msp430",
600 feature = "critical-section",
601 portable_atomic_unsafe_assume_single_core,
602 )))
603)]
604compile_error!(
605 "dependents require atomic CAS but not available on this target by default;\n\
606 consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg).\n\
607 see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
608);
609
610#[macro_use]
611mod utils;
612
613#[cfg(test)]
614#[macro_use]
615mod tests;
616
617#[doc(no_inline)]
618pub use core::sync::atomic::Ordering;
619
620// LLVM doesn't support fence/compiler_fence for MSP430.
621#[cfg(target_arch = "msp430")]
622pub use self::imp::msp430::{compiler_fence, fence};
623#[doc(no_inline)]
624#[cfg(not(target_arch = "msp430"))]
625pub use core::sync::atomic::{compiler_fence, fence};
626
627mod imp;
628
629pub mod hint {
630 //! Re-export of the [`core::hint`] module.
631 //!
632 //! The only difference from the [`core::hint`] module is that [`spin_loop`]
633 //! is available in all rust versions that this crate supports.
634 //!
635 //! ```
636 //! use portable_atomic::hint;
637 //!
638 //! hint::spin_loop();
639 //! ```
640
641 #[doc(no_inline)]
642 pub use core::hint::*;
643
644 /// Emits a machine instruction to signal the processor that it is running in
645 /// a busy-wait spin-loop ("spin lock").
646 ///
647 /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
648 /// for example, saving power or switching hyper-threads.
649 ///
650 /// This function is different from [`thread::yield_now`] which directly
651 /// yields to the system's scheduler, whereas `spin_loop` does not interact
652 /// with the operating system.
653 ///
654 /// A common use case for `spin_loop` is implementing bounded optimistic
655 /// spinning in a CAS loop in synchronization primitives. To avoid problems
656 /// like priority inversion, it is strongly recommended that the spin loop is
657 /// terminated after a finite amount of iterations and an appropriate blocking
658 /// syscall is made.
659 ///
660 /// **Note:** On platforms that do not support receiving spin-loop hints this
661 /// function does not do anything at all.
662 ///
663 /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
664 #[inline]
665 pub fn spin_loop() {
666 #[allow(deprecated)]
667 core::sync::atomic::spin_loop_hint();
668 }
669}
670
671#[cfg(doc)]
672use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
673use core::{fmt, ptr};
674
675cfg_has_atomic_8! {
676/// A boolean type which can be safely shared between threads.
677///
678/// This type has the same in-memory representation as a [`bool`].
679///
680/// If the compiler and the platform support atomic loads and stores of `u8`,
681/// this type is a wrapper for the standard library's
682/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
683/// but the compiler does not, atomic operations are implemented using inline
684/// assembly.
685#[repr(C, align(1))]
686pub struct AtomicBool {
687 v: core::cell::UnsafeCell<u8>,
688}
689
690impl Default for AtomicBool {
691 /// Creates an `AtomicBool` initialized to `false`.
692 #[inline]
693 fn default() -> Self {
694 Self::new(false)
695 }
696}
697
698impl From<bool> for AtomicBool {
699 /// Converts a `bool` into an `AtomicBool`.
700 #[inline]
701 fn from(b: bool) -> Self {
702 Self::new(b)
703 }
704}
705
706// Send is implicitly implemented.
707// SAFETY: any data races are prevented by disabling interrupts or
708// atomic intrinsics (see module-level comments).
709unsafe impl Sync for AtomicBool {}
710
711// UnwindSafe is implicitly implemented.
712#[cfg(not(portable_atomic_no_core_unwind_safe))]
713impl core::panic::RefUnwindSafe for AtomicBool {}
714#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
715impl std::panic::RefUnwindSafe for AtomicBool {}
716
717impl_debug_and_serde!(AtomicBool);
718
719impl AtomicBool {
720 /// Creates a new `AtomicBool`.
721 ///
722 /// # Examples
723 ///
724 /// ```
725 /// use portable_atomic::AtomicBool;
726 ///
727 /// let atomic_true = AtomicBool::new(true);
728 /// let atomic_false = AtomicBool::new(false);
729 /// ```
730 #[inline]
731 #[must_use]
732 pub const fn new(v: bool) -> Self {
733 static_assert_layout!(AtomicBool, bool);
734 Self { v: core::cell::UnsafeCell::new(v as u8) }
735 }
736
737 // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
738 const_fn! {
739 const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
740 /// Creates a new `AtomicBool` from a pointer.
741 ///
742 /// This is `const fn` on Rust 1.83+.
743 ///
744 /// # Safety
745 ///
746 /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
747 /// be bigger than `align_of::<bool>()`).
748 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
749 /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
750 /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned
751 /// value (or vice-versa).
752 /// * In other words, time periods where the value is accessed atomically may not overlap
753 /// with periods where the value is accessed non-atomically.
754 /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
755 /// duration of lifetime `'a`. Most use cases should be able to follow this guideline.
756 /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done
757 /// from the same thread.
758 /// * If this atomic type is *not* lock-free:
759 /// * Any accesses to the value behind `ptr` must have a happens-before relationship
760 /// with accesses via the returned value (or vice-versa).
761 /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
762 /// be compatible with operations performed by this atomic type.
763 /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
764 /// these are not supported by the memory model.
765 ///
766 /// [valid]: core::ptr#safety
767 #[inline]
768 #[must_use]
769 pub const unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
770 #[allow(clippy::cast_ptr_alignment)]
771 // SAFETY: guaranteed by the caller
772 unsafe { &*(ptr as *mut Self) }
773 }
774 }
775
776 /// Returns `true` if operations on values of this type are lock-free.
777 ///
778 /// If the compiler or the platform doesn't support the necessary
779 /// atomic instructions, global locks for every potentially
780 /// concurrent atomic operation will be used.
781 ///
782 /// # Examples
783 ///
784 /// ```
785 /// use portable_atomic::AtomicBool;
786 ///
787 /// let is_lock_free = AtomicBool::is_lock_free();
788 /// ```
789 #[inline]
790 #[must_use]
791 pub fn is_lock_free() -> bool {
792 imp::AtomicU8::is_lock_free()
793 }
794
795 /// Returns `true` if operations on values of this type are lock-free.
796 ///
797 /// If the compiler or the platform doesn't support the necessary
798 /// atomic instructions, global locks for every potentially
799 /// concurrent atomic operation will be used.
800 ///
801 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
802 /// this type may be lock-free even if the function returns false.
803 ///
804 /// # Examples
805 ///
806 /// ```
807 /// use portable_atomic::AtomicBool;
808 ///
809 /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
810 /// ```
811 #[inline]
812 #[must_use]
813 pub const fn is_always_lock_free() -> bool {
814 imp::AtomicU8::IS_ALWAYS_LOCK_FREE
815 }
816 #[cfg(test)]
817 const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
818
819 const_fn! {
820 const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
821 /// Returns a mutable reference to the underlying [`bool`].
822 ///
823 /// This is safe because the mutable reference guarantees that no other threads are
824 /// concurrently accessing the atomic data.
825 ///
826 /// This is `const fn` on Rust 1.83+.
827 ///
828 /// # Examples
829 ///
830 /// ```
831 /// use portable_atomic::{AtomicBool, Ordering};
832 ///
833 /// let mut some_bool = AtomicBool::new(true);
834 /// assert_eq!(*some_bool.get_mut(), true);
835 /// *some_bool.get_mut() = false;
836 /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
837 /// ```
838 #[inline]
839 pub const fn get_mut(&mut self) -> &mut bool {
840 // SAFETY: the mutable reference guarantees unique ownership.
841 unsafe { &mut *self.as_ptr() }
842 }
843 }
844
845 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
846 // https://github.com/rust-lang/rust/issues/76314
847
848 const_fn! {
849 const_if: #[cfg(not(portable_atomic_no_const_transmute))];
850 /// Consumes the atomic and returns the contained value.
851 ///
852 /// This is safe because passing `self` by value guarantees that no other threads are
853 /// concurrently accessing the atomic data.
854 ///
855 /// This is `const fn` on Rust 1.56+.
856 ///
857 /// # Examples
858 ///
859 /// ```
860 /// use portable_atomic::AtomicBool;
861 ///
862 /// let some_bool = AtomicBool::new(true);
863 /// assert_eq!(some_bool.into_inner(), true);
864 /// ```
865 #[inline]
866 pub const fn into_inner(self) -> bool {
867 // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
868 // so they can be safely transmuted.
869 // (const UnsafeCell::into_inner is unstable)
870 unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
871 }
872 }
873
874 /// Loads a value from the bool.
875 ///
876 /// `load` takes an [`Ordering`] argument which describes the memory ordering
877 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
878 ///
879 /// # Panics
880 ///
881 /// Panics if `order` is [`Release`] or [`AcqRel`].
882 ///
883 /// # Examples
884 ///
885 /// ```
886 /// use portable_atomic::{AtomicBool, Ordering};
887 ///
888 /// let some_bool = AtomicBool::new(true);
889 ///
890 /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
891 /// ```
892 #[inline]
893 #[cfg_attr(
894 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
895 track_caller
896 )]
897 pub fn load(&self, order: Ordering) -> bool {
898 self.as_atomic_u8().load(order) != 0
899 }
900
901 /// Stores a value into the bool.
902 ///
903 /// `store` takes an [`Ordering`] argument which describes the memory ordering
904 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
905 ///
906 /// # Panics
907 ///
908 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
909 ///
910 /// # Examples
911 ///
912 /// ```
913 /// use portable_atomic::{AtomicBool, Ordering};
914 ///
915 /// let some_bool = AtomicBool::new(true);
916 ///
917 /// some_bool.store(false, Ordering::Relaxed);
918 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
919 /// ```
920 #[inline]
921 #[cfg_attr(
922 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
923 track_caller
924 )]
925 pub fn store(&self, val: bool, order: Ordering) {
926 self.as_atomic_u8().store(val as u8, order);
927 }
928
929 cfg_has_atomic_cas_or_amo32! {
930 /// Stores a value into the bool, returning the previous value.
931 ///
932 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
933 /// of this operation. All ordering modes are possible. Note that using
934 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
935 /// using [`Release`] makes the load part [`Relaxed`].
936 ///
937 /// # Examples
938 ///
939 /// ```
940 /// use portable_atomic::{AtomicBool, Ordering};
941 ///
942 /// let some_bool = AtomicBool::new(true);
943 ///
944 /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
945 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
946 /// ```
947 #[inline]
948 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
949 pub fn swap(&self, val: bool, order: Ordering) -> bool {
950 #[cfg(any(
951 target_arch = "riscv32",
952 target_arch = "riscv64",
953 target_arch = "loongarch32",
954 target_arch = "loongarch64",
955 ))]
956 {
957 // See https://github.com/rust-lang/rust/pull/114034 for details.
958 // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
959 // https://godbolt.org/z/ofbGGdx44
960 if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
961 }
962 #[cfg(not(any(
963 target_arch = "riscv32",
964 target_arch = "riscv64",
965 target_arch = "loongarch32",
966 target_arch = "loongarch64",
967 )))]
968 {
969 self.as_atomic_u8().swap(val as u8, order) != 0
970 }
971 }
972
973 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
974 ///
975 /// The return value is a result indicating whether the new value was written and containing
976 /// the previous value. On success this value is guaranteed to be equal to `current`.
977 ///
978 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
979 /// ordering of this operation. `success` describes the required ordering for the
980 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
981 /// `failure` describes the required ordering for the load operation that takes place when
982 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
983 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
984 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
985 ///
986 /// # Panics
987 ///
988 /// Panics if `failure` is [`Release`], [`AcqRel`].
989 ///
990 /// # Examples
991 ///
992 /// ```
993 /// use portable_atomic::{AtomicBool, Ordering};
994 ///
995 /// let some_bool = AtomicBool::new(true);
996 ///
997 /// assert_eq!(
998 /// some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
999 /// Ok(true)
1000 /// );
1001 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
1002 ///
1003 /// assert_eq!(
1004 /// some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
1005 /// Err(false)
1006 /// );
1007 /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
1008 /// ```
1009 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1010 #[inline]
1011 #[cfg_attr(
1012 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1013 track_caller
1014 )]
1015 pub fn compare_exchange(
1016 &self,
1017 current: bool,
1018 new: bool,
1019 success: Ordering,
1020 failure: Ordering,
1021 ) -> Result<bool, bool> {
1022 #[cfg(any(
1023 target_arch = "riscv32",
1024 target_arch = "riscv64",
1025 target_arch = "loongarch32",
1026 target_arch = "loongarch64",
1027 ))]
1028 {
1029 // See https://github.com/rust-lang/rust/pull/114034 for details.
1030 // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
1031 // https://godbolt.org/z/ofbGGdx44
1032 crate::utils::assert_compare_exchange_ordering(success, failure);
1033 let order = crate::utils::upgrade_success_ordering(success, failure);
1034 let old = if current == new {
1035 // This is a no-op, but we still need to perform the operation
1036 // for memory ordering reasons.
1037 self.fetch_or(false, order)
1038 } else {
1039 // This sets the value to the new one and returns the old one.
1040 self.swap(new, order)
1041 };
1042 if old == current { Ok(old) } else { Err(old) }
1043 }
1044 #[cfg(not(any(
1045 target_arch = "riscv32",
1046 target_arch = "riscv64",
1047 target_arch = "loongarch32",
1048 target_arch = "loongarch64",
1049 )))]
1050 {
1051 match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
1052 Ok(x) => Ok(x != 0),
1053 Err(x) => Err(x != 0),
1054 }
1055 }
1056 }
1057
1058 /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
1059 ///
1060 /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
1061 /// comparison succeeds, which can result in more efficient code on some platforms. The
1062 /// return value is a result indicating whether the new value was written and containing the
1063 /// previous value.
1064 ///
1065 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1066 /// ordering of this operation. `success` describes the required ordering for the
1067 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1068 /// `failure` describes the required ordering for the load operation that takes place when
1069 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1070 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1071 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1072 ///
1073 /// # Panics
1074 ///
1075 /// Panics if `failure` is [`Release`], [`AcqRel`].
1076 ///
1077 /// # Examples
1078 ///
1079 /// ```
1080 /// use portable_atomic::{AtomicBool, Ordering};
1081 ///
1082 /// let val = AtomicBool::new(false);
1083 ///
1084 /// let new = true;
1085 /// let mut old = val.load(Ordering::Relaxed);
1086 /// loop {
1087 /// match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1088 /// Ok(_) => break,
1089 /// Err(x) => old = x,
1090 /// }
1091 /// }
1092 /// ```
1093 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1094 #[inline]
1095 #[cfg_attr(
1096 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1097 track_caller
1098 )]
1099 pub fn compare_exchange_weak(
1100 &self,
1101 current: bool,
1102 new: bool,
1103 success: Ordering,
1104 failure: Ordering,
1105 ) -> Result<bool, bool> {
1106 #[cfg(any(
1107 target_arch = "riscv32",
1108 target_arch = "riscv64",
1109 target_arch = "loongarch32",
1110 target_arch = "loongarch64",
1111 ))]
1112 {
1113 // See https://github.com/rust-lang/rust/pull/114034 for details.
1114 // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L249
1115 // https://godbolt.org/z/ofbGGdx44
1116 self.compare_exchange(current, new, success, failure)
1117 }
1118 #[cfg(not(any(
1119 target_arch = "riscv32",
1120 target_arch = "riscv64",
1121 target_arch = "loongarch32",
1122 target_arch = "loongarch64",
1123 )))]
1124 {
1125 match self
1126 .as_atomic_u8()
1127 .compare_exchange_weak(current as u8, new as u8, success, failure)
1128 {
1129 Ok(x) => Ok(x != 0),
1130 Err(x) => Err(x != 0),
1131 }
1132 }
1133 }
1134
1135 /// Logical "and" with a boolean value.
1136 ///
1137 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1138 /// the new value to the result.
1139 ///
1140 /// Returns the previous value.
1141 ///
1142 /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
1143 /// of this operation. All ordering modes are possible. Note that using
1144 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1145 /// using [`Release`] makes the load part [`Relaxed`].
1146 ///
1147 /// # Examples
1148 ///
1149 /// ```
1150 /// use portable_atomic::{AtomicBool, Ordering};
1151 ///
1152 /// let foo = AtomicBool::new(true);
1153 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
1154 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1155 ///
1156 /// let foo = AtomicBool::new(true);
1157 /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
1158 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1159 ///
1160 /// let foo = AtomicBool::new(false);
1161 /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
1162 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1163 /// ```
1164 #[inline]
1165 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1166 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
1167 self.as_atomic_u8().fetch_and(val as u8, order) != 0
1168 }
1169
1170 /// Logical "and" with a boolean value.
1171 ///
1172 /// Performs a logical "and" operation on the current value and the argument `val`, and sets
1173 /// the new value to the result.
1174 ///
1175 /// Unlike `fetch_and`, this does not return the previous value.
1176 ///
1177 /// `and` takes an [`Ordering`] argument which describes the memory ordering
1178 /// of this operation. All ordering modes are possible. Note that using
1179 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1180 /// using [`Release`] makes the load part [`Relaxed`].
1181 ///
1182 /// This function may generate more efficient code than `fetch_and` on some platforms.
1183 ///
1184 /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
1185 /// - MSP430: `and` instead of disabling interrupts
1186 ///
1187 /// Note: On x86/x86_64, the use of either function should not usually
1188 /// affect the generated code, because LLVM can properly optimize the case
1189 /// where the result is unused.
1190 ///
1191 /// # Examples
1192 ///
1193 /// ```
1194 /// use portable_atomic::{AtomicBool, Ordering};
1195 ///
1196 /// let foo = AtomicBool::new(true);
1197 /// foo.and(false, Ordering::SeqCst);
1198 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1199 ///
1200 /// let foo = AtomicBool::new(true);
1201 /// foo.and(true, Ordering::SeqCst);
1202 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1203 ///
1204 /// let foo = AtomicBool::new(false);
1205 /// foo.and(false, Ordering::SeqCst);
1206 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1207 /// ```
1208 #[inline]
1209 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1210 pub fn and(&self, val: bool, order: Ordering) {
1211 self.as_atomic_u8().and(val as u8, order);
1212 }
1213
1214 /// Logical "nand" with a boolean value.
1215 ///
1216 /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
1217 /// the new value to the result.
1218 ///
1219 /// Returns the previous value.
1220 ///
1221 /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
1222 /// of this operation. All ordering modes are possible. Note that using
1223 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1224 /// using [`Release`] makes the load part [`Relaxed`].
1225 ///
1226 /// # Examples
1227 ///
1228 /// ```
1229 /// use portable_atomic::{AtomicBool, Ordering};
1230 ///
1231 /// let foo = AtomicBool::new(true);
1232 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
1233 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1234 ///
1235 /// let foo = AtomicBool::new(true);
1236 /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
1237 /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
1238 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1239 ///
1240 /// let foo = AtomicBool::new(false);
1241 /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
1242 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1243 /// ```
1244 #[inline]
1245 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1246 pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
1247 // https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L973-L985
1248 if val {
1249 // !(x & true) == !x
1250 // We must invert the bool.
1251 self.fetch_xor(true, order)
1252 } else {
1253 // !(x & false) == true
1254 // We must set the bool to true.
1255 self.swap(true, order)
1256 }
1257 }
1258
1259 /// Logical "or" with a boolean value.
1260 ///
1261 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1262 /// new value to the result.
1263 ///
1264 /// Returns the previous value.
1265 ///
1266 /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
1267 /// of this operation. All ordering modes are possible. Note that using
1268 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1269 /// using [`Release`] makes the load part [`Relaxed`].
1270 ///
1271 /// # Examples
1272 ///
1273 /// ```
1274 /// use portable_atomic::{AtomicBool, Ordering};
1275 ///
1276 /// let foo = AtomicBool::new(true);
1277 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
1278 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1279 ///
1280 /// let foo = AtomicBool::new(true);
1281 /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
1282 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1283 ///
1284 /// let foo = AtomicBool::new(false);
1285 /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
1286 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1287 /// ```
1288 #[inline]
1289 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1290 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
1291 self.as_atomic_u8().fetch_or(val as u8, order) != 0
1292 }
1293
1294 /// Logical "or" with a boolean value.
1295 ///
1296 /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
1297 /// new value to the result.
1298 ///
1299 /// Unlike `fetch_or`, this does not return the previous value.
1300 ///
1301 /// `or` takes an [`Ordering`] argument which describes the memory ordering
1302 /// of this operation. All ordering modes are possible. Note that using
1303 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1304 /// using [`Release`] makes the load part [`Relaxed`].
1305 ///
1306 /// This function may generate more efficient code than `fetch_or` on some platforms.
1307 ///
1308 /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
1309 /// - MSP430: `bis` instead of disabling interrupts
1310 ///
1311 /// Note: On x86/x86_64, the use of either function should not usually
1312 /// affect the generated code, because LLVM can properly optimize the case
1313 /// where the result is unused.
1314 ///
1315 /// # Examples
1316 ///
1317 /// ```
1318 /// use portable_atomic::{AtomicBool, Ordering};
1319 ///
1320 /// let foo = AtomicBool::new(true);
1321 /// foo.or(false, Ordering::SeqCst);
1322 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1323 ///
1324 /// let foo = AtomicBool::new(true);
1325 /// foo.or(true, Ordering::SeqCst);
1326 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1327 ///
1328 /// let foo = AtomicBool::new(false);
1329 /// foo.or(false, Ordering::SeqCst);
1330 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1331 /// ```
1332 #[inline]
1333 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1334 pub fn or(&self, val: bool, order: Ordering) {
1335 self.as_atomic_u8().or(val as u8, order);
1336 }
1337
1338 /// Logical "xor" with a boolean value.
1339 ///
1340 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1341 /// the new value to the result.
1342 ///
1343 /// Returns the previous value.
1344 ///
1345 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
1346 /// of this operation. All ordering modes are possible. Note that using
1347 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1348 /// using [`Release`] makes the load part [`Relaxed`].
1349 ///
1350 /// # Examples
1351 ///
1352 /// ```
1353 /// use portable_atomic::{AtomicBool, Ordering};
1354 ///
1355 /// let foo = AtomicBool::new(true);
1356 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
1357 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1358 ///
1359 /// let foo = AtomicBool::new(true);
1360 /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
1361 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1362 ///
1363 /// let foo = AtomicBool::new(false);
1364 /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
1365 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1366 /// ```
1367 #[inline]
1368 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1369 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
1370 self.as_atomic_u8().fetch_xor(val as u8, order) != 0
1371 }
1372
1373 /// Logical "xor" with a boolean value.
1374 ///
1375 /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
1376 /// the new value to the result.
1377 ///
1378 /// Unlike `fetch_xor`, this does not return the previous value.
1379 ///
1380 /// `xor` takes an [`Ordering`] argument which describes the memory ordering
1381 /// of this operation. All ordering modes are possible. Note that using
1382 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1383 /// using [`Release`] makes the load part [`Relaxed`].
1384 ///
1385 /// This function may generate more efficient code than `fetch_xor` on some platforms.
1386 ///
1387 /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1388 /// - MSP430: `xor` instead of disabling interrupts
1389 ///
1390 /// Note: On x86/x86_64, the use of either function should not usually
1391 /// affect the generated code, because LLVM can properly optimize the case
1392 /// where the result is unused.
1393 ///
1394 /// # Examples
1395 ///
1396 /// ```
1397 /// use portable_atomic::{AtomicBool, Ordering};
1398 ///
1399 /// let foo = AtomicBool::new(true);
1400 /// foo.xor(false, Ordering::SeqCst);
1401 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1402 ///
1403 /// let foo = AtomicBool::new(true);
1404 /// foo.xor(true, Ordering::SeqCst);
1405 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1406 ///
1407 /// let foo = AtomicBool::new(false);
1408 /// foo.xor(false, Ordering::SeqCst);
1409 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1410 /// ```
1411 #[inline]
1412 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1413 pub fn xor(&self, val: bool, order: Ordering) {
1414 self.as_atomic_u8().xor(val as u8, order);
1415 }
1416
1417 /// Logical "not" with a boolean value.
1418 ///
1419 /// Performs a logical "not" operation on the current value, and sets
1420 /// the new value to the result.
1421 ///
1422 /// Returns the previous value.
1423 ///
1424 /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1425 /// of this operation. All ordering modes are possible. Note that using
1426 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1427 /// using [`Release`] makes the load part [`Relaxed`].
1428 ///
1429 /// # Examples
1430 ///
1431 /// ```
1432 /// use portable_atomic::{AtomicBool, Ordering};
1433 ///
1434 /// let foo = AtomicBool::new(true);
1435 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1436 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1437 ///
1438 /// let foo = AtomicBool::new(false);
1439 /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1440 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1441 /// ```
1442 #[inline]
1443 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1444 pub fn fetch_not(&self, order: Ordering) -> bool {
1445 self.fetch_xor(true, order)
1446 }
1447
1448 /// Logical "not" with a boolean value.
1449 ///
1450 /// Performs a logical "not" operation on the current value, and sets
1451 /// the new value to the result.
1452 ///
1453 /// Unlike `fetch_not`, this does not return the previous value.
1454 ///
1455 /// `not` takes an [`Ordering`] argument which describes the memory ordering
1456 /// of this operation. All ordering modes are possible. Note that using
1457 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1458 /// using [`Release`] makes the load part [`Relaxed`].
1459 ///
1460 /// This function may generate more efficient code than `fetch_not` on some platforms.
1461 ///
1462 /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1463 /// - MSP430: `xor` instead of disabling interrupts
1464 ///
1465 /// Note: On x86/x86_64, the use of either function should not usually
1466 /// affect the generated code, because LLVM can properly optimize the case
1467 /// where the result is unused.
1468 ///
1469 /// # Examples
1470 ///
1471 /// ```
1472 /// use portable_atomic::{AtomicBool, Ordering};
1473 ///
1474 /// let foo = AtomicBool::new(true);
1475 /// foo.not(Ordering::SeqCst);
1476 /// assert_eq!(foo.load(Ordering::SeqCst), false);
1477 ///
1478 /// let foo = AtomicBool::new(false);
1479 /// foo.not(Ordering::SeqCst);
1480 /// assert_eq!(foo.load(Ordering::SeqCst), true);
1481 /// ```
1482 #[inline]
1483 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1484 pub fn not(&self, order: Ordering) {
1485 self.xor(true, order);
1486 }
1487
1488 /// Fetches the value, and applies a function to it that returns an optional
1489 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1490 /// returned `Some(_)`, else `Err(previous_value)`.
1491 ///
1492 /// Note: This may call the function multiple times if the value has been
1493 /// changed from other threads in the meantime, as long as the function
1494 /// returns `Some(_)`, but the function will have been applied only once to
1495 /// the stored value.
1496 ///
1497 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1498 /// ordering of this operation. The first describes the required ordering for
1499 /// when the operation finally succeeds while the second describes the
1500 /// required ordering for loads. These correspond to the success and failure
1501 /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1502 ///
1503 /// Using [`Acquire`] as success ordering makes the store part of this
1504 /// operation [`Relaxed`], and using [`Release`] makes the final successful
1505 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1506 /// [`Acquire`] or [`Relaxed`].
1507 ///
1508 /// # Considerations
1509 ///
1510 /// This method is not magic; it is not provided by the hardware.
1511 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1512 /// and suffers from the same drawbacks.
1513 /// In particular, this method will not circumvent the [ABA Problem].
1514 ///
1515 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1516 ///
1517 /// # Panics
1518 ///
1519 /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
1520 ///
1521 /// # Examples
1522 ///
1523 /// ```
1524 /// use portable_atomic::{AtomicBool, Ordering};
1525 ///
1526 /// let x = AtomicBool::new(false);
1527 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1528 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1529 /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1530 /// assert_eq!(x.load(Ordering::SeqCst), false);
1531 /// ```
1532 #[inline]
1533 #[cfg_attr(
1534 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1535 track_caller
1536 )]
1537 pub fn fetch_update<F>(
1538 &self,
1539 set_order: Ordering,
1540 fetch_order: Ordering,
1541 mut f: F,
1542 ) -> Result<bool, bool>
1543 where
1544 F: FnMut(bool) -> Option<bool>,
1545 {
1546 let mut prev = self.load(fetch_order);
1547 while let Some(next) = f(prev) {
1548 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1549 x @ Ok(_) => return x,
1550 Err(next_prev) => prev = next_prev,
1551 }
1552 }
1553 Err(prev)
1554 }
1555 } // cfg_has_atomic_cas_or_amo32!
1556
1557 const_fn! {
1558 // This function is actually `const fn`-compatible on Rust 1.32+,
1559 // but makes `const fn` only on Rust 1.58+ to match other atomic types.
1560 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1561 /// Returns a mutable pointer to the underlying [`bool`].
1562 ///
1563 /// Returning an `*mut` pointer from a shared reference to this atomic is
1564 /// safe because the atomic types work with interior mutability. Any use of
1565 /// the returned raw pointer requires an `unsafe` block and has to uphold
1566 /// the safety requirements. If there is concurrent access, note the following
1567 /// additional safety requirements:
1568 ///
1569 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1570 /// operations on it must be atomic.
1571 /// - Otherwise, any concurrent operations on it must be compatible with
1572 /// operations performed by this atomic type.
1573 ///
1574 /// This is `const fn` on Rust 1.58+.
1575 #[inline]
1576 pub const fn as_ptr(&self) -> *mut bool {
1577 self.v.get() as *mut bool
1578 }
1579 }
1580
1581 #[inline(always)]
1582 fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1583 // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1584 // and both access data in the same way.
1585 unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1586 }
1587}
1588// See https://github.com/taiki-e/portable-atomic/issues/180
1589#[cfg(not(feature = "require-cas"))]
1590cfg_no_atomic_cas! {
1591#[doc(hidden)]
1592#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
1593impl<'a> AtomicBool {
1594 cfg_no_atomic_cas_or_amo32! {
1595 #[inline]
1596 pub fn swap(&self, val: bool, order: Ordering) -> bool
1597 where
1598 &'a Self: HasSwap,
1599 {
1600 unimplemented!()
1601 }
1602 #[inline]
1603 pub fn compare_exchange(
1604 &self,
1605 current: bool,
1606 new: bool,
1607 success: Ordering,
1608 failure: Ordering,
1609 ) -> Result<bool, bool>
1610 where
1611 &'a Self: HasCompareExchange,
1612 {
1613 unimplemented!()
1614 }
1615 #[inline]
1616 pub fn compare_exchange_weak(
1617 &self,
1618 current: bool,
1619 new: bool,
1620 success: Ordering,
1621 failure: Ordering,
1622 ) -> Result<bool, bool>
1623 where
1624 &'a Self: HasCompareExchangeWeak,
1625 {
1626 unimplemented!()
1627 }
1628 #[inline]
1629 pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1630 where
1631 &'a Self: HasFetchAnd,
1632 {
1633 unimplemented!()
1634 }
1635 #[inline]
1636 pub fn and(&self, val: bool, order: Ordering)
1637 where
1638 &'a Self: HasAnd,
1639 {
1640 unimplemented!()
1641 }
1642 #[inline]
1643 pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1644 where
1645 &'a Self: HasFetchNand,
1646 {
1647 unimplemented!()
1648 }
1649 #[inline]
1650 pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1651 where
1652 &'a Self: HasFetchOr,
1653 {
1654 unimplemented!()
1655 }
1656 #[inline]
1657 pub fn or(&self, val: bool, order: Ordering)
1658 where
1659 &'a Self: HasOr,
1660 {
1661 unimplemented!()
1662 }
1663 #[inline]
1664 pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1665 where
1666 &'a Self: HasFetchXor,
1667 {
1668 unimplemented!()
1669 }
1670 #[inline]
1671 pub fn xor(&self, val: bool, order: Ordering)
1672 where
1673 &'a Self: HasXor,
1674 {
1675 unimplemented!()
1676 }
1677 #[inline]
1678 pub fn fetch_not(&self, order: Ordering) -> bool
1679 where
1680 &'a Self: HasFetchNot,
1681 {
1682 unimplemented!()
1683 }
1684 #[inline]
1685 pub fn not(&self, order: Ordering)
1686 where
1687 &'a Self: HasNot,
1688 {
1689 unimplemented!()
1690 }
1691 #[inline]
1692 pub fn fetch_update<F>(
1693 &self,
1694 set_order: Ordering,
1695 fetch_order: Ordering,
1696 f: F,
1697 ) -> Result<bool, bool>
1698 where
1699 F: FnMut(bool) -> Option<bool>,
1700 &'a Self: HasFetchUpdate,
1701 {
1702 unimplemented!()
1703 }
1704 } // cfg_no_atomic_cas_or_amo32!
1705}
1706} // cfg_no_atomic_cas!
1707} // cfg_has_atomic_8!
1708
1709cfg_has_atomic_ptr! {
1710/// A raw pointer type which can be safely shared between threads.
1711///
1712/// This type has the same in-memory representation as a `*mut T`.
1713///
1714/// If the compiler and the platform support atomic loads and stores of pointers,
1715/// this type is a wrapper for the standard library's
1716/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1717/// but the compiler does not, atomic operations are implemented using inline
1718/// assembly.
1719// We can use #[repr(transparent)] here, but #[repr(C, align(N))]
1720// will show clearer docs.
1721#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1722#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1723#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1724#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1725pub struct AtomicPtr<T> {
1726 inner: imp::AtomicPtr<T>,
1727}
1728
1729impl<T> Default for AtomicPtr<T> {
1730 /// Creates a null `AtomicPtr<T>`.
1731 #[inline]
1732 fn default() -> Self {
1733 Self::new(ptr::null_mut())
1734 }
1735}
1736
1737impl<T> From<*mut T> for AtomicPtr<T> {
1738 #[inline]
1739 fn from(p: *mut T) -> Self {
1740 Self::new(p)
1741 }
1742}
1743
1744impl<T> fmt::Debug for AtomicPtr<T> {
1745 #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1746 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1747 // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1748 fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1749 }
1750}
1751
1752impl<T> fmt::Pointer for AtomicPtr<T> {
1753 #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1754 fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1755 // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.84.0/library/core/src/sync/atomic.rs#L2188
1756 fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1757 }
1758}
1759
1760// UnwindSafe is implicitly implemented.
1761#[cfg(not(portable_atomic_no_core_unwind_safe))]
1762impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1763#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1764impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1765
1766impl<T> AtomicPtr<T> {
1767 /// Creates a new `AtomicPtr`.
1768 ///
1769 /// # Examples
1770 ///
1771 /// ```
1772 /// use portable_atomic::AtomicPtr;
1773 ///
1774 /// let ptr = &mut 5;
1775 /// let atomic_ptr = AtomicPtr::new(ptr);
1776 /// ```
1777 #[inline]
1778 #[must_use]
1779 pub const fn new(p: *mut T) -> Self {
1780 static_assert_layout!(AtomicPtr<()>, *mut ());
1781 Self { inner: imp::AtomicPtr::new(p) }
1782 }
1783
1784 // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1785 const_fn! {
1786 const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1787 /// Creates a new `AtomicPtr` from a pointer.
1788 ///
1789 /// This is `const fn` on Rust 1.83+.
1790 ///
1791 /// # Safety
1792 ///
1793 /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1794 /// can be bigger than `align_of::<*mut T>()`).
1795 /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1796 /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1797 /// behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1798 /// value (or vice-versa).
1799 /// * In other words, time periods where the value is accessed atomically may not overlap
1800 /// with periods where the value is accessed non-atomically.
1801 /// * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1802 /// duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1803 /// * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1804 /// from the same thread.
1805 /// * If this atomic type is *not* lock-free:
1806 /// * Any accesses to the value behind `ptr` must have a happens-before relationship
1807 /// with accesses via the returned value (or vice-versa).
1808 /// * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1809 /// be compatible with operations performed by this atomic type.
1810 /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1811 /// these are not supported by the memory model.
1812 ///
1813 /// [valid]: core::ptr#safety
1814 #[inline]
1815 #[must_use]
1816 pub const unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1817 #[allow(clippy::cast_ptr_alignment)]
1818 // SAFETY: guaranteed by the caller
1819 unsafe { &*(ptr as *mut Self) }
1820 }
1821 }
1822
1823 /// Returns `true` if operations on values of this type are lock-free.
1824 ///
1825 /// If the compiler or the platform doesn't support the necessary
1826 /// atomic instructions, global locks for every potentially
1827 /// concurrent atomic operation will be used.
1828 ///
1829 /// # Examples
1830 ///
1831 /// ```
1832 /// use portable_atomic::AtomicPtr;
1833 ///
1834 /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1835 /// ```
1836 #[inline]
1837 #[must_use]
1838 pub fn is_lock_free() -> bool {
1839 <imp::AtomicPtr<T>>::is_lock_free()
1840 }
1841
1842 /// Returns `true` if operations on values of this type are lock-free.
1843 ///
1844 /// If the compiler or the platform doesn't support the necessary
1845 /// atomic instructions, global locks for every potentially
1846 /// concurrent atomic operation will be used.
1847 ///
1848 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1849 /// this type may be lock-free even if the function returns false.
1850 ///
1851 /// # Examples
1852 ///
1853 /// ```
1854 /// use portable_atomic::AtomicPtr;
1855 ///
1856 /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1857 /// ```
1858 #[inline]
1859 #[must_use]
1860 pub const fn is_always_lock_free() -> bool {
1861 <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1862 }
1863 #[cfg(test)]
1864 const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1865
1866 const_fn! {
1867 const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
1868 /// Returns a mutable reference to the underlying pointer.
1869 ///
1870 /// This is safe because the mutable reference guarantees that no other threads are
1871 /// concurrently accessing the atomic data.
1872 ///
1873 /// This is `const fn` on Rust 1.83+.
1874 ///
1875 /// # Examples
1876 ///
1877 /// ```
1878 /// use portable_atomic::{AtomicPtr, Ordering};
1879 ///
1880 /// let mut data = 10;
1881 /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1882 /// let mut other_data = 5;
1883 /// *atomic_ptr.get_mut() = &mut other_data;
1884 /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1885 /// ```
1886 #[inline]
1887 pub const fn get_mut(&mut self) -> &mut *mut T {
1888 // SAFETY: the mutable reference guarantees unique ownership.
1889 // (core::sync::atomic::Atomic*::get_mut is not const yet)
1890 unsafe { &mut *self.as_ptr() }
1891 }
1892 }
1893
1894 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1895 // https://github.com/rust-lang/rust/issues/76314
1896
1897 const_fn! {
1898 const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1899 /// Consumes the atomic and returns the contained value.
1900 ///
1901 /// This is safe because passing `self` by value guarantees that no other threads are
1902 /// concurrently accessing the atomic data.
1903 ///
1904 /// This is `const fn` on Rust 1.56+.
1905 ///
1906 /// # Examples
1907 ///
1908 /// ```
1909 /// use portable_atomic::AtomicPtr;
1910 ///
1911 /// let mut data = 5;
1912 /// let atomic_ptr = AtomicPtr::new(&mut data);
1913 /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1914 /// ```
1915 #[inline]
1916 pub const fn into_inner(self) -> *mut T {
1917 // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1918 // so they can be safely transmuted.
1919 // (const UnsafeCell::into_inner is unstable)
1920 unsafe { core::mem::transmute(self) }
1921 }
1922 }
1923
1924 /// Loads a value from the pointer.
1925 ///
1926 /// `load` takes an [`Ordering`] argument which describes the memory ordering
1927 /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1928 ///
1929 /// # Panics
1930 ///
1931 /// Panics if `order` is [`Release`] or [`AcqRel`].
1932 ///
1933 /// # Examples
1934 ///
1935 /// ```
1936 /// use portable_atomic::{AtomicPtr, Ordering};
1937 ///
1938 /// let ptr = &mut 5;
1939 /// let some_ptr = AtomicPtr::new(ptr);
1940 ///
1941 /// let value = some_ptr.load(Ordering::Relaxed);
1942 /// ```
1943 #[inline]
1944 #[cfg_attr(
1945 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1946 track_caller
1947 )]
1948 pub fn load(&self, order: Ordering) -> *mut T {
1949 self.inner.load(order)
1950 }
1951
1952 /// Stores a value into the pointer.
1953 ///
1954 /// `store` takes an [`Ordering`] argument which describes the memory ordering
1955 /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1956 ///
1957 /// # Panics
1958 ///
1959 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1960 ///
1961 /// # Examples
1962 ///
1963 /// ```
1964 /// use portable_atomic::{AtomicPtr, Ordering};
1965 ///
1966 /// let ptr = &mut 5;
1967 /// let some_ptr = AtomicPtr::new(ptr);
1968 ///
1969 /// let other_ptr = &mut 10;
1970 ///
1971 /// some_ptr.store(other_ptr, Ordering::Relaxed);
1972 /// ```
1973 #[inline]
1974 #[cfg_attr(
1975 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1976 track_caller
1977 )]
1978 pub fn store(&self, ptr: *mut T, order: Ordering) {
1979 self.inner.store(ptr, order);
1980 }
1981
1982 cfg_has_atomic_cas_or_amo32! {
1983 /// Stores a value into the pointer, returning the previous value.
1984 ///
1985 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1986 /// of this operation. All ordering modes are possible. Note that using
1987 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1988 /// using [`Release`] makes the load part [`Relaxed`].
1989 ///
1990 /// # Examples
1991 ///
1992 /// ```
1993 /// use portable_atomic::{AtomicPtr, Ordering};
1994 ///
1995 /// let ptr = &mut 5;
1996 /// let some_ptr = AtomicPtr::new(ptr);
1997 ///
1998 /// let other_ptr = &mut 10;
1999 ///
2000 /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
2001 /// ```
2002 #[inline]
2003 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2004 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
2005 self.inner.swap(ptr, order)
2006 }
2007
2008 cfg_has_atomic_cas! {
2009 /// Stores a value into the pointer if the current value is the same as the `current` value.
2010 ///
2011 /// The return value is a result indicating whether the new value was written and containing
2012 /// the previous value. On success this value is guaranteed to be equal to `current`.
2013 ///
2014 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
2015 /// ordering of this operation. `success` describes the required ordering for the
2016 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2017 /// `failure` describes the required ordering for the load operation that takes place when
2018 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2019 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2020 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2021 ///
2022 /// # Panics
2023 ///
2024 /// Panics if `failure` is [`Release`], [`AcqRel`].
2025 ///
2026 /// # Examples
2027 ///
2028 /// ```
2029 /// use portable_atomic::{AtomicPtr, Ordering};
2030 ///
2031 /// let ptr = &mut 5;
2032 /// let some_ptr = AtomicPtr::new(ptr);
2033 ///
2034 /// let other_ptr = &mut 10;
2035 ///
2036 /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
2037 /// ```
2038 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2039 #[inline]
2040 #[cfg_attr(
2041 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2042 track_caller
2043 )]
2044 pub fn compare_exchange(
2045 &self,
2046 current: *mut T,
2047 new: *mut T,
2048 success: Ordering,
2049 failure: Ordering,
2050 ) -> Result<*mut T, *mut T> {
2051 self.inner.compare_exchange(current, new, success, failure)
2052 }
2053
2054 /// Stores a value into the pointer if the current value is the same as the `current` value.
2055 ///
2056 /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
2057 /// comparison succeeds, which can result in more efficient code on some platforms. The
2058 /// return value is a result indicating whether the new value was written and containing the
2059 /// previous value.
2060 ///
2061 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
2062 /// ordering of this operation. `success` describes the required ordering for the
2063 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
2064 /// `failure` describes the required ordering for the load operation that takes place when
2065 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
2066 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
2067 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2068 ///
2069 /// # Panics
2070 ///
2071 /// Panics if `failure` is [`Release`], [`AcqRel`].
2072 ///
2073 /// # Examples
2074 ///
2075 /// ```
2076 /// use portable_atomic::{AtomicPtr, Ordering};
2077 ///
2078 /// let some_ptr = AtomicPtr::new(&mut 5);
2079 ///
2080 /// let new = &mut 10;
2081 /// let mut old = some_ptr.load(Ordering::Relaxed);
2082 /// loop {
2083 /// match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
2084 /// Ok(_) => break,
2085 /// Err(x) => old = x,
2086 /// }
2087 /// }
2088 /// ```
2089 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2090 #[inline]
2091 #[cfg_attr(
2092 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2093 track_caller
2094 )]
2095 pub fn compare_exchange_weak(
2096 &self,
2097 current: *mut T,
2098 new: *mut T,
2099 success: Ordering,
2100 failure: Ordering,
2101 ) -> Result<*mut T, *mut T> {
2102 self.inner.compare_exchange_weak(current, new, success, failure)
2103 }
2104
2105 /// Fetches the value, and applies a function to it that returns an optional
2106 /// new value. Returns a `Result` of `Ok(previous_value)` if the function
2107 /// returned `Some(_)`, else `Err(previous_value)`.
2108 ///
2109 /// Note: This may call the function multiple times if the value has been
2110 /// changed from other threads in the meantime, as long as the function
2111 /// returns `Some(_)`, but the function will have been applied only once to
2112 /// the stored value.
2113 ///
2114 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
2115 /// ordering of this operation. The first describes the required ordering for
2116 /// when the operation finally succeeds while the second describes the
2117 /// required ordering for loads. These correspond to the success and failure
2118 /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
2119 ///
2120 /// Using [`Acquire`] as success ordering makes the store part of this
2121 /// operation [`Relaxed`], and using [`Release`] makes the final successful
2122 /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
2123 /// [`Acquire`] or [`Relaxed`].
2124 ///
2125 /// # Panics
2126 ///
2127 /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
2128 ///
2129 /// # Considerations
2130 ///
2131 /// This method is not magic; it is not provided by the hardware.
2132 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
2133 /// and suffers from the same drawbacks.
2134 /// In particular, this method will not circumvent the [ABA Problem].
2135 ///
2136 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
2137 ///
2138 /// # Examples
2139 ///
2140 /// ```
2141 /// use portable_atomic::{AtomicPtr, Ordering};
2142 ///
2143 /// let ptr: *mut _ = &mut 5;
2144 /// let some_ptr = AtomicPtr::new(ptr);
2145 ///
2146 /// let new: *mut _ = &mut 10;
2147 /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
2148 /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
2149 /// if x == ptr {
2150 /// Some(new)
2151 /// } else {
2152 /// None
2153 /// }
2154 /// });
2155 /// assert_eq!(result, Ok(ptr));
2156 /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
2157 /// ```
2158 #[inline]
2159 #[cfg_attr(
2160 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2161 track_caller
2162 )]
2163 pub fn fetch_update<F>(
2164 &self,
2165 set_order: Ordering,
2166 fetch_order: Ordering,
2167 mut f: F,
2168 ) -> Result<*mut T, *mut T>
2169 where
2170 F: FnMut(*mut T) -> Option<*mut T>,
2171 {
2172 let mut prev = self.load(fetch_order);
2173 while let Some(next) = f(prev) {
2174 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
2175 x @ Ok(_) => return x,
2176 Err(next_prev) => prev = next_prev,
2177 }
2178 }
2179 Err(prev)
2180 }
2181 } // cfg_has_atomic_cas!
2182
2183 /// Offsets the pointer's address by adding `val` (in units of `T`),
2184 /// returning the previous pointer.
2185 ///
2186 /// This is equivalent to using [`wrapping_add`] to atomically perform the
2187 /// equivalent of `ptr = ptr.wrapping_add(val);`.
2188 ///
2189 /// This method operates in units of `T`, which means that it cannot be used
2190 /// to offset the pointer by an amount which is not a multiple of
2191 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2192 /// work with a deliberately misaligned pointer. In such cases, you may use
2193 /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2194 ///
2195 /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2196 /// memory ordering of this operation. All ordering modes are possible. Note
2197 /// that using [`Acquire`] makes the store part of this operation
2198 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2199 ///
2200 /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2201 ///
2202 /// # Examples
2203 ///
2204 /// ```
2205 /// # #![allow(unstable_name_collisions)]
2206 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2207 /// use portable_atomic::{AtomicPtr, Ordering};
2208 ///
2209 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2210 /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2211 /// // Note: units of `size_of::<i64>()`.
2212 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2213 /// ```
2214 #[inline]
2215 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2216 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2217 self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2218 }
2219
2220 /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2221 /// returning the previous pointer.
2222 ///
2223 /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2224 /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2225 ///
2226 /// This method operates in units of `T`, which means that it cannot be used
2227 /// to offset the pointer by an amount which is not a multiple of
2228 /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2229 /// work with a deliberately misaligned pointer. In such cases, you may use
2230 /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2231 ///
2232 /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2233 /// ordering of this operation. All ordering modes are possible. Note that
2234 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2235 /// and using [`Release`] makes the load part [`Relaxed`].
2236 ///
2237 /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2238 ///
2239 /// # Examples
2240 ///
2241 /// ```
2242 /// use portable_atomic::{AtomicPtr, Ordering};
2243 ///
2244 /// let array = [1i32, 2i32];
2245 /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2246 ///
2247 /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2248 /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2249 /// ```
2250 #[inline]
2251 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2252 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2253 self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2254 }
2255
2256 /// Offsets the pointer's address by adding `val` *bytes*, returning the
2257 /// previous pointer.
2258 ///
2259 /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2260 /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2261 ///
2262 /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2263 /// memory ordering of this operation. All ordering modes are possible. Note
2264 /// that using [`Acquire`] makes the store part of this operation
2265 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2266 ///
2267 /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2268 /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2269 ///
2270 /// # Examples
2271 ///
2272 /// ```
2273 /// # #![allow(unstable_name_collisions)]
2274 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2275 /// use portable_atomic::{AtomicPtr, Ordering};
2276 ///
2277 /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2278 /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2279 /// // Note: in units of bytes, not `size_of::<i64>()`.
2280 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2281 /// ```
2282 #[inline]
2283 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2284 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2285 self.inner.fetch_byte_add(val, order)
2286 }
2287
2288 /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2289 /// previous pointer.
2290 ///
2291 /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2292 /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2293 ///
2294 /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2295 /// memory ordering of this operation. All ordering modes are possible. Note
2296 /// that using [`Acquire`] makes the store part of this operation
2297 /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2298 ///
2299 /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2300 /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2301 ///
2302 /// # Examples
2303 ///
2304 /// ```
2305 /// # #![allow(unstable_name_collisions)]
2306 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2307 /// use portable_atomic::{AtomicPtr, Ordering};
2308 ///
2309 /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2310 /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2311 /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2312 /// ```
2313 #[inline]
2314 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2315 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2316 self.inner.fetch_byte_sub(val, order)
2317 }
2318
2319 /// Performs a bitwise "or" operation on the address of the current pointer,
2320 /// and the argument `val`, and stores a pointer with provenance of the
2321 /// current pointer and the resulting address.
2322 ///
2323 /// This is equivalent to using [`map_addr`] to atomically perform
2324 /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2325 /// pointer schemes to atomically set tag bits.
2326 ///
2327 /// **Caveat**: This operation returns the previous value. To compute the
2328 /// stored value without losing provenance, you may use [`map_addr`]. For
2329 /// example: `a.fetch_or(val).map_addr(|a| a | val)`.
2330 ///
2331 /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2332 /// ordering of this operation. All ordering modes are possible. Note that
2333 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2334 /// and using [`Release`] makes the load part [`Relaxed`].
2335 ///
2336 /// This API and its claimed semantics are part of the Strict Provenance
2337 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2338 /// details.
2339 ///
2340 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2341 ///
2342 /// # Examples
2343 ///
2344 /// ```
2345 /// # #![allow(unstable_name_collisions)]
2346 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2347 /// use portable_atomic::{AtomicPtr, Ordering};
2348 ///
2349 /// let pointer = &mut 3i64 as *mut i64;
2350 ///
2351 /// let atom = AtomicPtr::<i64>::new(pointer);
2352 /// // Tag the bottom bit of the pointer.
2353 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2354 /// // Extract and untag.
2355 /// let tagged = atom.load(Ordering::Relaxed);
2356 /// assert_eq!(tagged.addr() & 1, 1);
2357 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2358 /// ```
2359 #[inline]
2360 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2361 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2362 self.inner.fetch_or(val, order)
2363 }
2364
2365 /// Performs a bitwise "and" operation on the address of the current
2366 /// pointer, and the argument `val`, and stores a pointer with provenance of
2367 /// the current pointer and the resulting address.
2368 ///
2369 /// This is equivalent to using [`map_addr`] to atomically perform
2370 /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2371 /// pointer schemes to atomically unset tag bits.
2372 ///
2373 /// **Caveat**: This operation returns the previous value. To compute the
2374 /// stored value without losing provenance, you may use [`map_addr`]. For
2375 /// example: `a.fetch_and(val).map_addr(|a| a & val)`.
2376 ///
2377 /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2378 /// ordering of this operation. All ordering modes are possible. Note that
2379 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2380 /// and using [`Release`] makes the load part [`Relaxed`].
2381 ///
2382 /// This API and its claimed semantics are part of the Strict Provenance
2383 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2384 /// details.
2385 ///
2386 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2387 ///
2388 /// # Examples
2389 ///
2390 /// ```
2391 /// # #![allow(unstable_name_collisions)]
2392 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2393 /// use portable_atomic::{AtomicPtr, Ordering};
2394 ///
2395 /// let pointer = &mut 3i64 as *mut i64;
2396 /// // A tagged pointer
2397 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2398 /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2399 /// // Untag, and extract the previously tagged pointer.
2400 /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2401 /// assert_eq!(untagged, pointer);
2402 /// ```
2403 #[inline]
2404 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2405 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2406 self.inner.fetch_and(val, order)
2407 }
2408
2409 /// Performs a bitwise "xor" operation on the address of the current
2410 /// pointer, and the argument `val`, and stores a pointer with provenance of
2411 /// the current pointer and the resulting address.
2412 ///
2413 /// This is equivalent to using [`map_addr`] to atomically perform
2414 /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2415 /// pointer schemes to atomically toggle tag bits.
2416 ///
2417 /// **Caveat**: This operation returns the previous value. To compute the
2418 /// stored value without losing provenance, you may use [`map_addr`]. For
2419 /// example: `a.fetch_xor(val).map_addr(|a| a ^ val)`.
2420 ///
2421 /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2422 /// ordering of this operation. All ordering modes are possible. Note that
2423 /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2424 /// and using [`Release`] makes the load part [`Relaxed`].
2425 ///
2426 /// This API and its claimed semantics are part of the Strict Provenance
2427 /// experiment, see the [module documentation for `ptr`][core::ptr] for
2428 /// details.
2429 ///
2430 /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2431 ///
2432 /// # Examples
2433 ///
2434 /// ```
2435 /// # #![allow(unstable_name_collisions)]
2436 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2437 /// use portable_atomic::{AtomicPtr, Ordering};
2438 ///
2439 /// let pointer = &mut 3i64 as *mut i64;
2440 /// let atom = AtomicPtr::<i64>::new(pointer);
2441 ///
2442 /// // Toggle a tag bit on the pointer.
2443 /// atom.fetch_xor(1, Ordering::Relaxed);
2444 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2445 /// ```
2446 #[inline]
2447 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2448 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2449 self.inner.fetch_xor(val, order)
2450 }
2451
2452 /// Sets the bit at the specified bit-position to 1.
2453 ///
2454 /// Returns `true` if the specified bit was previously set to 1.
2455 ///
2456 /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2457 /// of this operation. All ordering modes are possible. Note that using
2458 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2459 /// using [`Release`] makes the load part [`Relaxed`].
2460 ///
2461 /// This corresponds to x86's `lock bts`, and the implementation calls them on x86/x86_64.
2462 ///
2463 /// # Examples
2464 ///
2465 /// ```
2466 /// # #![allow(unstable_name_collisions)]
2467 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2468 /// use portable_atomic::{AtomicPtr, Ordering};
2469 ///
2470 /// let pointer = &mut 3i64 as *mut i64;
2471 ///
2472 /// let atom = AtomicPtr::<i64>::new(pointer);
2473 /// // Tag the bottom bit of the pointer.
2474 /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2475 /// // Extract and untag.
2476 /// let tagged = atom.load(Ordering::Relaxed);
2477 /// assert_eq!(tagged.addr() & 1, 1);
2478 /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2479 /// ```
2480 #[inline]
2481 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2482 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2483 self.inner.bit_set(bit, order)
2484 }
2485
2486 /// Clears the bit at the specified bit-position to 1.
2487 ///
2488 /// Returns `true` if the specified bit was previously set to 1.
2489 ///
2490 /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2491 /// of this operation. All ordering modes are possible. Note that using
2492 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2493 /// using [`Release`] makes the load part [`Relaxed`].
2494 ///
2495 /// This corresponds to x86's `lock btr`, and the implementation calls them on x86/x86_64.
2496 ///
2497 /// # Examples
2498 ///
2499 /// ```
2500 /// # #![allow(unstable_name_collisions)]
2501 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2502 /// use portable_atomic::{AtomicPtr, Ordering};
2503 ///
2504 /// let pointer = &mut 3i64 as *mut i64;
2505 /// // A tagged pointer
2506 /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2507 /// assert!(atom.bit_set(0, Ordering::Relaxed));
2508 /// // Untag
2509 /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2510 /// ```
2511 #[inline]
2512 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2513 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2514 self.inner.bit_clear(bit, order)
2515 }
2516
2517 /// Toggles the bit at the specified bit-position.
2518 ///
2519 /// Returns `true` if the specified bit was previously set to 1.
2520 ///
2521 /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2522 /// of this operation. All ordering modes are possible. Note that using
2523 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2524 /// using [`Release`] makes the load part [`Relaxed`].
2525 ///
2526 /// This corresponds to x86's `lock btc`, and the implementation calls them on x86/x86_64.
2527 ///
2528 /// # Examples
2529 ///
2530 /// ```
2531 /// # #![allow(unstable_name_collisions)]
2532 /// # #[allow(unused_imports)] use sptr::Strict as _; // strict provenance polyfill for old rustc
2533 /// use portable_atomic::{AtomicPtr, Ordering};
2534 ///
2535 /// let pointer = &mut 3i64 as *mut i64;
2536 /// let atom = AtomicPtr::<i64>::new(pointer);
2537 ///
2538 /// // Toggle a tag bit on the pointer.
2539 /// atom.bit_toggle(0, Ordering::Relaxed);
2540 /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2541 /// ```
2542 #[inline]
2543 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2544 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2545 self.inner.bit_toggle(bit, order)
2546 }
2547 } // cfg_has_atomic_cas_or_amo32!
2548
2549 const_fn! {
2550 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2551 /// Returns a mutable pointer to the underlying pointer.
2552 ///
2553 /// Returning an `*mut` pointer from a shared reference to this atomic is
2554 /// safe because the atomic types work with interior mutability. Any use of
2555 /// the returned raw pointer requires an `unsafe` block and has to uphold
2556 /// the safety requirements. If there is concurrent access, note the following
2557 /// additional safety requirements:
2558 ///
2559 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2560 /// operations on it must be atomic.
2561 /// - Otherwise, any concurrent operations on it must be compatible with
2562 /// operations performed by this atomic type.
2563 ///
2564 /// This is `const fn` on Rust 1.58+.
2565 #[inline]
2566 pub const fn as_ptr(&self) -> *mut *mut T {
2567 self.inner.as_ptr()
2568 }
2569 }
2570}
2571// See https://github.com/taiki-e/portable-atomic/issues/180
2572#[cfg(not(feature = "require-cas"))]
2573cfg_no_atomic_cas! {
2574#[doc(hidden)]
2575#[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
2576impl<'a, T: 'a> AtomicPtr<T> {
2577 cfg_no_atomic_cas_or_amo32! {
2578 #[inline]
2579 pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2580 where
2581 &'a Self: HasSwap,
2582 {
2583 unimplemented!()
2584 }
2585 } // cfg_no_atomic_cas_or_amo32!
2586 #[inline]
2587 pub fn compare_exchange(
2588 &self,
2589 current: *mut T,
2590 new: *mut T,
2591 success: Ordering,
2592 failure: Ordering,
2593 ) -> Result<*mut T, *mut T>
2594 where
2595 &'a Self: HasCompareExchange,
2596 {
2597 unimplemented!()
2598 }
2599 #[inline]
2600 pub fn compare_exchange_weak(
2601 &self,
2602 current: *mut T,
2603 new: *mut T,
2604 success: Ordering,
2605 failure: Ordering,
2606 ) -> Result<*mut T, *mut T>
2607 where
2608 &'a Self: HasCompareExchangeWeak,
2609 {
2610 unimplemented!()
2611 }
2612 #[inline]
2613 pub fn fetch_update<F>(
2614 &self,
2615 set_order: Ordering,
2616 fetch_order: Ordering,
2617 f: F,
2618 ) -> Result<*mut T, *mut T>
2619 where
2620 F: FnMut(*mut T) -> Option<*mut T>,
2621 &'a Self: HasFetchUpdate,
2622 {
2623 unimplemented!()
2624 }
2625 cfg_no_atomic_cas_or_amo32! {
2626 #[inline]
2627 pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2628 where
2629 &'a Self: HasFetchPtrAdd,
2630 {
2631 unimplemented!()
2632 }
2633 #[inline]
2634 pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2635 where
2636 &'a Self: HasFetchPtrSub,
2637 {
2638 unimplemented!()
2639 }
2640 #[inline]
2641 pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2642 where
2643 &'a Self: HasFetchByteAdd,
2644 {
2645 unimplemented!()
2646 }
2647 #[inline]
2648 pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2649 where
2650 &'a Self: HasFetchByteSub,
2651 {
2652 unimplemented!()
2653 }
2654 #[inline]
2655 pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2656 where
2657 &'a Self: HasFetchOr,
2658 {
2659 unimplemented!()
2660 }
2661 #[inline]
2662 pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2663 where
2664 &'a Self: HasFetchAnd,
2665 {
2666 unimplemented!()
2667 }
2668 #[inline]
2669 pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2670 where
2671 &'a Self: HasFetchXor,
2672 {
2673 unimplemented!()
2674 }
2675 #[inline]
2676 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2677 where
2678 &'a Self: HasBitSet,
2679 {
2680 unimplemented!()
2681 }
2682 #[inline]
2683 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2684 where
2685 &'a Self: HasBitClear,
2686 {
2687 unimplemented!()
2688 }
2689 #[inline]
2690 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2691 where
2692 &'a Self: HasBitToggle,
2693 {
2694 unimplemented!()
2695 }
2696 } // cfg_no_atomic_cas_or_amo32!
2697}
2698} // cfg_no_atomic_cas!
2699} // cfg_has_atomic_ptr!
2700
2701macro_rules! atomic_int {
2702 // Atomic{I,U}* impls
2703 ($atomic_type:ident, $int_type:ident, $align:literal,
2704 $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2705 $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2706 ) => {
2707 doc_comment! {
2708 concat!("An integer type which can be safely shared between threads.
2709
2710This type has the same in-memory representation as the underlying integer type,
2711[`", stringify!($int_type), "`].
2712
2713If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2714"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2715"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2716inline assembly. Otherwise synchronizes using global locks.
2717You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2718atomic instructions or locks will be used.
2719"
2720 ),
2721 // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
2722 // will show clearer docs.
2723 #[repr(C, align($align))]
2724 pub struct $atomic_type {
2725 inner: imp::$atomic_type,
2726 }
2727 }
2728
2729 impl Default for $atomic_type {
2730 #[inline]
2731 fn default() -> Self {
2732 Self::new($int_type::default())
2733 }
2734 }
2735
2736 impl From<$int_type> for $atomic_type {
2737 #[inline]
2738 fn from(v: $int_type) -> Self {
2739 Self::new(v)
2740 }
2741 }
2742
2743 // UnwindSafe is implicitly implemented.
2744 #[cfg(not(portable_atomic_no_core_unwind_safe))]
2745 impl core::panic::RefUnwindSafe for $atomic_type {}
2746 #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2747 impl std::panic::RefUnwindSafe for $atomic_type {}
2748
2749 impl_debug_and_serde!($atomic_type);
2750
2751 impl $atomic_type {
2752 doc_comment! {
2753 concat!(
2754 "Creates a new atomic integer.
2755
2756# Examples
2757
2758```
2759use portable_atomic::", stringify!($atomic_type), ";
2760
2761let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2762```"
2763 ),
2764 #[inline]
2765 #[must_use]
2766 pub const fn new(v: $int_type) -> Self {
2767 static_assert_layout!($atomic_type, $int_type);
2768 Self { inner: imp::$atomic_type::new(v) }
2769 }
2770 }
2771
2772 // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2773 #[cfg(not(portable_atomic_no_const_mut_refs))]
2774 doc_comment! {
2775 concat!("Creates a new reference to an atomic integer from a pointer.
2776
2777This is `const fn` on Rust 1.83+.
2778
2779# Safety
2780
2781* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2782 can be bigger than `align_of::<", stringify!($int_type), ">()`).
2783* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2784* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2785 behind `ptr` must have a happens-before relationship with atomic accesses via
2786 the returned value (or vice-versa).
2787 * In other words, time periods where the value is accessed atomically may not
2788 overlap with periods where the value is accessed non-atomically.
2789 * This requirement is trivially satisfied if `ptr` is never used non-atomically
2790 for the duration of lifetime `'a`. Most use cases should be able to follow
2791 this guideline.
2792 * This requirement is also trivially satisfied if all accesses (atomic or not) are
2793 done from the same thread.
2794* If this atomic type is *not* lock-free:
2795 * Any accesses to the value behind `ptr` must have a happens-before relationship
2796 with accesses via the returned value (or vice-versa).
2797 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2798 be compatible with operations performed by this atomic type.
2799* This method must not be used to create overlapping or mixed-size atomic
2800 accesses, as these are not supported by the memory model.
2801
2802[valid]: core::ptr#safety"),
2803 #[inline]
2804 #[must_use]
2805 pub const unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2806 #[allow(clippy::cast_ptr_alignment)]
2807 // SAFETY: guaranteed by the caller
2808 unsafe { &*(ptr as *mut Self) }
2809 }
2810 }
2811 #[cfg(portable_atomic_no_const_mut_refs)]
2812 doc_comment! {
2813 concat!("Creates a new reference to an atomic integer from a pointer.
2814
2815This is `const fn` on Rust 1.83+.
2816
2817# Safety
2818
2819* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2820 can be bigger than `align_of::<", stringify!($int_type), ">()`).
2821* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2822* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2823 behind `ptr` must have a happens-before relationship with atomic accesses via
2824 the returned value (or vice-versa).
2825 * In other words, time periods where the value is accessed atomically may not
2826 overlap with periods where the value is accessed non-atomically.
2827 * This requirement is trivially satisfied if `ptr` is never used non-atomically
2828 for the duration of lifetime `'a`. Most use cases should be able to follow
2829 this guideline.
2830 * This requirement is also trivially satisfied if all accesses (atomic or not) are
2831 done from the same thread.
2832* If this atomic type is *not* lock-free:
2833 * Any accesses to the value behind `ptr` must have a happens-before relationship
2834 with accesses via the returned value (or vice-versa).
2835 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2836 be compatible with operations performed by this atomic type.
2837* This method must not be used to create overlapping or mixed-size atomic
2838 accesses, as these are not supported by the memory model.
2839
2840[valid]: core::ptr#safety"),
2841 #[inline]
2842 #[must_use]
2843 pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2844 #[allow(clippy::cast_ptr_alignment)]
2845 // SAFETY: guaranteed by the caller
2846 unsafe { &*(ptr as *mut Self) }
2847 }
2848 }
2849
2850 doc_comment! {
2851 concat!("Returns `true` if operations on values of this type are lock-free.
2852
2853If the compiler or the platform doesn't support the necessary
2854atomic instructions, global locks for every potentially
2855concurrent atomic operation will be used.
2856
2857# Examples
2858
2859```
2860use portable_atomic::", stringify!($atomic_type), ";
2861
2862let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2863```"),
2864 #[inline]
2865 #[must_use]
2866 pub fn is_lock_free() -> bool {
2867 <imp::$atomic_type>::is_lock_free()
2868 }
2869 }
2870
2871 doc_comment! {
2872 concat!("Returns `true` if operations on values of this type are lock-free.
2873
2874If the compiler or the platform doesn't support the necessary
2875atomic instructions, global locks for every potentially
2876concurrent atomic operation will be used.
2877
2878**Note:** If the atomic operation relies on dynamic CPU feature detection,
2879this type may be lock-free even if the function returns false.
2880
2881# Examples
2882
2883```
2884use portable_atomic::", stringify!($atomic_type), ";
2885
2886const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2887```"),
2888 #[inline]
2889 #[must_use]
2890 pub const fn is_always_lock_free() -> bool {
2891 <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2892 }
2893 }
2894 #[cfg(test)]
2895 #[cfg_attr(all(valgrind, target_arch = "powerpc64"), allow(dead_code))] // TODO(powerpc64): Hang (as of Valgrind 3.26)
2896 const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2897
2898 #[cfg(not(portable_atomic_no_const_mut_refs))]
2899 doc_comment! {
2900 concat!("Returns a mutable reference to the underlying integer.\n
2901This is safe because the mutable reference guarantees that no other threads are
2902concurrently accessing the atomic data.
2903
2904This is `const fn` on Rust 1.83+.
2905
2906# Examples
2907
2908```
2909use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2910
2911let mut some_var = ", stringify!($atomic_type), "::new(10);
2912assert_eq!(*some_var.get_mut(), 10);
2913*some_var.get_mut() = 5;
2914assert_eq!(some_var.load(Ordering::SeqCst), 5);
2915```"),
2916 #[inline]
2917 pub const fn get_mut(&mut self) -> &mut $int_type {
2918 // SAFETY: the mutable reference guarantees unique ownership.
2919 // (core::sync::atomic::Atomic*::get_mut is not const yet)
2920 unsafe { &mut *self.as_ptr() }
2921 }
2922 }
2923 #[cfg(portable_atomic_no_const_mut_refs)]
2924 doc_comment! {
2925 concat!("Returns a mutable reference to the underlying integer.\n
2926This is safe because the mutable reference guarantees that no other threads are
2927concurrently accessing the atomic data.
2928
2929This is `const fn` on Rust 1.83+.
2930
2931# Examples
2932
2933```
2934use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2935
2936let mut some_var = ", stringify!($atomic_type), "::new(10);
2937assert_eq!(*some_var.get_mut(), 10);
2938*some_var.get_mut() = 5;
2939assert_eq!(some_var.load(Ordering::SeqCst), 5);
2940```"),
2941 #[inline]
2942 pub fn get_mut(&mut self) -> &mut $int_type {
2943 // SAFETY: the mutable reference guarantees unique ownership.
2944 unsafe { &mut *self.as_ptr() }
2945 }
2946 }
2947
2948 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
2949 // https://github.com/rust-lang/rust/issues/76314
2950
2951 #[cfg(not(portable_atomic_no_const_transmute))]
2952 doc_comment! {
2953 concat!("Consumes the atomic and returns the contained value.
2954
2955This is safe because passing `self` by value guarantees that no other threads are
2956concurrently accessing the atomic data.
2957
2958This is `const fn` on Rust 1.56+.
2959
2960# Examples
2961
2962```
2963use portable_atomic::", stringify!($atomic_type), ";
2964
2965let some_var = ", stringify!($atomic_type), "::new(5);
2966assert_eq!(some_var.into_inner(), 5);
2967```"),
2968 #[inline]
2969 pub const fn into_inner(self) -> $int_type {
2970 // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2971 // so they can be safely transmuted.
2972 // (const UnsafeCell::into_inner is unstable)
2973 unsafe { core::mem::transmute(self) }
2974 }
2975 }
2976 #[cfg(portable_atomic_no_const_transmute)]
2977 doc_comment! {
2978 concat!("Consumes the atomic and returns the contained value.
2979
2980This is safe because passing `self` by value guarantees that no other threads are
2981concurrently accessing the atomic data.
2982
2983This is `const fn` on Rust 1.56+.
2984
2985# Examples
2986
2987```
2988use portable_atomic::", stringify!($atomic_type), ";
2989
2990let some_var = ", stringify!($atomic_type), "::new(5);
2991assert_eq!(some_var.into_inner(), 5);
2992```"),
2993 #[inline]
2994 pub fn into_inner(self) -> $int_type {
2995 // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2996 // so they can be safely transmuted.
2997 // (const UnsafeCell::into_inner is unstable)
2998 unsafe { core::mem::transmute(self) }
2999 }
3000 }
3001
3002 doc_comment! {
3003 concat!("Loads a value from the atomic integer.
3004
3005`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3006Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
3007
3008# Panics
3009
3010Panics if `order` is [`Release`] or [`AcqRel`].
3011
3012# Examples
3013
3014```
3015use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3016
3017let some_var = ", stringify!($atomic_type), "::new(5);
3018
3019assert_eq!(some_var.load(Ordering::Relaxed), 5);
3020```"),
3021 #[inline]
3022 #[cfg_attr(
3023 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3024 track_caller
3025 )]
3026 pub fn load(&self, order: Ordering) -> $int_type {
3027 self.inner.load(order)
3028 }
3029 }
3030
3031 doc_comment! {
3032 concat!("Stores a value into the atomic integer.
3033
3034`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
3035Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
3036
3037# Panics
3038
3039Panics if `order` is [`Acquire`] or [`AcqRel`].
3040
3041# Examples
3042
3043```
3044use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3045
3046let some_var = ", stringify!($atomic_type), "::new(5);
3047
3048some_var.store(10, Ordering::Relaxed);
3049assert_eq!(some_var.load(Ordering::Relaxed), 10);
3050```"),
3051 #[inline]
3052 #[cfg_attr(
3053 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3054 track_caller
3055 )]
3056 pub fn store(&self, val: $int_type, order: Ordering) {
3057 self.inner.store(val, order)
3058 }
3059 }
3060
3061 cfg_has_atomic_cas_or_amo32! {
3062 $cfg_has_atomic_cas_or_amo32_or_8! {
3063 doc_comment! {
3064 concat!("Stores a value into the atomic integer, returning the previous value.
3065
3066`swap` takes an [`Ordering`] argument which describes the memory ordering
3067of this operation. All ordering modes are possible. Note that using
3068[`Acquire`] makes the store part of this operation [`Relaxed`], and
3069using [`Release`] makes the load part [`Relaxed`].
3070
3071# Examples
3072
3073```
3074use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3075
3076let some_var = ", stringify!($atomic_type), "::new(5);
3077
3078assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
3079```"),
3080 #[inline]
3081 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3082 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
3083 self.inner.swap(val, order)
3084 }
3085 }
3086 } // $cfg_has_atomic_cas_or_amo32_or_8!
3087
3088 cfg_has_atomic_cas! {
3089 doc_comment! {
3090 concat!("Stores a value into the atomic integer if the current value is the same as
3091the `current` value.
3092
3093The return value is a result indicating whether the new value was written and
3094containing the previous value. On success this value is guaranteed to be equal to
3095`current`.
3096
3097`compare_exchange` takes two [`Ordering`] arguments to describe the memory
3098ordering of this operation. `success` describes the required ordering for the
3099read-modify-write operation that takes place if the comparison with `current` succeeds.
3100`failure` describes the required ordering for the load operation that takes place when
3101the comparison fails. Using [`Acquire`] as success ordering makes the store part
3102of this operation [`Relaxed`], and using [`Release`] makes the successful load
3103[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3104
3105# Panics
3106
3107Panics if `failure` is [`Release`], [`AcqRel`].
3108
3109# Examples
3110
3111```
3112use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3113
3114let some_var = ", stringify!($atomic_type), "::new(5);
3115
3116assert_eq!(
3117 some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
3118 Ok(5),
3119);
3120assert_eq!(some_var.load(Ordering::Relaxed), 10);
3121
3122assert_eq!(
3123 some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
3124 Err(10),
3125);
3126assert_eq!(some_var.load(Ordering::Relaxed), 10);
3127```"),
3128 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3129 #[inline]
3130 #[cfg_attr(
3131 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3132 track_caller
3133 )]
3134 pub fn compare_exchange(
3135 &self,
3136 current: $int_type,
3137 new: $int_type,
3138 success: Ordering,
3139 failure: Ordering,
3140 ) -> Result<$int_type, $int_type> {
3141 self.inner.compare_exchange(current, new, success, failure)
3142 }
3143 }
3144
3145 doc_comment! {
3146 concat!("Stores a value into the atomic integer if the current value is the same as
3147the `current` value.
3148Unlike [`compare_exchange`](Self::compare_exchange)
3149this function is allowed to spuriously fail even
3150when the comparison succeeds, which can result in more efficient code on some
3151platforms. The return value is a result indicating whether the new value was
3152written and containing the previous value.
3153
3154`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3155ordering of this operation. `success` describes the required ordering for the
3156read-modify-write operation that takes place if the comparison with `current` succeeds.
3157`failure` describes the required ordering for the load operation that takes place when
3158the comparison fails. Using [`Acquire`] as success ordering makes the store part
3159of this operation [`Relaxed`], and using [`Release`] makes the successful load
3160[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3161
3162# Panics
3163
3164Panics if `failure` is [`Release`], [`AcqRel`].
3165
3166# Examples
3167
3168```
3169use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3170
3171let val = ", stringify!($atomic_type), "::new(4);
3172
3173let mut old = val.load(Ordering::Relaxed);
3174loop {
3175 let new = old * 2;
3176 match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3177 Ok(_) => break,
3178 Err(x) => old = x,
3179 }
3180}
3181```"),
3182 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3183 #[inline]
3184 #[cfg_attr(
3185 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3186 track_caller
3187 )]
3188 pub fn compare_exchange_weak(
3189 &self,
3190 current: $int_type,
3191 new: $int_type,
3192 success: Ordering,
3193 failure: Ordering,
3194 ) -> Result<$int_type, $int_type> {
3195 self.inner.compare_exchange_weak(current, new, success, failure)
3196 }
3197 }
3198 } // cfg_has_atomic_cas!
3199
3200 $cfg_has_atomic_cas_or_amo32_or_8! {
3201 doc_comment! {
3202 concat!("Adds to the current value, returning the previous value.
3203
3204This operation wraps around on overflow.
3205
3206`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3207of this operation. All ordering modes are possible. Note that using
3208[`Acquire`] makes the store part of this operation [`Relaxed`], and
3209using [`Release`] makes the load part [`Relaxed`].
3210
3211# Examples
3212
3213```
3214use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3215
3216let foo = ", stringify!($atomic_type), "::new(0);
3217assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3218assert_eq!(foo.load(Ordering::SeqCst), 10);
3219```"),
3220 #[inline]
3221 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3222 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3223 self.inner.fetch_add(val, order)
3224 }
3225 }
3226
3227 doc_comment! {
3228 concat!("Adds to the current value.
3229
3230This operation wraps around on overflow.
3231
3232Unlike `fetch_add`, this does not return the previous value.
3233
3234`add` takes an [`Ordering`] argument which describes the memory ordering
3235of this operation. All ordering modes are possible. Note that using
3236[`Acquire`] makes the store part of this operation [`Relaxed`], and
3237using [`Release`] makes the load part [`Relaxed`].
3238
3239This function may generate more efficient code than `fetch_add` on some platforms.
3240
3241- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3242
3243# Examples
3244
3245```
3246use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3247
3248let foo = ", stringify!($atomic_type), "::new(0);
3249foo.add(10, Ordering::SeqCst);
3250assert_eq!(foo.load(Ordering::SeqCst), 10);
3251```"),
3252 #[inline]
3253 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3254 pub fn add(&self, val: $int_type, order: Ordering) {
3255 self.inner.add(val, order);
3256 }
3257 }
3258
3259 doc_comment! {
3260 concat!("Subtracts from the current value, returning the previous value.
3261
3262This operation wraps around on overflow.
3263
3264`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3265of this operation. All ordering modes are possible. Note that using
3266[`Acquire`] makes the store part of this operation [`Relaxed`], and
3267using [`Release`] makes the load part [`Relaxed`].
3268
3269# Examples
3270
3271```
3272use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3273
3274let foo = ", stringify!($atomic_type), "::new(20);
3275assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3276assert_eq!(foo.load(Ordering::SeqCst), 10);
3277```"),
3278 #[inline]
3279 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3280 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3281 self.inner.fetch_sub(val, order)
3282 }
3283 }
3284
3285 doc_comment! {
3286 concat!("Subtracts from the current value.
3287
3288This operation wraps around on overflow.
3289
3290Unlike `fetch_sub`, this does not return the previous value.
3291
3292`sub` takes an [`Ordering`] argument which describes the memory ordering
3293of this operation. All ordering modes are possible. Note that using
3294[`Acquire`] makes the store part of this operation [`Relaxed`], and
3295using [`Release`] makes the load part [`Relaxed`].
3296
3297This function may generate more efficient code than `fetch_sub` on some platforms.
3298
3299- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3300
3301# Examples
3302
3303```
3304use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3305
3306let foo = ", stringify!($atomic_type), "::new(20);
3307foo.sub(10, Ordering::SeqCst);
3308assert_eq!(foo.load(Ordering::SeqCst), 10);
3309```"),
3310 #[inline]
3311 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3312 pub fn sub(&self, val: $int_type, order: Ordering) {
3313 self.inner.sub(val, order);
3314 }
3315 }
3316 } // $cfg_has_atomic_cas_or_amo32_or_8!
3317
3318 doc_comment! {
3319 concat!("Bitwise \"and\" with the current value.
3320
3321Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3322sets the new value to the result.
3323
3324Returns the previous value.
3325
3326`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3327of this operation. All ordering modes are possible. Note that using
3328[`Acquire`] makes the store part of this operation [`Relaxed`], and
3329using [`Release`] makes the load part [`Relaxed`].
3330
3331# Examples
3332
3333```
3334use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3335
3336let foo = ", stringify!($atomic_type), "::new(0b101101);
3337assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3338assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3339```"),
3340 #[inline]
3341 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3342 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3343 self.inner.fetch_and(val, order)
3344 }
3345 }
3346
3347 doc_comment! {
3348 concat!("Bitwise \"and\" with the current value.
3349
3350Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3351sets the new value to the result.
3352
3353Unlike `fetch_and`, this does not return the previous value.
3354
3355`and` takes an [`Ordering`] argument which describes the memory ordering
3356of this operation. All ordering modes are possible. Note that using
3357[`Acquire`] makes the store part of this operation [`Relaxed`], and
3358using [`Release`] makes the load part [`Relaxed`].
3359
3360This function may generate more efficient code than `fetch_and` on some platforms.
3361
3362- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3363- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3364
3365Note: On x86/x86_64, the use of either function should not usually
3366affect the generated code, because LLVM can properly optimize the case
3367where the result is unused.
3368
3369# Examples
3370
3371```
3372use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3373
3374let foo = ", stringify!($atomic_type), "::new(0b101101);
3375assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3376assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3377```"),
3378 #[inline]
3379 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3380 pub fn and(&self, val: $int_type, order: Ordering) {
3381 self.inner.and(val, order);
3382 }
3383 }
3384
3385 cfg_has_atomic_cas! {
3386 doc_comment! {
3387 concat!("Bitwise \"nand\" with the current value.
3388
3389Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3390sets the new value to the result.
3391
3392Returns the previous value.
3393
3394`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3395of this operation. All ordering modes are possible. Note that using
3396[`Acquire`] makes the store part of this operation [`Relaxed`], and
3397using [`Release`] makes the load part [`Relaxed`].
3398
3399# Examples
3400
3401```
3402use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3403
3404let foo = ", stringify!($atomic_type), "::new(0x13);
3405assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3406assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3407```"),
3408 #[inline]
3409 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3410 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3411 self.inner.fetch_nand(val, order)
3412 }
3413 }
3414 } // cfg_has_atomic_cas!
3415
3416 doc_comment! {
3417 concat!("Bitwise \"or\" with the current value.
3418
3419Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3420sets the new value to the result.
3421
3422Returns the previous value.
3423
3424`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3425of this operation. All ordering modes are possible. Note that using
3426[`Acquire`] makes the store part of this operation [`Relaxed`], and
3427using [`Release`] makes the load part [`Relaxed`].
3428
3429# Examples
3430
3431```
3432use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3433
3434let foo = ", stringify!($atomic_type), "::new(0b101101);
3435assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3436assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3437```"),
3438 #[inline]
3439 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3440 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3441 self.inner.fetch_or(val, order)
3442 }
3443 }
3444
3445 doc_comment! {
3446 concat!("Bitwise \"or\" with the current value.
3447
3448Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3449sets the new value to the result.
3450
3451Unlike `fetch_or`, this does not return the previous value.
3452
3453`or` takes an [`Ordering`] argument which describes the memory ordering
3454of this operation. All ordering modes are possible. Note that using
3455[`Acquire`] makes the store part of this operation [`Relaxed`], and
3456using [`Release`] makes the load part [`Relaxed`].
3457
3458This function may generate more efficient code than `fetch_or` on some platforms.
3459
3460- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3461- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3462
3463Note: On x86/x86_64, the use of either function should not usually
3464affect the generated code, because LLVM can properly optimize the case
3465where the result is unused.
3466
3467# Examples
3468
3469```
3470use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3471
3472let foo = ", stringify!($atomic_type), "::new(0b101101);
3473assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3474assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3475```"),
3476 #[inline]
3477 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3478 pub fn or(&self, val: $int_type, order: Ordering) {
3479 self.inner.or(val, order);
3480 }
3481 }
3482
3483 doc_comment! {
3484 concat!("Bitwise \"xor\" with the current value.
3485
3486Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3487sets the new value to the result.
3488
3489Returns the previous value.
3490
3491`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3492of this operation. All ordering modes are possible. Note that using
3493[`Acquire`] makes the store part of this operation [`Relaxed`], and
3494using [`Release`] makes the load part [`Relaxed`].
3495
3496# Examples
3497
3498```
3499use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3500
3501let foo = ", stringify!($atomic_type), "::new(0b101101);
3502assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3503assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3504```"),
3505 #[inline]
3506 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3507 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3508 self.inner.fetch_xor(val, order)
3509 }
3510 }
3511
3512 doc_comment! {
3513 concat!("Bitwise \"xor\" with the current value.
3514
3515Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3516sets the new value to the result.
3517
3518Unlike `fetch_xor`, this does not return the previous value.
3519
3520`xor` takes an [`Ordering`] argument which describes the memory ordering
3521of this operation. All ordering modes are possible. Note that using
3522[`Acquire`] makes the store part of this operation [`Relaxed`], and
3523using [`Release`] makes the load part [`Relaxed`].
3524
3525This function may generate more efficient code than `fetch_xor` on some platforms.
3526
3527- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3528- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3529
3530Note: On x86/x86_64, the use of either function should not usually
3531affect the generated code, because LLVM can properly optimize the case
3532where the result is unused.
3533
3534# Examples
3535
3536```
3537use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3538
3539let foo = ", stringify!($atomic_type), "::new(0b101101);
3540foo.xor(0b110011, Ordering::SeqCst);
3541assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3542```"),
3543 #[inline]
3544 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3545 pub fn xor(&self, val: $int_type, order: Ordering) {
3546 self.inner.xor(val, order);
3547 }
3548 }
3549
3550 cfg_has_atomic_cas! {
3551 doc_comment! {
3552 concat!("Fetches the value, and applies a function to it that returns an optional
3553new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3554`Err(previous_value)`.
3555
3556Note: This may call the function multiple times if the value has been changed from other threads in
3557the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3558only once to the stored value.
3559
3560`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3561The first describes the required ordering for when the operation finally succeeds while the second
3562describes the required ordering for loads. These correspond to the success and failure orderings of
3563[`compare_exchange`](Self::compare_exchange) respectively.
3564
3565Using [`Acquire`] as success ordering makes the store part
3566of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3567[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3568
3569# Panics
3570
3571Panics if `fetch_order` is [`Release`], [`AcqRel`].
3572
3573# Considerations
3574
3575This method is not magic; it is not provided by the hardware.
3576It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3577and suffers from the same drawbacks.
3578In particular, this method will not circumvent the [ABA Problem].
3579
3580[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3581
3582# Examples
3583
3584```
3585use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3586
3587let x = ", stringify!($atomic_type), "::new(7);
3588assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3589assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3590assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3591assert_eq!(x.load(Ordering::SeqCst), 9);
3592```"),
3593 #[inline]
3594 #[cfg_attr(
3595 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3596 track_caller
3597 )]
3598 pub fn fetch_update<F>(
3599 &self,
3600 set_order: Ordering,
3601 fetch_order: Ordering,
3602 mut f: F,
3603 ) -> Result<$int_type, $int_type>
3604 where
3605 F: FnMut($int_type) -> Option<$int_type>,
3606 {
3607 let mut prev = self.load(fetch_order);
3608 while let Some(next) = f(prev) {
3609 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3610 x @ Ok(_) => return x,
3611 Err(next_prev) => prev = next_prev,
3612 }
3613 }
3614 Err(prev)
3615 }
3616 }
3617 } // cfg_has_atomic_cas!
3618
3619 $cfg_has_atomic_cas_or_amo32_or_8! {
3620 doc_comment! {
3621 concat!("Maximum with the current value.
3622
3623Finds the maximum of the current value and the argument `val`, and
3624sets the new value to the result.
3625
3626Returns the previous value.
3627
3628`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3629of this operation. All ordering modes are possible. Note that using
3630[`Acquire`] makes the store part of this operation [`Relaxed`], and
3631using [`Release`] makes the load part [`Relaxed`].
3632
3633# Examples
3634
3635```
3636use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3637
3638let foo = ", stringify!($atomic_type), "::new(23);
3639assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3640assert_eq!(foo.load(Ordering::SeqCst), 42);
3641```
3642
3643If you want to obtain the maximum value in one step, you can use the following:
3644
3645```
3646use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3647
3648let foo = ", stringify!($atomic_type), "::new(23);
3649let bar = 42;
3650let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3651assert!(max_foo == 42);
3652```"),
3653 #[inline]
3654 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3655 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3656 self.inner.fetch_max(val, order)
3657 }
3658 }
3659
3660 doc_comment! {
3661 concat!("Minimum with the current value.
3662
3663Finds the minimum of the current value and the argument `val`, and
3664sets the new value to the result.
3665
3666Returns the previous value.
3667
3668`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3669of this operation. All ordering modes are possible. Note that using
3670[`Acquire`] makes the store part of this operation [`Relaxed`], and
3671using [`Release`] makes the load part [`Relaxed`].
3672
3673# Examples
3674
3675```
3676use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3677
3678let foo = ", stringify!($atomic_type), "::new(23);
3679assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3680assert_eq!(foo.load(Ordering::Relaxed), 23);
3681assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3682assert_eq!(foo.load(Ordering::Relaxed), 22);
3683```
3684
3685If you want to obtain the minimum value in one step, you can use the following:
3686
3687```
3688use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3689
3690let foo = ", stringify!($atomic_type), "::new(23);
3691let bar = 12;
3692let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3693assert_eq!(min_foo, 12);
3694```"),
3695 #[inline]
3696 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3697 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3698 self.inner.fetch_min(val, order)
3699 }
3700 }
3701 } // $cfg_has_atomic_cas_or_amo32_or_8!
3702
3703 doc_comment! {
3704 concat!("Sets the bit at the specified bit-position to 1.
3705
3706Returns `true` if the specified bit was previously set to 1.
3707
3708`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3709of this operation. All ordering modes are possible. Note that using
3710[`Acquire`] makes the store part of this operation [`Relaxed`], and
3711using [`Release`] makes the load part [`Relaxed`].
3712
3713This corresponds to x86's `lock bts`, and the implementation calls them on x86/x86_64.
3714
3715# Examples
3716
3717```
3718use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3719
3720let foo = ", stringify!($atomic_type), "::new(0b0000);
3721assert!(!foo.bit_set(0, Ordering::Relaxed));
3722assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3723assert!(foo.bit_set(0, Ordering::Relaxed));
3724assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3725```"),
3726 #[inline]
3727 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3728 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3729 self.inner.bit_set(bit, order)
3730 }
3731 }
3732
3733 doc_comment! {
3734 concat!("Clears the bit at the specified bit-position to 1.
3735
3736Returns `true` if the specified bit was previously set to 1.
3737
3738`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3739of this operation. All ordering modes are possible. Note that using
3740[`Acquire`] makes the store part of this operation [`Relaxed`], and
3741using [`Release`] makes the load part [`Relaxed`].
3742
3743This corresponds to x86's `lock btr`, and the implementation calls them on x86/x86_64.
3744
3745# Examples
3746
3747```
3748use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3749
3750let foo = ", stringify!($atomic_type), "::new(0b0001);
3751assert!(foo.bit_clear(0, Ordering::Relaxed));
3752assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3753```"),
3754 #[inline]
3755 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3756 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3757 self.inner.bit_clear(bit, order)
3758 }
3759 }
3760
3761 doc_comment! {
3762 concat!("Toggles the bit at the specified bit-position.
3763
3764Returns `true` if the specified bit was previously set to 1.
3765
3766`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3767of this operation. All ordering modes are possible. Note that using
3768[`Acquire`] makes the store part of this operation [`Relaxed`], and
3769using [`Release`] makes the load part [`Relaxed`].
3770
3771This corresponds to x86's `lock btc`, and the implementation calls them on x86/x86_64.
3772
3773# Examples
3774
3775```
3776use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3777
3778let foo = ", stringify!($atomic_type), "::new(0b0000);
3779assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3780assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3781assert!(foo.bit_toggle(0, Ordering::Relaxed));
3782assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3783```"),
3784 #[inline]
3785 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3786 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3787 self.inner.bit_toggle(bit, order)
3788 }
3789 }
3790
3791 doc_comment! {
3792 concat!("Logical negates the current value, and sets the new value to the result.
3793
3794Returns the previous value.
3795
3796`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3797of this operation. All ordering modes are possible. Note that using
3798[`Acquire`] makes the store part of this operation [`Relaxed`], and
3799using [`Release`] makes the load part [`Relaxed`].
3800
3801# Examples
3802
3803```
3804use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3805
3806let foo = ", stringify!($atomic_type), "::new(0);
3807assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3808assert_eq!(foo.load(Ordering::Relaxed), !0);
3809```"),
3810 #[inline]
3811 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3812 pub fn fetch_not(&self, order: Ordering) -> $int_type {
3813 self.inner.fetch_not(order)
3814 }
3815 }
3816
3817 doc_comment! {
3818 concat!("Logical negates the current value, and sets the new value to the result.
3819
3820Unlike `fetch_not`, this does not return the previous value.
3821
3822`not` takes an [`Ordering`] argument which describes the memory ordering
3823of this operation. All ordering modes are possible. Note that using
3824[`Acquire`] makes the store part of this operation [`Relaxed`], and
3825using [`Release`] makes the load part [`Relaxed`].
3826
3827This function may generate more efficient code than `fetch_not` on some platforms.
3828
3829- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3830- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3831
3832# Examples
3833
3834```
3835use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3836
3837let foo = ", stringify!($atomic_type), "::new(0);
3838foo.not(Ordering::Relaxed);
3839assert_eq!(foo.load(Ordering::Relaxed), !0);
3840```"),
3841 #[inline]
3842 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3843 pub fn not(&self, order: Ordering) {
3844 self.inner.not(order);
3845 }
3846 }
3847
3848 cfg_has_atomic_cas! {
3849 doc_comment! {
3850 concat!("Negates the current value, and sets the new value to the result.
3851
3852Returns the previous value.
3853
3854`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3855of this operation. All ordering modes are possible. Note that using
3856[`Acquire`] makes the store part of this operation [`Relaxed`], and
3857using [`Release`] makes the load part [`Relaxed`].
3858
3859# Examples
3860
3861```
3862use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3863
3864let foo = ", stringify!($atomic_type), "::new(5);
3865assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3866assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3867assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3868assert_eq!(foo.load(Ordering::Relaxed), 5);
3869```"),
3870 #[inline]
3871 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3872 pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3873 self.inner.fetch_neg(order)
3874 }
3875 }
3876
3877 doc_comment! {
3878 concat!("Negates the current value, and sets the new value to the result.
3879
3880Unlike `fetch_neg`, this does not return the previous value.
3881
3882`neg` takes an [`Ordering`] argument which describes the memory ordering
3883of this operation. All ordering modes are possible. Note that using
3884[`Acquire`] makes the store part of this operation [`Relaxed`], and
3885using [`Release`] makes the load part [`Relaxed`].
3886
3887This function may generate more efficient code than `fetch_neg` on some platforms.
3888
3889- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3890
3891# Examples
3892
3893```
3894use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3895
3896let foo = ", stringify!($atomic_type), "::new(5);
3897foo.neg(Ordering::Relaxed);
3898assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3899foo.neg(Ordering::Relaxed);
3900assert_eq!(foo.load(Ordering::Relaxed), 5);
3901```"),
3902 #[inline]
3903 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3904 pub fn neg(&self, order: Ordering) {
3905 self.inner.neg(order);
3906 }
3907 }
3908 } // cfg_has_atomic_cas!
3909 } // cfg_has_atomic_cas_or_amo32!
3910
3911 const_fn! {
3912 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3913 /// Returns a mutable pointer to the underlying integer.
3914 ///
3915 /// Returning an `*mut` pointer from a shared reference to this atomic is
3916 /// safe because the atomic types work with interior mutability. Any use of
3917 /// the returned raw pointer requires an `unsafe` block and has to uphold
3918 /// the safety requirements. If there is concurrent access, note the following
3919 /// additional safety requirements:
3920 ///
3921 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3922 /// operations on it must be atomic.
3923 /// - Otherwise, any concurrent operations on it must be compatible with
3924 /// operations performed by this atomic type.
3925 ///
3926 /// This is `const fn` on Rust 1.58+.
3927 #[inline]
3928 pub const fn as_ptr(&self) -> *mut $int_type {
3929 self.inner.as_ptr()
3930 }
3931 }
3932 }
3933 // See https://github.com/taiki-e/portable-atomic/issues/180
3934 #[cfg(not(feature = "require-cas"))]
3935 cfg_no_atomic_cas! {
3936 #[doc(hidden)]
3937 #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
3938 impl<'a> $atomic_type {
3939 $cfg_no_atomic_cas_or_amo32_or_8! {
3940 #[inline]
3941 pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
3942 where
3943 &'a Self: HasSwap,
3944 {
3945 unimplemented!()
3946 }
3947 } // $cfg_no_atomic_cas_or_amo32_or_8!
3948 #[inline]
3949 pub fn compare_exchange(
3950 &self,
3951 current: $int_type,
3952 new: $int_type,
3953 success: Ordering,
3954 failure: Ordering,
3955 ) -> Result<$int_type, $int_type>
3956 where
3957 &'a Self: HasCompareExchange,
3958 {
3959 unimplemented!()
3960 }
3961 #[inline]
3962 pub fn compare_exchange_weak(
3963 &self,
3964 current: $int_type,
3965 new: $int_type,
3966 success: Ordering,
3967 failure: Ordering,
3968 ) -> Result<$int_type, $int_type>
3969 where
3970 &'a Self: HasCompareExchangeWeak,
3971 {
3972 unimplemented!()
3973 }
3974 $cfg_no_atomic_cas_or_amo32_or_8! {
3975 #[inline]
3976 pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
3977 where
3978 &'a Self: HasFetchAdd,
3979 {
3980 unimplemented!()
3981 }
3982 #[inline]
3983 pub fn add(&self, val: $int_type, order: Ordering)
3984 where
3985 &'a Self: HasAdd,
3986 {
3987 unimplemented!()
3988 }
3989 #[inline]
3990 pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
3991 where
3992 &'a Self: HasFetchSub,
3993 {
3994 unimplemented!()
3995 }
3996 #[inline]
3997 pub fn sub(&self, val: $int_type, order: Ordering)
3998 where
3999 &'a Self: HasSub,
4000 {
4001 unimplemented!()
4002 }
4003 } // $cfg_no_atomic_cas_or_amo32_or_8!
4004 cfg_no_atomic_cas_or_amo32! {
4005 #[inline]
4006 pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
4007 where
4008 &'a Self: HasFetchAnd,
4009 {
4010 unimplemented!()
4011 }
4012 #[inline]
4013 pub fn and(&self, val: $int_type, order: Ordering)
4014 where
4015 &'a Self: HasAnd,
4016 {
4017 unimplemented!()
4018 }
4019 } // cfg_no_atomic_cas_or_amo32!
4020 #[inline]
4021 pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
4022 where
4023 &'a Self: HasFetchNand,
4024 {
4025 unimplemented!()
4026 }
4027 cfg_no_atomic_cas_or_amo32! {
4028 #[inline]
4029 pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
4030 where
4031 &'a Self: HasFetchOr,
4032 {
4033 unimplemented!()
4034 }
4035 #[inline]
4036 pub fn or(&self, val: $int_type, order: Ordering)
4037 where
4038 &'a Self: HasOr,
4039 {
4040 unimplemented!()
4041 }
4042 #[inline]
4043 pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
4044 where
4045 &'a Self: HasFetchXor,
4046 {
4047 unimplemented!()
4048 }
4049 #[inline]
4050 pub fn xor(&self, val: $int_type, order: Ordering)
4051 where
4052 &'a Self: HasXor,
4053 {
4054 unimplemented!()
4055 }
4056 } // cfg_no_atomic_cas_or_amo32!
4057 #[inline]
4058 pub fn fetch_update<F>(
4059 &self,
4060 set_order: Ordering,
4061 fetch_order: Ordering,
4062 f: F,
4063 ) -> Result<$int_type, $int_type>
4064 where
4065 F: FnMut($int_type) -> Option<$int_type>,
4066 &'a Self: HasFetchUpdate,
4067 {
4068 unimplemented!()
4069 }
4070 $cfg_no_atomic_cas_or_amo32_or_8! {
4071 #[inline]
4072 pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
4073 where
4074 &'a Self: HasFetchMax,
4075 {
4076 unimplemented!()
4077 }
4078 #[inline]
4079 pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
4080 where
4081 &'a Self: HasFetchMin,
4082 {
4083 unimplemented!()
4084 }
4085 } // $cfg_no_atomic_cas_or_amo32_or_8!
4086 cfg_no_atomic_cas_or_amo32! {
4087 #[inline]
4088 pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
4089 where
4090 &'a Self: HasBitSet,
4091 {
4092 unimplemented!()
4093 }
4094 #[inline]
4095 pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
4096 where
4097 &'a Self: HasBitClear,
4098 {
4099 unimplemented!()
4100 }
4101 #[inline]
4102 pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
4103 where
4104 &'a Self: HasBitToggle,
4105 {
4106 unimplemented!()
4107 }
4108 #[inline]
4109 pub fn fetch_not(&self, order: Ordering) -> $int_type
4110 where
4111 &'a Self: HasFetchNot,
4112 {
4113 unimplemented!()
4114 }
4115 #[inline]
4116 pub fn not(&self, order: Ordering)
4117 where
4118 &'a Self: HasNot,
4119 {
4120 unimplemented!()
4121 }
4122 } // cfg_no_atomic_cas_or_amo32!
4123 #[inline]
4124 pub fn fetch_neg(&self, order: Ordering) -> $int_type
4125 where
4126 &'a Self: HasFetchNeg,
4127 {
4128 unimplemented!()
4129 }
4130 #[inline]
4131 pub fn neg(&self, order: Ordering)
4132 where
4133 &'a Self: HasNeg,
4134 {
4135 unimplemented!()
4136 }
4137 }
4138 } // cfg_no_atomic_cas!
4139 $(
4140 #[$cfg_float]
4141 atomic_int!(float,
4142 #[$cfg_float] $atomic_float_type, $float_type, $atomic_type, $int_type, $align
4143 );
4144 )?
4145 };
4146
4147 // AtomicF* impls
4148 (float,
4149 #[$cfg_float:meta]
4150 $atomic_type:ident,
4151 $float_type:ident,
4152 $atomic_int_type:ident,
4153 $int_type:ident,
4154 $align:literal
4155 ) => {
4156 doc_comment! {
4157 concat!("A floating point type which can be safely shared between threads.
4158
4159This type has the same in-memory representation as the underlying floating point type,
4160[`", stringify!($float_type), "`].
4161"
4162 ),
4163 #[cfg_attr(docsrs, doc($cfg_float))]
4164 // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
4165 // will show clearer docs.
4166 #[repr(C, align($align))]
4167 pub struct $atomic_type {
4168 inner: imp::float::$atomic_type,
4169 }
4170 }
4171
4172 impl Default for $atomic_type {
4173 #[inline]
4174 fn default() -> Self {
4175 Self::new($float_type::default())
4176 }
4177 }
4178
4179 impl From<$float_type> for $atomic_type {
4180 #[inline]
4181 fn from(v: $float_type) -> Self {
4182 Self::new(v)
4183 }
4184 }
4185
4186 // UnwindSafe is implicitly implemented.
4187 #[cfg(not(portable_atomic_no_core_unwind_safe))]
4188 impl core::panic::RefUnwindSafe for $atomic_type {}
4189 #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
4190 impl std::panic::RefUnwindSafe for $atomic_type {}
4191
4192 impl_debug_and_serde!($atomic_type);
4193
4194 impl $atomic_type {
4195 /// Creates a new atomic float.
4196 #[inline]
4197 #[must_use]
4198 pub const fn new(v: $float_type) -> Self {
4199 static_assert_layout!($atomic_type, $float_type);
4200 Self { inner: imp::float::$atomic_type::new(v) }
4201 }
4202
4203 // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
4204 #[cfg(not(portable_atomic_no_const_mut_refs))]
4205 doc_comment! {
4206 concat!("Creates a new reference to an atomic float from a pointer.
4207
4208This is `const fn` on Rust 1.83+.
4209
4210# Safety
4211
4212* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4213 can be bigger than `align_of::<", stringify!($float_type), ">()`).
4214* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4215* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4216 behind `ptr` must have a happens-before relationship with atomic accesses via
4217 the returned value (or vice-versa).
4218 * In other words, time periods where the value is accessed atomically may not
4219 overlap with periods where the value is accessed non-atomically.
4220 * This requirement is trivially satisfied if `ptr` is never used non-atomically
4221 for the duration of lifetime `'a`. Most use cases should be able to follow
4222 this guideline.
4223 * This requirement is also trivially satisfied if all accesses (atomic or not) are
4224 done from the same thread.
4225* If this atomic type is *not* lock-free:
4226 * Any accesses to the value behind `ptr` must have a happens-before relationship
4227 with accesses via the returned value (or vice-versa).
4228 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4229 be compatible with operations performed by this atomic type.
4230* This method must not be used to create overlapping or mixed-size atomic
4231 accesses, as these are not supported by the memory model.
4232
4233[valid]: core::ptr#safety"),
4234 #[inline]
4235 #[must_use]
4236 pub const unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4237 #[allow(clippy::cast_ptr_alignment)]
4238 // SAFETY: guaranteed by the caller
4239 unsafe { &*(ptr as *mut Self) }
4240 }
4241 }
4242 #[cfg(portable_atomic_no_const_mut_refs)]
4243 doc_comment! {
4244 concat!("Creates a new reference to an atomic float from a pointer.
4245
4246This is `const fn` on Rust 1.83+.
4247
4248# Safety
4249
4250* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
4251 can be bigger than `align_of::<", stringify!($float_type), ">()`).
4252* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
4253* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
4254 behind `ptr` must have a happens-before relationship with atomic accesses via
4255 the returned value (or vice-versa).
4256 * In other words, time periods where the value is accessed atomically may not
4257 overlap with periods where the value is accessed non-atomically.
4258 * This requirement is trivially satisfied if `ptr` is never used non-atomically
4259 for the duration of lifetime `'a`. Most use cases should be able to follow
4260 this guideline.
4261 * This requirement is also trivially satisfied if all accesses (atomic or not) are
4262 done from the same thread.
4263* If this atomic type is *not* lock-free:
4264 * Any accesses to the value behind `ptr` must have a happens-before relationship
4265 with accesses via the returned value (or vice-versa).
4266 * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
4267 be compatible with operations performed by this atomic type.
4268* This method must not be used to create overlapping or mixed-size atomic
4269 accesses, as these are not supported by the memory model.
4270
4271[valid]: core::ptr#safety"),
4272 #[inline]
4273 #[must_use]
4274 pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
4275 #[allow(clippy::cast_ptr_alignment)]
4276 // SAFETY: guaranteed by the caller
4277 unsafe { &*(ptr as *mut Self) }
4278 }
4279 }
4280
4281 /// Returns `true` if operations on values of this type are lock-free.
4282 ///
4283 /// If the compiler or the platform doesn't support the necessary
4284 /// atomic instructions, global locks for every potentially
4285 /// concurrent atomic operation will be used.
4286 #[inline]
4287 #[must_use]
4288 pub fn is_lock_free() -> bool {
4289 <imp::float::$atomic_type>::is_lock_free()
4290 }
4291
4292 /// Returns `true` if operations on values of this type are lock-free.
4293 ///
4294 /// If the compiler or the platform doesn't support the necessary
4295 /// atomic instructions, global locks for every potentially
4296 /// concurrent atomic operation will be used.
4297 ///
4298 /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
4299 /// this type may be lock-free even if the function returns false.
4300 #[inline]
4301 #[must_use]
4302 pub const fn is_always_lock_free() -> bool {
4303 <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
4304 }
4305 #[cfg(test)]
4306 const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
4307
4308 const_fn! {
4309 const_if: #[cfg(not(portable_atomic_no_const_mut_refs))];
4310 /// Returns a mutable reference to the underlying float.
4311 ///
4312 /// This is safe because the mutable reference guarantees that no other threads are
4313 /// concurrently accessing the atomic data.
4314 ///
4315 /// This is `const fn` on Rust 1.83+.
4316 #[inline]
4317 pub const fn get_mut(&mut self) -> &mut $float_type {
4318 // SAFETY: the mutable reference guarantees unique ownership.
4319 unsafe { &mut *self.as_ptr() }
4320 }
4321 }
4322
4323 // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
4324 // https://github.com/rust-lang/rust/issues/76314
4325
4326 const_fn! {
4327 const_if: #[cfg(not(portable_atomic_no_const_transmute))];
4328 /// Consumes the atomic and returns the contained value.
4329 ///
4330 /// This is safe because passing `self` by value guarantees that no other threads are
4331 /// concurrently accessing the atomic data.
4332 ///
4333 /// This is `const fn` on Rust 1.56+.
4334 #[inline]
4335 pub const fn into_inner(self) -> $float_type {
4336 // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
4337 // so they can be safely transmuted.
4338 // (const UnsafeCell::into_inner is unstable)
4339 unsafe { core::mem::transmute(self) }
4340 }
4341 }
4342
4343 /// Loads a value from the atomic float.
4344 ///
4345 /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4346 /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
4347 ///
4348 /// # Panics
4349 ///
4350 /// Panics if `order` is [`Release`] or [`AcqRel`].
4351 #[inline]
4352 #[cfg_attr(
4353 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4354 track_caller
4355 )]
4356 pub fn load(&self, order: Ordering) -> $float_type {
4357 self.inner.load(order)
4358 }
4359
4360 /// Stores a value into the atomic float.
4361 ///
4362 /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
4363 /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
4364 ///
4365 /// # Panics
4366 ///
4367 /// Panics if `order` is [`Acquire`] or [`AcqRel`].
4368 #[inline]
4369 #[cfg_attr(
4370 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4371 track_caller
4372 )]
4373 pub fn store(&self, val: $float_type, order: Ordering) {
4374 self.inner.store(val, order)
4375 }
4376
4377 cfg_has_atomic_cas_or_amo32! {
4378 /// Stores a value into the atomic float, returning the previous value.
4379 ///
4380 /// `swap` takes an [`Ordering`] argument which describes the memory ordering
4381 /// of this operation. All ordering modes are possible. Note that using
4382 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4383 /// using [`Release`] makes the load part [`Relaxed`].
4384 #[inline]
4385 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4386 pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
4387 self.inner.swap(val, order)
4388 }
4389
4390 cfg_has_atomic_cas! {
4391 /// Stores a value into the atomic float if the current value is the same as
4392 /// the `current` value.
4393 ///
4394 /// The return value is a result indicating whether the new value was written and
4395 /// containing the previous value. On success this value is guaranteed to be equal to
4396 /// `current`.
4397 ///
4398 /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
4399 /// ordering of this operation. `success` describes the required ordering for the
4400 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4401 /// `failure` describes the required ordering for the load operation that takes place when
4402 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4403 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4404 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4405 ///
4406 /// # Panics
4407 ///
4408 /// Panics if `failure` is [`Release`], [`AcqRel`].
4409 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4410 #[inline]
4411 #[cfg_attr(
4412 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4413 track_caller
4414 )]
4415 pub fn compare_exchange(
4416 &self,
4417 current: $float_type,
4418 new: $float_type,
4419 success: Ordering,
4420 failure: Ordering,
4421 ) -> Result<$float_type, $float_type> {
4422 self.inner.compare_exchange(current, new, success, failure)
4423 }
4424
4425 /// Stores a value into the atomic float if the current value is the same as
4426 /// the `current` value.
4427 /// Unlike [`compare_exchange`](Self::compare_exchange)
4428 /// this function is allowed to spuriously fail even
4429 /// when the comparison succeeds, which can result in more efficient code on some
4430 /// platforms. The return value is a result indicating whether the new value was
4431 /// written and containing the previous value.
4432 ///
4433 /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
4434 /// ordering of this operation. `success` describes the required ordering for the
4435 /// read-modify-write operation that takes place if the comparison with `current` succeeds.
4436 /// `failure` describes the required ordering for the load operation that takes place when
4437 /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
4438 /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
4439 /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4440 ///
4441 /// # Panics
4442 ///
4443 /// Panics if `failure` is [`Release`], [`AcqRel`].
4444 #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
4445 #[inline]
4446 #[cfg_attr(
4447 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4448 track_caller
4449 )]
4450 pub fn compare_exchange_weak(
4451 &self,
4452 current: $float_type,
4453 new: $float_type,
4454 success: Ordering,
4455 failure: Ordering,
4456 ) -> Result<$float_type, $float_type> {
4457 self.inner.compare_exchange_weak(current, new, success, failure)
4458 }
4459
4460 /// Adds to the current value, returning the previous value.
4461 ///
4462 /// This operation wraps around on overflow.
4463 ///
4464 /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
4465 /// of this operation. All ordering modes are possible. Note that using
4466 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4467 /// using [`Release`] makes the load part [`Relaxed`].
4468 #[inline]
4469 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4470 pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
4471 self.inner.fetch_add(val, order)
4472 }
4473
4474 /// Subtracts from the current value, returning the previous value.
4475 ///
4476 /// This operation wraps around on overflow.
4477 ///
4478 /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
4479 /// of this operation. All ordering modes are possible. Note that using
4480 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4481 /// using [`Release`] makes the load part [`Relaxed`].
4482 #[inline]
4483 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4484 pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
4485 self.inner.fetch_sub(val, order)
4486 }
4487
4488 /// Fetches the value, and applies a function to it that returns an optional
4489 /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
4490 /// `Err(previous_value)`.
4491 ///
4492 /// Note: This may call the function multiple times if the value has been changed from other threads in
4493 /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
4494 /// only once to the stored value.
4495 ///
4496 /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
4497 /// The first describes the required ordering for when the operation finally succeeds while the second
4498 /// describes the required ordering for loads. These correspond to the success and failure orderings of
4499 /// [`compare_exchange`](Self::compare_exchange) respectively.
4500 ///
4501 /// Using [`Acquire`] as success ordering makes the store part
4502 /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
4503 /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
4504 ///
4505 /// # Panics
4506 ///
4507 /// Panics if `fetch_order` is [`Release`], [`AcqRel`].
4508 ///
4509 /// # Considerations
4510 ///
4511 /// This method is not magic; it is not provided by the hardware.
4512 /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
4513 /// and suffers from the same drawbacks.
4514 /// In particular, this method will not circumvent the [ABA Problem].
4515 ///
4516 /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
4517 #[inline]
4518 #[cfg_attr(
4519 any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
4520 track_caller
4521 )]
4522 pub fn fetch_update<F>(
4523 &self,
4524 set_order: Ordering,
4525 fetch_order: Ordering,
4526 mut f: F,
4527 ) -> Result<$float_type, $float_type>
4528 where
4529 F: FnMut($float_type) -> Option<$float_type>,
4530 {
4531 let mut prev = self.load(fetch_order);
4532 while let Some(next) = f(prev) {
4533 match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
4534 x @ Ok(_) => return x,
4535 Err(next_prev) => prev = next_prev,
4536 }
4537 }
4538 Err(prev)
4539 }
4540
4541 /// Maximum with the current value.
4542 ///
4543 /// Finds the maximum of the current value and the argument `val`, and
4544 /// sets the new value to the result.
4545 ///
4546 /// Returns the previous value.
4547 ///
4548 /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
4549 /// of this operation. All ordering modes are possible. Note that using
4550 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4551 /// using [`Release`] makes the load part [`Relaxed`].
4552 #[inline]
4553 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4554 pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
4555 self.inner.fetch_max(val, order)
4556 }
4557
4558 /// Minimum with the current value.
4559 ///
4560 /// Finds the minimum of the current value and the argument `val`, and
4561 /// sets the new value to the result.
4562 ///
4563 /// Returns the previous value.
4564 ///
4565 /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
4566 /// of this operation. All ordering modes are possible. Note that using
4567 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4568 /// using [`Release`] makes the load part [`Relaxed`].
4569 #[inline]
4570 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4571 pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
4572 self.inner.fetch_min(val, order)
4573 }
4574 } // cfg_has_atomic_cas!
4575
4576 /// Negates the current value, and sets the new value to the result.
4577 ///
4578 /// Returns the previous value.
4579 ///
4580 /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
4581 /// of this operation. All ordering modes are possible. Note that using
4582 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4583 /// using [`Release`] makes the load part [`Relaxed`].
4584 #[inline]
4585 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4586 pub fn fetch_neg(&self, order: Ordering) -> $float_type {
4587 self.inner.fetch_neg(order)
4588 }
4589
4590 /// Computes the absolute value of the current value, and sets the
4591 /// new value to the result.
4592 ///
4593 /// Returns the previous value.
4594 ///
4595 /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
4596 /// of this operation. All ordering modes are possible. Note that using
4597 /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
4598 /// using [`Release`] makes the load part [`Relaxed`].
4599 #[inline]
4600 #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
4601 pub fn fetch_abs(&self, order: Ordering) -> $float_type {
4602 self.inner.fetch_abs(order)
4603 }
4604 } // cfg_has_atomic_cas_or_amo32!
4605
4606 #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
4607 doc_comment! {
4608 concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4609
4610See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4611portability of this operation (there are almost no issues).
4612
4613This is `const fn` on Rust 1.58+."),
4614 #[inline]
4615 pub const fn as_bits(&self) -> &$atomic_int_type {
4616 self.inner.as_bits()
4617 }
4618 }
4619 #[cfg(portable_atomic_no_const_raw_ptr_deref)]
4620 doc_comment! {
4621 concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.
4622
4623See [`", stringify!($float_type) ,"::from_bits`] for some discussion of the
4624portability of this operation (there are almost no issues).
4625
4626This is `const fn` on Rust 1.58+."),
4627 #[inline]
4628 pub fn as_bits(&self) -> &$atomic_int_type {
4629 self.inner.as_bits()
4630 }
4631 }
4632
4633 const_fn! {
4634 const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
4635 /// Returns a mutable pointer to the underlying float.
4636 ///
4637 /// Returning an `*mut` pointer from a shared reference to this atomic is
4638 /// safe because the atomic types work with interior mutability. Any use of
4639 /// the returned raw pointer requires an `unsafe` block and has to uphold
4640 /// the safety requirements. If there is concurrent access, note the following
4641 /// additional safety requirements:
4642 ///
4643 /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
4644 /// operations on it must be atomic.
4645 /// - Otherwise, any concurrent operations on it must be compatible with
4646 /// operations performed by this atomic type.
4647 ///
4648 /// This is `const fn` on Rust 1.58+.
4649 #[inline]
4650 pub const fn as_ptr(&self) -> *mut $float_type {
4651 self.inner.as_ptr()
4652 }
4653 }
4654 }
4655 // See https://github.com/taiki-e/portable-atomic/issues/180
4656 #[cfg(not(feature = "require-cas"))]
4657 cfg_no_atomic_cas! {
4658 #[doc(hidden)]
4659 #[allow(unused_variables, clippy::unused_self, clippy::extra_unused_lifetimes)]
4660 impl<'a> $atomic_type {
4661 cfg_no_atomic_cas_or_amo32! {
4662 #[inline]
4663 pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
4664 where
4665 &'a Self: HasSwap,
4666 {
4667 unimplemented!()
4668 }
4669 } // cfg_no_atomic_cas_or_amo32!
4670 #[inline]
4671 pub fn compare_exchange(
4672 &self,
4673 current: $float_type,
4674 new: $float_type,
4675 success: Ordering,
4676 failure: Ordering,
4677 ) -> Result<$float_type, $float_type>
4678 where
4679 &'a Self: HasCompareExchange,
4680 {
4681 unimplemented!()
4682 }
4683 #[inline]
4684 pub fn compare_exchange_weak(
4685 &self,
4686 current: $float_type,
4687 new: $float_type,
4688 success: Ordering,
4689 failure: Ordering,
4690 ) -> Result<$float_type, $float_type>
4691 where
4692 &'a Self: HasCompareExchangeWeak,
4693 {
4694 unimplemented!()
4695 }
4696 #[inline]
4697 pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
4698 where
4699 &'a Self: HasFetchAdd,
4700 {
4701 unimplemented!()
4702 }
4703 #[inline]
4704 pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
4705 where
4706 &'a Self: HasFetchSub,
4707 {
4708 unimplemented!()
4709 }
4710 #[inline]
4711 pub fn fetch_update<F>(
4712 &self,
4713 set_order: Ordering,
4714 fetch_order: Ordering,
4715 f: F,
4716 ) -> Result<$float_type, $float_type>
4717 where
4718 F: FnMut($float_type) -> Option<$float_type>,
4719 &'a Self: HasFetchUpdate,
4720 {
4721 unimplemented!()
4722 }
4723 #[inline]
4724 pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
4725 where
4726 &'a Self: HasFetchMax,
4727 {
4728 unimplemented!()
4729 }
4730 #[inline]
4731 pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
4732 where
4733 &'a Self: HasFetchMin,
4734 {
4735 unimplemented!()
4736 }
4737 cfg_no_atomic_cas_or_amo32! {
4738 #[inline]
4739 pub fn fetch_neg(&self, order: Ordering) -> $float_type
4740 where
4741 &'a Self: HasFetchNeg,
4742 {
4743 unimplemented!()
4744 }
4745 #[inline]
4746 pub fn fetch_abs(&self, order: Ordering) -> $float_type
4747 where
4748 &'a Self: HasFetchAbs,
4749 {
4750 unimplemented!()
4751 }
4752 } // cfg_no_atomic_cas_or_amo32!
4753 }
4754 } // cfg_no_atomic_cas!
4755 };
4756}
4757
4758cfg_has_atomic_ptr! {
4759 #[cfg(target_pointer_width = "16")]
4760 atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4761 #[cfg(target_pointer_width = "16")]
4762 atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4763 #[cfg(target_pointer_width = "32")]
4764 atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4765 #[cfg(target_pointer_width = "32")]
4766 atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4767 #[cfg(target_pointer_width = "64")]
4768 atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4769 #[cfg(target_pointer_width = "64")]
4770 atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4771 #[cfg(target_pointer_width = "128")]
4772 atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4773 #[cfg(target_pointer_width = "128")]
4774 atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4775}
4776
4777cfg_has_atomic_8! {
4778 atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4779 atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4780}
4781cfg_has_atomic_16! {
4782 atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
4783 atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8,
4784 #[cfg(all(feature = "float", portable_atomic_unstable_f16))] AtomicF16, f16);
4785}
4786cfg_has_atomic_32! {
4787 atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4788 atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4789 #[cfg(feature = "float")] AtomicF32, f32);
4790}
4791cfg_has_atomic_64! {
4792 atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4793 atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4794 #[cfg(feature = "float")] AtomicF64, f64);
4795}
4796cfg_has_atomic_128! {
4797 atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
4798 atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
4799 #[cfg(all(feature = "float", portable_atomic_unstable_f128))] AtomicF128, f128);
4800}
4801
4802// See https://github.com/taiki-e/portable-atomic/issues/180
4803#[cfg(not(feature = "require-cas"))]
4804cfg_no_atomic_cas! {
4805cfg_no_atomic_cas_or_amo32! {
4806#[cfg(feature = "float")]
4807use self::diagnostic_helper::HasFetchAbs;
4808use self::diagnostic_helper::{
4809 HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
4810 HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
4811};
4812} // cfg_no_atomic_cas_or_amo32!
4813cfg_no_atomic_cas_or_amo8! {
4814use self::diagnostic_helper::{HasAdd, HasSub, HasSwap};
4815} // cfg_no_atomic_cas_or_amo8!
4816#[cfg_attr(not(feature = "float"), allow(unused_imports))]
4817use self::diagnostic_helper::{
4818 HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
4819 HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
4820};
4821#[cfg_attr(
4822 any(
4823 all(
4824 portable_atomic_no_atomic_load_store,
4825 not(any(
4826 target_arch = "avr",
4827 target_arch = "bpf",
4828 target_arch = "msp430",
4829 target_arch = "riscv32",
4830 target_arch = "riscv64",
4831 feature = "critical-section",
4832 portable_atomic_unsafe_assume_single_core,
4833 )),
4834 ),
4835 not(feature = "float"),
4836 ),
4837 allow(dead_code, unreachable_pub)
4838)]
4839#[allow(unknown_lints, unnameable_types)] // Not public API. unnameable_types is available on Rust 1.79+
4840mod diagnostic_helper {
4841 cfg_no_atomic_cas_or_amo8! {
4842 #[doc(hidden)]
4843 #[cfg_attr(
4844 not(portable_atomic_no_diagnostic_namespace),
4845 diagnostic::on_unimplemented(
4846 message = "`swap` requires atomic CAS but not available on this target by default",
4847 label = "this associated function is not available on this target by default",
4848 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4849 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4850 )
4851 )]
4852 pub trait HasSwap {}
4853 } // cfg_no_atomic_cas_or_amo8!
4854 #[doc(hidden)]
4855 #[cfg_attr(
4856 not(portable_atomic_no_diagnostic_namespace),
4857 diagnostic::on_unimplemented(
4858 message = "`compare_exchange` requires atomic CAS but not available on this target by default",
4859 label = "this associated function is not available on this target by default",
4860 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4861 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4862 )
4863 )]
4864 pub trait HasCompareExchange {}
4865 #[doc(hidden)]
4866 #[cfg_attr(
4867 not(portable_atomic_no_diagnostic_namespace),
4868 diagnostic::on_unimplemented(
4869 message = "`compare_exchange_weak` requires atomic CAS but not available on this target by default",
4870 label = "this associated function is not available on this target by default",
4871 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4872 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4873 )
4874 )]
4875 pub trait HasCompareExchangeWeak {}
4876 #[doc(hidden)]
4877 #[cfg_attr(
4878 not(portable_atomic_no_diagnostic_namespace),
4879 diagnostic::on_unimplemented(
4880 message = "`fetch_add` requires atomic CAS but not available on this target by default",
4881 label = "this associated function is not available on this target by default",
4882 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4883 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4884 )
4885 )]
4886 pub trait HasFetchAdd {}
4887 cfg_no_atomic_cas_or_amo8! {
4888 #[doc(hidden)]
4889 #[cfg_attr(
4890 not(portable_atomic_no_diagnostic_namespace),
4891 diagnostic::on_unimplemented(
4892 message = "`add` requires atomic CAS but not available on this target by default",
4893 label = "this associated function is not available on this target by default",
4894 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4895 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4896 )
4897 )]
4898 pub trait HasAdd {}
4899 } // cfg_no_atomic_cas_or_amo8!
4900 #[doc(hidden)]
4901 #[cfg_attr(
4902 not(portable_atomic_no_diagnostic_namespace),
4903 diagnostic::on_unimplemented(
4904 message = "`fetch_sub` requires atomic CAS but not available on this target by default",
4905 label = "this associated function is not available on this target by default",
4906 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4907 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4908 )
4909 )]
4910 pub trait HasFetchSub {}
4911 cfg_no_atomic_cas_or_amo8! {
4912 #[doc(hidden)]
4913 #[cfg_attr(
4914 not(portable_atomic_no_diagnostic_namespace),
4915 diagnostic::on_unimplemented(
4916 message = "`sub` requires atomic CAS but not available on this target by default",
4917 label = "this associated function is not available on this target by default",
4918 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4919 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4920 )
4921 )]
4922 pub trait HasSub {}
4923 } // cfg_no_atomic_cas_or_amo8!
4924 cfg_no_atomic_cas_or_amo32! {
4925 #[doc(hidden)]
4926 #[cfg_attr(
4927 not(portable_atomic_no_diagnostic_namespace),
4928 diagnostic::on_unimplemented(
4929 message = "`fetch_ptr_add` requires atomic CAS but not available on this target by default",
4930 label = "this associated function is not available on this target by default",
4931 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4932 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4933 )
4934 )]
4935 pub trait HasFetchPtrAdd {}
4936 #[doc(hidden)]
4937 #[cfg_attr(
4938 not(portable_atomic_no_diagnostic_namespace),
4939 diagnostic::on_unimplemented(
4940 message = "`fetch_ptr_sub` requires atomic CAS but not available on this target by default",
4941 label = "this associated function is not available on this target by default",
4942 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4943 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4944 )
4945 )]
4946 pub trait HasFetchPtrSub {}
4947 #[doc(hidden)]
4948 #[cfg_attr(
4949 not(portable_atomic_no_diagnostic_namespace),
4950 diagnostic::on_unimplemented(
4951 message = "`fetch_byte_add` requires atomic CAS but not available on this target by default",
4952 label = "this associated function is not available on this target by default",
4953 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4954 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4955 )
4956 )]
4957 pub trait HasFetchByteAdd {}
4958 #[doc(hidden)]
4959 #[cfg_attr(
4960 not(portable_atomic_no_diagnostic_namespace),
4961 diagnostic::on_unimplemented(
4962 message = "`fetch_byte_sub` requires atomic CAS but not available on this target by default",
4963 label = "this associated function is not available on this target by default",
4964 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4965 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4966 )
4967 )]
4968 pub trait HasFetchByteSub {}
4969 #[doc(hidden)]
4970 #[cfg_attr(
4971 not(portable_atomic_no_diagnostic_namespace),
4972 diagnostic::on_unimplemented(
4973 message = "`fetch_and` requires atomic CAS but not available on this target by default",
4974 label = "this associated function is not available on this target by default",
4975 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4976 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4977 )
4978 )]
4979 pub trait HasFetchAnd {}
4980 #[doc(hidden)]
4981 #[cfg_attr(
4982 not(portable_atomic_no_diagnostic_namespace),
4983 diagnostic::on_unimplemented(
4984 message = "`and` requires atomic CAS but not available on this target by default",
4985 label = "this associated function is not available on this target by default",
4986 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4987 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
4988 )
4989 )]
4990 pub trait HasAnd {}
4991 } // cfg_no_atomic_cas_or_amo32!
4992 #[doc(hidden)]
4993 #[cfg_attr(
4994 not(portable_atomic_no_diagnostic_namespace),
4995 diagnostic::on_unimplemented(
4996 message = "`fetch_nand` requires atomic CAS but not available on this target by default",
4997 label = "this associated function is not available on this target by default",
4998 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
4999 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5000 )
5001 )]
5002 pub trait HasFetchNand {}
5003 cfg_no_atomic_cas_or_amo32! {
5004 #[doc(hidden)]
5005 #[cfg_attr(
5006 not(portable_atomic_no_diagnostic_namespace),
5007 diagnostic::on_unimplemented(
5008 message = "`fetch_or` requires atomic CAS but not available on this target by default",
5009 label = "this associated function is not available on this target by default",
5010 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5011 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5012 )
5013 )]
5014 pub trait HasFetchOr {}
5015 #[doc(hidden)]
5016 #[cfg_attr(
5017 not(portable_atomic_no_diagnostic_namespace),
5018 diagnostic::on_unimplemented(
5019 message = "`or` requires atomic CAS but not available on this target by default",
5020 label = "this associated function is not available on this target by default",
5021 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5022 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5023 )
5024 )]
5025 pub trait HasOr {}
5026 #[doc(hidden)]
5027 #[cfg_attr(
5028 not(portable_atomic_no_diagnostic_namespace),
5029 diagnostic::on_unimplemented(
5030 message = "`fetch_xor` requires atomic CAS but not available on this target by default",
5031 label = "this associated function is not available on this target by default",
5032 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5033 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5034 )
5035 )]
5036 pub trait HasFetchXor {}
5037 #[doc(hidden)]
5038 #[cfg_attr(
5039 not(portable_atomic_no_diagnostic_namespace),
5040 diagnostic::on_unimplemented(
5041 message = "`xor` requires atomic CAS but not available on this target by default",
5042 label = "this associated function is not available on this target by default",
5043 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5044 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5045 )
5046 )]
5047 pub trait HasXor {}
5048 #[doc(hidden)]
5049 #[cfg_attr(
5050 not(portable_atomic_no_diagnostic_namespace),
5051 diagnostic::on_unimplemented(
5052 message = "`fetch_not` requires atomic CAS but not available on this target by default",
5053 label = "this associated function is not available on this target by default",
5054 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5055 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5056 )
5057 )]
5058 pub trait HasFetchNot {}
5059 #[doc(hidden)]
5060 #[cfg_attr(
5061 not(portable_atomic_no_diagnostic_namespace),
5062 diagnostic::on_unimplemented(
5063 message = "`not` requires atomic CAS but not available on this target by default",
5064 label = "this associated function is not available on this target by default",
5065 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5066 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5067 )
5068 )]
5069 pub trait HasNot {}
5070 } // cfg_no_atomic_cas_or_amo32!
5071 #[doc(hidden)]
5072 #[cfg_attr(
5073 not(portable_atomic_no_diagnostic_namespace),
5074 diagnostic::on_unimplemented(
5075 message = "`fetch_neg` requires atomic CAS but not available on this target by default",
5076 label = "this associated function is not available on this target by default",
5077 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5078 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5079 )
5080 )]
5081 pub trait HasFetchNeg {}
5082 #[doc(hidden)]
5083 #[cfg_attr(
5084 not(portable_atomic_no_diagnostic_namespace),
5085 diagnostic::on_unimplemented(
5086 message = "`neg` requires atomic CAS but not available on this target by default",
5087 label = "this associated function is not available on this target by default",
5088 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5089 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5090 )
5091 )]
5092 pub trait HasNeg {}
5093 cfg_no_atomic_cas_or_amo32! {
5094 #[cfg(feature = "float")]
5095 #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
5096 #[doc(hidden)]
5097 #[cfg_attr(
5098 not(portable_atomic_no_diagnostic_namespace),
5099 diagnostic::on_unimplemented(
5100 message = "`fetch_abs` requires atomic CAS but not available on this target by default",
5101 label = "this associated function is not available on this target by default",
5102 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5103 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5104 )
5105 )]
5106 pub trait HasFetchAbs {}
5107 } // cfg_no_atomic_cas_or_amo32!
5108 #[doc(hidden)]
5109 #[cfg_attr(
5110 not(portable_atomic_no_diagnostic_namespace),
5111 diagnostic::on_unimplemented(
5112 message = "`fetch_min` requires atomic CAS but not available on this target by default",
5113 label = "this associated function is not available on this target by default",
5114 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5115 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5116 )
5117 )]
5118 pub trait HasFetchMin {}
5119 #[doc(hidden)]
5120 #[cfg_attr(
5121 not(portable_atomic_no_diagnostic_namespace),
5122 diagnostic::on_unimplemented(
5123 message = "`fetch_max` requires atomic CAS but not available on this target by default",
5124 label = "this associated function is not available on this target by default",
5125 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5126 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5127 )
5128 )]
5129 pub trait HasFetchMax {}
5130 #[doc(hidden)]
5131 #[cfg_attr(
5132 not(portable_atomic_no_diagnostic_namespace),
5133 diagnostic::on_unimplemented(
5134 message = "`fetch_update` requires atomic CAS but not available on this target by default",
5135 label = "this associated function is not available on this target by default",
5136 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5137 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5138 )
5139 )]
5140 pub trait HasFetchUpdate {}
5141 cfg_no_atomic_cas_or_amo32! {
5142 #[doc(hidden)]
5143 #[cfg_attr(
5144 not(portable_atomic_no_diagnostic_namespace),
5145 diagnostic::on_unimplemented(
5146 message = "`bit_set` requires atomic CAS but not available on this target by default",
5147 label = "this associated function is not available on this target by default",
5148 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5149 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5150 )
5151 )]
5152 pub trait HasBitSet {}
5153 #[doc(hidden)]
5154 #[cfg_attr(
5155 not(portable_atomic_no_diagnostic_namespace),
5156 diagnostic::on_unimplemented(
5157 message = "`bit_clear` requires atomic CAS but not available on this target by default",
5158 label = "this associated function is not available on this target by default",
5159 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5160 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5161 )
5162 )]
5163 pub trait HasBitClear {}
5164 #[doc(hidden)]
5165 #[cfg_attr(
5166 not(portable_atomic_no_diagnostic_namespace),
5167 diagnostic::on_unimplemented(
5168 message = "`bit_toggle` requires atomic CAS but not available on this target by default",
5169 label = "this associated function is not available on this target by default",
5170 note = "consider enabling one of the `critical-section` feature or `unsafe-assume-single-core` feature (or `portable_atomic_unsafe_assume_single_core` cfg)",
5171 note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
5172 )
5173 )]
5174 pub trait HasBitToggle {}
5175 } // cfg_no_atomic_cas_or_amo32!
5176}
5177} // cfg_no_atomic_cas!