pub fn impls_copy_weak<T>() -> bool
where
    T: ?Sized,

Available on crate features alloc and unreliable only.
Returns true if the given type implements Copy.
Use the define_impls_trait_ignore_lt_fn macro to generate analogous
check functions for other traits.
Library tests ensure that the impls_trait checks are performed
at compile time and fully optimized with no runtime cost at
opt-level >= 1. Note that the release profile uses
opt-level = 3 by default.
§Reliability
While it is unlikely, there is still a possibility that this function may return false negatives in future Rust versions.
The correctness of the results returned by the functions depends on the following:
- Documented behavior that if T implements Eq, two Rcs that point to the same allocation are always equal: https://doc.rust-lang.org/1.82.0/std/rc/struct.Rc.html#method.eq.
- Undocumented behavior that the Rc::partial_eq implementation for T: Eq will not use PartialEq::eq if both Rcs point to the same memory location.
- The assumption that the undocumented short-circuit behavior described above will be retained for optimization purposes.
There is no formal guarantee that the undocumented behavior
described above will be retained. If the implementation changes
in a future Rust version, the function may return a false
negative, that is, it may return false, even though T
implements the trait. However, the implementation guarantees
that a false positive result is impossible, i.e., the function
will never return true if T does not implement the trait in
any future Rust version.
Details:
- https://internals.rust-lang.org/t/rc-uses-visibly-behavior-changing-specialization-is-that-okay/16173/6,
- https://users.rust-lang.org/t/hack-to-specialize-w-write-for-vec-u8/100366,
- https://doc.rust-lang.org/1.82.0/std/rc/struct.Rc.html#method.eq,
- https://github.com/rust-lang/rust/issues/42655.
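The Rc short-circuit described above can be sketched with the standard
library alone. This is a hypothetical illustration of the detection idea,
not this crate's actual implementation; Probe, eq_called, and detect_copy
are invented names, and the assertions assume the undocumented behavior
still holds on the current compiler:

```rust
use std::cell::Cell;
use std::marker::PhantomData;
use std::rc::Rc;

// `Probe<T>` implements `Eq` only when `T: Copy`, so std's undocumented
// pointer-equality short-circuit in `Rc`'s `PartialEq` can be selected
// only for types `T` that implement `Copy`.
struct Probe<T> {
    eq_called: Cell<bool>,
    _marker: PhantomData<T>,
}

impl<T> PartialEq for Probe<T> {
    fn eq(&self, _other: &Self) -> bool {
        // Record that the generic, non-short-circuiting path ran.
        self.eq_called.set(true);
        true
    }
}

// `Eq` is implemented only when `T: Copy`.
impl<T: Copy> Eq for Probe<T> {}

fn detect_copy<T>() -> bool {
    let probe = Rc::new(Probe::<T> {
        eq_called: Cell::new(false),
        _marker: PhantomData,
    });
    // Both `Rc`s point to the same allocation. If std's specialized path
    // short-circuits on pointer identity, `PartialEq::eq` never runs.
    let _ = probe == probe.clone();
    !probe.eq_called.get()
}

fn main() {
    assert!(detect_copy::<u32>()); // u32: Copy, so eq is short-circuited.
    assert!(!detect_copy::<String>()); // String is not Copy; eq runs.
}
```

Like the real function, this sketch can only produce false negatives: if
the short-circuit ever disappears, eq_called is set and detect_copy
returns false even for Copy types, but it can never return true for a
type that does not implement Copy.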
§Examples
use core::sync::atomic::{AtomicU32, Ordering as AtomicOrdering};
use try_specialize::unreliable::impls_copy_weak;
#[derive(Eq, PartialEq, Debug)]
pub struct ArrayLike<T, const N: usize> {
    items: [T; N],
}

impl<T, const N: usize> From<[T; N]> for ArrayLike<T, N> {
    #[inline]
    fn from(value: [T; N]) -> Self {
        Self { items: value }
    }
}

impl<T, const N: usize> AsRef<[T; N]> for ArrayLike<T, N> {
    #[inline]
    fn as_ref(&self) -> &[T; N] {
        &self.items
    }
}
static DEBUG: AtomicU32 = AtomicU32::new(0);
impl<T, const N: usize> Clone for ArrayLike<T, N>
where
    T: Clone,
{
    #[inline]
    fn clone(&self) -> Self {
        if impls_copy_weak::<T>() {
            DEBUG.store(101, AtomicOrdering::Relaxed);
            // Fast path for `T: Copy`.
            unsafe { std::mem::transmute_copy(self) }
        } else {
            DEBUG.store(202, AtomicOrdering::Relaxed);
            Self::from(self.as_ref().clone())
        }
    }
}
#[derive(Clone, Eq, PartialEq, Debug)]
struct NonCopiable<T>(pub T);

assert_eq!(
    ArrayLike::from([1, 2, 3]).clone(),
    ArrayLike::from([1, 2, 3])
);
assert_eq!(DEBUG.load(AtomicOrdering::Relaxed), 101);
assert_eq!(
    ArrayLike::from([NonCopiable(1), NonCopiable(2)]).clone(),
    ArrayLike::from([NonCopiable(1), NonCopiable(2)])
);
assert_eq!(DEBUG.load(AtomicOrdering::Relaxed), 202);