pub fn impls_eq_weak<T>() -> bool
where
    T: ?Sized,
Available on crate features alloc and unreliable only.
Returns true if the given type implements Eq.
Use the define_impls_trait_ignore_lt_fn macro to generate
implementation-check functions for other traits.
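For example, on current Rust versions the check reports common standard types as follows (a small sketch: the true result for u32 relies on the behavior described in the Reliability section below, while the false result for f64 is guaranteed because false positives are impossible):

use try_specialize::unreliable::impls_eq_weak;

// `u32` implements `Eq`; `f64` only implements `PartialEq`.
assert!(impls_eq_weak::<u32>());
assert!(!impls_eq_weak::<f64>());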
Library tests ensure that the impls_trait checks are performed
at compile time and fully optimized with no runtime cost at
opt-level >= 1. Note that the release profile uses
opt-level = 3 by default.
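As an illustration of that zero-cost claim, a branch on the check like the one below is expected to collapse to a single arm once optimizations are enabled. This is a sketch based on the optimization statement above, and clone_strategy is a made-up function name:

use try_specialize::unreliable::impls_eq_weak;

fn clone_strategy<T: ?Sized>() -> &'static str {
    // At opt-level >= 1 the check is expected to fold to a constant,
    // leaving only one of these arms in the generated code.
    if impls_eq_weak::<T>() {
        "pointer-equality fast path available"
    } else {
        "fall back to value comparison"
    }
}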
§Reliability
While it is unlikely, there is still a possibility that this function may return false negatives in future Rust versions.
The correctness of the results returned by the functions depends on the following:
- Documented behavior that if T implements Eq, two Rcs that point to the same allocation are always equal: https://doc.rust-lang.org/1.82.0/std/rc/struct.Rc.html#method.eq.
- Undocumented behavior that the Rc::partial_eq implementation for T: Eq will not use PartialEq::eq if both Rcs point to the same memory location.
- The assumption that the undocumented short-circuit behavior described above will be retained for optimization purposes.
There is no formal guarantee that the undocumented behavior
described above will be retained. If the implementation changes
in a future Rust version, this function may return a false
negative: it may return false even though T implements the
trait. However, the implementation guarantees that a false
positive is impossible, i.e., in any future Rust version the
function will never return true if T does not implement the
trait.
Details:
- https://internals.rust-lang.org/t/rc-uses-visibly-behavior-changing-specialization-is-that-okay/16173/6,
- https://users.rust-lang.org/t/hack-to-specialize-w-write-for-vec-u8/100366,
- https://doc.rust-lang.org/1.82.0/std/rc/struct.Rc.html#method.eq,
- https://github.com/rust-lang/rust/issues/42655.
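To make the mechanism above concrete, the following is a minimal sketch of how such a check can be built on top of the Rc behavior listed above. It is a simplified illustration, not this crate's actual implementation; the Detector and impls_eq_sketch names are made up for the example:

use core::marker::PhantomData;
use std::rc::Rc;

// `Detector<T>` implements `Eq` exactly when `T` does, and its
// `PartialEq::eq` always answers `false`.
struct Detector<T: ?Sized>(PhantomData<T>);

impl<T: ?Sized> PartialEq for Detector<T> {
    fn eq(&self, _other: &Self) -> bool {
        // Only reached when `Rc` does not short-circuit, i.e. when
        // `Detector<T>` (and therefore `T`) does not implement `Eq`.
        false
    }
}

impl<T: Eq + ?Sized> Eq for Detector<T> {}

fn impls_eq_sketch<T: ?Sized>() -> bool {
    let left = Rc::new(Detector::<T>(PhantomData));
    let right = Rc::clone(&left);
    // Both `Rc`s point to the same allocation, so `true` can only come
    // from the `Eq`-only short-circuit in `Rc`'s `PartialEq` impl.
    left == right
}

If a future Rust version drops the short-circuit, a check built this way degrades to always returning false, i.e. to a false negative, which matches the guarantee described above.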
§Examples
use core::sync::atomic::{AtomicU32, Ordering as AtomicOrdering};
use std::sync::Arc;

use try_specialize::unreliable::impls_eq_weak;

#[derive(Clone, Debug)]
pub struct ArcLike<T> {
    // Minimal stand-in implementation for this example: an `Arc` field,
    // so that clones share a single allocation.
    inner: Arc<T>,
}

impl<T> ArcLike<T> {
    #[inline]
    fn new(value: T) -> Self {
        Self { inner: Arc::new(value) }
    }

    #[inline]
    fn as_ptr(&self) -> *const T {
        Arc::as_ptr(&self.inner)
    }
}

impl<T> AsRef<T> for ArcLike<T> {
    #[inline]
    fn as_ref(&self) -> &T {
        self.inner.as_ref()
    }
}
impl<T> PartialEq for ArcLike<T>
where
    T: PartialEq,
{
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        // Fast path: if `T: Eq` and both handles point to the same
        // allocation, the values are necessarily equal.
        if impls_eq_weak::<T>() && self.as_ptr() == other.as_ptr() {
            return true;
        }
        self.as_ref() == other.as_ref()
    }
}
#[derive(Copy, Clone, Eq, Debug)]
struct Wrapper<T>(pub T);

// Counts how many times `Wrapper::eq` is actually called.
static COUNTER: AtomicU32 = AtomicU32::new(0);

impl<T> PartialEq for Wrapper<T>
where
    T: PartialEq,
{
    #[inline]
    fn eq(&self, other: &Self) -> bool {
        let _ = COUNTER.fetch_add(1, AtomicOrdering::Relaxed);
        self.0 == other.0
    }
}
let arc_like1 = ArcLike::new(Wrapper(42_u32));
let arc_like2 = arc_like1.clone();
assert_eq!(arc_like1, arc_like2);
// `u32` implements `Eq`, so `Wrapper<u32>` does too: the fast path is
// used and the counter is not incremented.
assert_eq!(COUNTER.load(AtomicOrdering::Relaxed), 0);

let arc_like1 = ArcLike::new(Wrapper(123.456_f64));
let arc_like2 = arc_like1.clone();
assert_eq!(arc_like1, arc_like2);
// `f64` does not implement `Eq`, so `Wrapper<f64>` does not either: the
// fast path is skipped and the counter is incremented.
assert_eq!(COUNTER.load(AtomicOrdering::Relaxed), 1);