pub struct Backdrop<T, S: BackdropStrategy<T>> { /* private fields */ }
Wrapper to drop any value at a later time, such as in a background thread.

Backdrop<T, Strategy> is guaranteed to have the same in-memory representation as T. As such, wrapping (and unwrapping) a T into a Backdrop<T, S> has zero memory overhead.

Besides altering how T is dropped, a Backdrop<T, S> behaves as much as possible like a T. This is done by implementing Deref and DerefMut, so most methods available for T are also immediately available for Backdrop<T>. Backdrop<T, S> also implements many common traits whenever T implements them.
§Customizing the strategy
You customize which strategy is used by picking your desired S parameter, which can be any type that implements the BackdropStrategy trait. This crate comes with many common strategies, but you can also implement your own.
§Restrictions
Backdrop<T, Strategy> does not restrict T (besides T needing to be Sized). However, many strategies only implement BackdropStrategy<T> when T fits certain restrictions. For instance, the TrashThreadStrategy requires T to be Send, since T will be moved to another thread to be cleaned up there.
What about unsized/dynamically-sized types? The current implementation of Backdrop restricts T to be Sized, mostly for ease of implementation. Your unsized data structures are probably already nested in a std::boxed::Box<T> or other smart pointer, which you can wrap with Backdrop as a whole. (Side note: zero-sized types can be wrapped by Backdrop without problems.)
There is one final important restriction:
§The problem with Arc
A Backdrop<Arc<T>, S> will not behave as you might expect: it will cause the backdrop strategy to run whenever the reference count is decremented. But what you probably want is to run the backdrop strategy exactly when the last Arc<T> is dropped (i.e. when the reference count drops to 0) and the contents of the Arc go out of scope. An Arc<Backdrop<Box<T>, S>> will work as you expect, but you incur an extra pointer indirection (arc -> box -> T) every time you read its internal value. Instead, use the backdrop_arc crate, which contains a specialized Arc datatype that does exactly what you want without the needless indirection.
Implementations§
impl<T, Strategy: BackdropStrategy<T>> Backdrop<T, Strategy>
pub fn new(val: T) -> Self
Construct a new Backdrop<T, S> from any T. This is a zero-cost operation. From now on, T will no longer be dropped normally; instead, it will be dropped using the implementation of the given BackdropStrategy.
use backdrop::*;
// Either specify the return type:
let mynum: Backdrop<usize, LeakStrategy> = Backdrop::new(42);
// Or use the 'Turbofish' syntax on the function call:
let mynum2 = Backdrop::<_, LeakStrategy>::new(42);
// Or use one of the shorthand type aliases:
let mynum3 = LeakBackdrop::new(42);
assert_eq!(mynum, mynum2);
assert_eq!(mynum2, mynum3);
// <- Because we are using the LeakStrategy, we leak memory here. Fun! :-)
This function is the inverse of Backdrop::into_inner.
Examples found in repository:
fn main() {
    let boxed = setup();
    let not_backdropped = boxed.clone();
    time("none", move || {
        assert_eq!(not_backdropped.len(), LEN);
        // Destructor runs here
    });

    let backdropped: TrivialBackdrop<_> = Backdrop::new(boxed.clone());
    time("fake backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });

    let backdropped: thread::ThreadBackdrop<_> = Backdrop::new(boxed.clone());
    time("thread backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });

    TrashThreadStrategy::with_trash_thread(|| {
        let backdropped: thread::TrashThreadBackdrop<_> = Backdrop::new(boxed.clone());
        time("trash thread backdrop", move || {
            assert_eq!(backdropped.len(), LEN);
            // Destructor runs here
        });
    });

    TrashQueueStrategy::ensure_initialized();
    let backdropped = Backdrop::<_, TrashQueueStrategy>::new(boxed.clone());
    time("(single threaded) trash queue backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });

    time("(single threaded) trash queue backdrop (actually cleaning up later)", move || {
        TrashQueueStrategy::cleanup_all();
    });

    #[cfg(miri)]
    {
        println!("Skipping Tokio examples when running on Miri, since it does not support Tokio yet");
    }
    #[cfg(not(miri))]
    {
        ::tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(async {
                let backdropped: crate::tokio::TokioTaskBackdrop<_> = Backdrop::new(boxed.clone());
                time("tokio task (multithread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });

                let backdropped: crate::tokio::TokioBlockingTaskBackdrop<_> = Backdrop::new(boxed.clone());
                time("tokio blocking task (multithread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });
            });

        ::tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(async {
                let backdropped: crate::tokio::TokioTaskBackdrop<_> = Backdrop::new(setup());
                time("tokio task (current thread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });

                let backdropped: crate::tokio::TokioBlockingTaskBackdrop<_> = Backdrop::new(setup());
                time("tokio blocking task (current thread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });
            });
    }
}
pub fn into_inner(this: Self) -> T
Turns a Backdrop<T, S> back into a normal T. This undoes the effect of Backdrop: the resulting T will be dropped again using normal rules. This function is the inverse of Backdrop::new. This is a zero-cost operation. This is an associated function, so call it using fully-qualified syntax.
pub fn change_strategy<S2: BackdropStrategy<T>>(this: Self) -> Backdrop<T, S2>
Changes the strategy used for a Backdrop. This is a zero-cost operation. This is an associated function, so call it using fully-qualified syntax.
use backdrop::*;
let foo = LeakBackdrop::new(42);
let foo = Backdrop::change_strategy::<TrivialStrategy>(foo);
// Now `foo` will be dropped according to TrivialStrategy (which follows the normal drop rules)
// rather than LeakStrategy (which performs no cleanup at all, leaking the memory)
Trait Implementations§
impl<T: Archive, S> Archive for Backdrop<T, S>
where
    S: BackdropStrategy<T>,
Available on crate feature rkyv only.

impl<C: ?Sized, T: CheckBytes<C>, S> CheckBytes<C> for Backdrop<T, S>
where
    S: BackdropStrategy<T>,
Available on crate feature bytecheck only.

impl<T, S: BackdropStrategy<T>> Deref for Backdrop<T, S>

impl<T, S: BackdropStrategy<T>> DerefMut for Backdrop<T, S>

impl<Des, T, S> Deserialize<Backdrop<T, S>, Des> for Archived<T>
Available on crate feature rkyv only.

impl<T, Strategy: BackdropStrategy<T>> Drop for Backdrop<T, Strategy>
This is where the magic happens: instead of dropping T normally, we run Strategy::execute on it.

impl<T, S> From<T> for Backdrop<T, S>
where
    S: BackdropStrategy<T>,
Converting between a T and a Backdrop<T, S> is a zero-cost operation. c.f. Backdrop::new

impl<T: Ord, S> Ord for Backdrop<T, S>
where
    S: BackdropStrategy<T>,

impl<T: PartialOrd, S> PartialOrd for Backdrop<T, S>
where
    S: BackdropStrategy<T>,

impl<Ser, T: Archive + Serialize<Ser>, S> Serialize<Ser> for Backdrop<T, S>
where
    Ser: Fallible,
    S: BackdropStrategy<T>,
Available on crate feature rkyv only.

impl<T: Eq, S> Eq for Backdrop<T, S>
where
    S: BackdropStrategy<T>,
Auto Trait Implementations§
impl<T, S> Freeze for Backdrop<T, S>
where
    T: Freeze,

impl<T, S> RefUnwindSafe for Backdrop<T, S>
where
    T: RefUnwindSafe,
    S: RefUnwindSafe,

impl<T, S> Send for Backdrop<T, S>

impl<T, S> Sync for Backdrop<T, S>

impl<T, S> Unpin for Backdrop<T, S>

impl<T, S> UnwindSafe for Backdrop<T, S>
where
    T: UnwindSafe,
    S: UnwindSafe,
Blanket Implementations§
impl<T> ArchivePointee for T

type ArchivedMetadata = ()

fn pointer_metadata(_: &<T as ArchivePointee>::ArchivedMetadata) -> <T as Pointee>::Metadata

impl<T> ArchiveUnsized for T
where
    T: Archive,

type Archived = <T as Archive>::Archived
Unlike Archive, it may be unsized. Read more