```rust
#[repr(transparent)]
pub struct Backdrop<T, S: BackdropStrategy<T>> { /* private fields */ }
```
Wrapper to drop any value at a later time, such as in a background thread.
Backdrop<T, Strategy> is guaranteed to have the same in-memory representation as T.
As such, wrapping (and unwrapping) a T into a Backdrop<T, S> has zero memory overhead.
Besides altering how T is dropped, a Backdrop<T, S> behaves as much like a T as possible.
This is done by implementing Deref and DerefMut
so most methods available for T are also immediately available for Backdrop<T>.
Backdrop<T, S> also implements many common traits whenever T implements these.
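For example, a minimal sketch (the Vec and TrivialStrategy below are arbitrary stand-ins for any T and any strategy shipped with the crate):

```rust
use backdrop::*;

// Thanks to Deref/DerefMut, Vec's own methods are callable on the wrapper directly:
let mut list: Backdrop<Vec<u32>, TrivialStrategy> = Backdrop::new(vec![1, 2, 3]);
list.push(4);               // DerefMut gives &mut Vec<u32>
assert_eq!(list.len(), 4);  // Deref gives &Vec<u32>
// When `list` goes out of scope, it is dropped via TrivialStrategy rather than plain Drop.
```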
Customizing the strategy
You customize what strategy is used by picking your desired S parameter,
which can be any type that implements the BackdropStrategy trait.
This crate comes with many common strategies, but you can also implement your own.
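As a rough sketch of a hand-rolled strategy (assuming, as the Drop implementation below suggests, that BackdropStrategy<T>'s required item is an associated function execute(T); LoggingStrategy and LoggingBackdrop are hypothetical names used only for illustration):

```rust
use backdrop::*;

// Hypothetical strategy: log, then drop normally.
struct LoggingStrategy;

impl<T> BackdropStrategy<T> for LoggingStrategy {
    // Assumption: the trait's required item is an associated fn `execute(T)`,
    // which Backdrop's Drop impl calls instead of dropping T itself.
    fn execute(droppable: T) {
        println!("dropping a {}", std::any::type_name::<T>());
        drop(droppable);
    }
}

// Optional shorthand alias, mirroring the crate's own `LeakBackdrop`-style aliases:
type LoggingBackdrop<T> = Backdrop<T, LoggingStrategy>;

fn main() {
    let val = LoggingBackdrop::new(vec![1, 2, 3]);
    drop(val); // prints the type name, then drops the Vec normally
}
```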
Restrictions
Backdrop<T, Strategy> does not restrict T (other than requiring T: Sized). However,
many strategies only implement BackdropStrategy<T> when T satisfies certain additional bounds.
For instance, the TrashThreadStrategy requires T to be Send since T will be moved to another thread to be cleaned up there.
What about unsized/dynamically-sized types? The current implementation of Backdrop restricts T to be Sized mostly for ease of implementation.
We expect that your unsized data structures are probably already stored behind a std::boxed::Box<T> or other smart pointer,
which you can wrap with Backdrop as a whole.
(Side note: Zero-sized types can be wrapped by Backdrop without problems.)
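For instance, a Box<dyn Trait> is itself Sized and can be wrapped as-is (the strategy here is chosen arbitrarily):

```rust
use backdrop::*;

let boxed: Box<dyn std::fmt::Display> = Box::new(42);
// T = Box<dyn Display> is Sized, even though the pointee is not:
let wrapped: Backdrop<Box<dyn std::fmt::Display>, TrivialStrategy> = Backdrop::new(boxed);
assert_eq!(wrapped.to_string(), "42");
```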
There is one final important restriction:
The problem with Arc
A Backdrop<Arc<T>> will not behave as you might expect:
It will cause the backdrop strategy to run whenever the reference count is decremented.
But what you probably want is to run the backdrop strategy only when the last Arc<T> is dropped
(i.e. when the reference count reaches 0) and the contents of the Arc go out of scope.
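For illustration, a sketch of the pitfall (TrivialStrategy is arbitrary; any strategy behaves the same way here):

```rust
use std::sync::Arc;
use backdrop::*;

let arc = Arc::new(vec![1, 2, 3]);
let a: Backdrop<Arc<Vec<u32>>, TrivialStrategy> = Backdrop::new(arc.clone());
let b: Backdrop<Arc<Vec<u32>>, TrivialStrategy> = Backdrop::new(arc);
drop(a); // the strategy already runs here, decrementing the count from 2 to 1
drop(b); // ...and runs again here; only now is the Vec itself freed
```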
Progress on a crate containing a dedicated ‘BackdropArc’ type is under way.
Implementations
impl<T, Strategy: BackdropStrategy<T>> Backdrop<T, Strategy>

pub fn new(val: T) -> Self
Construct a new Backdrop<T, S> from any T. This is a zero-cost operation.
From now on, T will no longer be dropped normally,
but instead it will be dropped using the implementation of the given BackdropStrategy.
```rust
use backdrop::*;

// Either specify the return type:
let mynum: Backdrop<usize, LeakStrategy> = Backdrop::new(42);
// Or use the 'Turbofish' syntax on the function call:
let mynum2 = Backdrop::<_, LeakStrategy>::new(42);
// Or use one of the shorthand type aliases:
let mynum3 = LeakBackdrop::new(42);

assert_eq!(mynum, mynum2);
assert_eq!(mynum2, mynum3);
// <- Because we are using the LeakStrategy, we leak memory here. Fun! :-)
```

This function is the inverse of Backdrop::into_inner.
Examples found in repository:
```rust
fn main() {
    let boxed = setup();
    let not_backdropped = boxed.clone();
    time("none", move || {
        assert_eq!(not_backdropped.len(), LEN);
        // Destructor runs here
    });

    let backdropped: TrivialBackdrop<_> = Backdrop::new(boxed.clone());
    time("fake backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });

    let backdropped: thread::ThreadBackdrop<_> = Backdrop::new(boxed.clone());
    time("thread backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });

    TrashThreadStrategy::with_trash_thread(|| {
        let backdropped: thread::TrashThreadBackdrop<_> = Backdrop::new(boxed.clone());
        time("trash thread backdrop", move || {
            assert_eq!(backdropped.len(), LEN);
            // Destructor runs here
        });
    });

    TrashQueueStrategy::ensure_initialized();
    let backdropped = Backdrop::<_, TrashQueueStrategy>::new(boxed.clone());
    time("(single threaded) trash queue backdrop", move || {
        assert_eq!(backdropped.len(), LEN);
        // Destructor runs here
    });
    time("(single threaded) trash queue backdrop (actually cleaning up later)", move || {
        TrashQueueStrategy::cleanup_all();
    });

    #[cfg(miri)]
    {
        println!("Skipping Tokio examples when running on Miri, since it does not support Tokio yet");
    }
    #[cfg(not(miri))]
    {
        ::tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(async {
                let backdropped: crate::tokio::TokioTaskBackdrop<_> = Backdrop::new(boxed.clone());
                time("tokio task (multithread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });

                let backdropped: crate::tokio::TokioBlockingTaskBackdrop<_> = Backdrop::new(boxed.clone());
                time("tokio blocking task (multithread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });
            });

        ::tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .unwrap()
            .block_on(async {
                let backdropped: crate::tokio::TokioTaskBackdrop<_> = Backdrop::new(setup());
                time("tokio task (current thread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });

                let backdropped: crate::tokio::TokioBlockingTaskBackdrop<_> = Backdrop::new(setup());
                time("tokio blocking task (current thread runner)", move || {
                    assert_eq!(backdropped.len(), LEN);
                    // Destructor runs here
                });
            });
    }
}
```

pub fn into_inner(this: Self) -> T
Turns a Backdrop<T, S> back into a normal T.
This undoes the effect of Backdrop.
The resulting T will be dropped again using normal rules.
This function is the inverse of Backdrop::new.
This is a zero-cost operation.
This is an associated function, so call it using fully-qualified syntax.
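A minimal sketch of such a call (TrivialStrategy and the Vec are arbitrary choices):

```rust
use backdrop::*;

let wrapped = Backdrop::<_, TrivialStrategy>::new(vec![1, 2, 3]);
// The fully-qualified associated-function form avoids accidentally calling
// an `into_inner` method of the wrapped type itself, if it has one.
let plain: Vec<i32> = Backdrop::into_inner(wrapped);
assert_eq!(plain.len(), 3);
// From here on, `plain` is dropped by the normal drop rules again.
```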
pub fn change_strategy<S2: BackdropStrategy<T>>(this: Self) -> Backdrop<T, S2>
Changes the strategy used for a Backdrop.
This is a zero-cost operation.
This is an associated function, so call it using fully-qualified syntax.
```rust
use backdrop::*;

let foo = LeakBackdrop::new(42);
let foo = Backdrop::change_strategy::<TrivialStrategy>(foo);
// Now `foo` will be dropped according to TrivialStrategy (which follows the normal drop rules)
// rather than LeakStrategy (which does no cleanup at all, leaking the memory instead).
```

Trait Implementations
impl<T, S: BackdropStrategy<T>> Deref for Backdrop<T, S>
impl<T, S: BackdropStrategy<T>> DerefMut for Backdrop<T, S>
impl<T, Strategy: BackdropStrategy<T>> Drop for Backdrop<T, Strategy>
This is where the magic happens: Instead of dropping T normally, we run Strategy::execute on it.
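Conceptually, the mechanism looks like the following sketch. This is not the crate's literal source; the stand-in names Sketch and StrategyLike and the ManuallyDrop-based storage are assumptions made to keep the snippet self-contained.

```rust
use std::marker::PhantomData;
use std::mem::ManuallyDrop;

// Stand-in for BackdropStrategy, for illustration only.
trait StrategyLike<T> {
    fn execute(droppable: T);
}

struct Sketch<T, S: StrategyLike<T>> {
    val: ManuallyDrop<T>,
    _strategy: PhantomData<S>,
}

impl<T, S: StrategyLike<T>> Drop for Sketch<T, S> {
    fn drop(&mut self) {
        // SAFETY: `val` is never touched again; we are inside drop.
        let inner: T = unsafe { ManuallyDrop::take(&mut self.val) };
        // Instead of dropping `inner` here, hand it to the strategy,
        // which decides how, where and when it is actually dropped.
        S::execute(inner);
    }
}
```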
impl<T, S> From<T> for Backdrop<T, S> where S: BackdropStrategy<T>
Converting between a T and a Backdrop<T, S> is a zero-cost operation; cf. Backdrop::new.
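For example (TrivialStrategy chosen arbitrarily):

```rust
use backdrop::*;

let wrapped: Backdrop<String, TrivialStrategy> = String::from("hello").into();
assert_eq!(wrapped.as_str(), "hello"); // Deref gives access to String's methods
```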
impl<T: Ord, S> Ord for Backdrop<T, S> where S: BackdropStrategy<T>
impl<T: PartialEq, S> PartialEq<Backdrop<T, S>> for Backdrop<T, S> where S: BackdropStrategy<T>
impl<T: PartialOrd, S> PartialOrd<Backdrop<T, S>> for Backdrop<T, S> where S: BackdropStrategy<T>
fn le(&self, other: &Rhs) -> bool
Tests less than or equal to (for self and other) and is used by the <= operator.