Crate sharify
This crate allows backing types with shared memory to send them cheaply between processes. Here’s an example of doing so with a slice:
```rust
use sharify::SharedMut;
use std::{iter, sync::mpsc::channel, thread};

// Create a slice backed by shared memory.
let mut shared_slice: SharedMut<[u64]> = SharedMut::new(&(0, 1_000_000))?;
// Write some data to it.
for (src, dst) in iter::successors(Some(0), |&p| Some(p + 1))
    .zip(shared_slice.as_view_mut().iter_mut())
{
    *dst = src;
}
// The shared slice can be sent between processes cheaply without copying the
// data. What is shown here for threads works equally well for processes,
// e.g. using the ipc_channel crate.
let (tx, rx) = channel::<SharedMut<[u64]>>();
let handle = thread::spawn(move || {
    let shared_slice = rx.recv().unwrap();
    // Get a view into the shared memory.
    let view: &[u64] = shared_slice.as_view();
    assert_eq!(view.len(), 1_000_000);
    assert!(iter::successors(Some(0), |&p| Some(p + 1))
        .zip(view.iter())
        .all(|(a, &b)| a == b));
});
tx.send(shared_slice)?;
handle.join().unwrap();
```
The `Shared` and `SharedMut` structs wrap types to be backed by shared memory. They handle cheap serialization/deserialization by serializing only the metadata required to recreate the struct on the deserialization side. As a result, `Shared` and `SharedMut` can be used with inter-process channels (e.g. the ipc-channel crate) the same way that the wrapped types are used with Rust's builtin or crossbeam inter-thread channels, without copying the underlying data.
Memory is managed through reference counts in the underlying shared memory. The wrappers behave as follows:
| | Mutability | Trait bounds on the wrapped type | Ownership | Shared memory freed when... |
|---|---|---|---|---|
| `Shared` | Immutable | `ShmemBacked` + `ShmemView` | Multiple ownership tracked with a refcount; implements `Clone` | ...an instance with exclusive ownership of the shared memory drops and the serialization count is 0. |
| `SharedMut` | Mutable | `ShmemBacked` + `ShmemView` + `ShmemViewMut` | Exclusive ownership | ...an instance drops without serialization. |
⚠️ Safety and the serialization count
When serializing a `Shared`/`SharedMut` to send it between processes, the underlying shared memory must not be freed. However, calling `Shared::into_serialized`/`SharedMut::into_serialized` consumes the `Self` instance acting as the memory RAII guard. As a result, not deserializing at the other end can leak the shared memory. While this is not inherently unsafe, it must be kept in mind when serializing.
`Shared`s keep count of how many instances accessing the same shared memory have been serialized without a matching deserialization. Only when this serialization count is 0, i.e. there are no 'dangling' serializations, will the shared memory be freed when an instance with exclusive ownership drops. This is necessary so that the shared memory persists while a `Shared` with exclusive access is serialized/deserialized in transit between processes.
The downside is that this approach only allows usage patterns where each serialization is paired with exactly one deserialization. If multiple receivers deserialize a `Shared` from a single serialization and drop it, the shared memory may be freed before other receivers attempt to deserialize a different serialization. See tests/serialization_count.rs for an example of this situation. The opposite scenario is also bad: serializing the same `Shared` instance multiple times through `serde::Serialize::serialize` without matching deserializations will likely leak memory.
`SharedMut` and `serde::Serialize`
A `SharedMut` represents unique ownership of the underlying shared memory. Because each serialization expects a matching deserialization, serializing should consume `Self` so that only one access to the memory exists, in either instance or serialized form. `serde::Serialize::serialize`, however, takes a `&Self` argument, which leaves the `SharedMut` intact. As a workaround to provide integration with serde, calls to `serde::Serialize::serialize` invalidate `Self` through interior mutability. Any future use of `Self` produces a panic. This enforces the intended usage of dropping a `SharedMut` immediately after the serialization call.
Backing custom types with shared memory
To be wrappable in `Shared`, a type must implement the `ShmemBacked` and `ShmemView` traits. `SharedMut`s have an additional `ShmemViewMut` trait bound. See the example below for how to back a custom type with shared memory.
```rust
use sharify::{Shared, ShmemBacked, ShmemView};
use std::{sync::mpsc::channel, thread};

// Holds a stack of images in contiguous memory.
struct ImageStack {
    data: Vec<u8>,
    shape: [u16; 2],
}

// To back `ImageStack` with shared memory, it needs to implement `ShmemBacked`.
unsafe impl ShmemBacked for ImageStack {
    // Constructor arguments: (shape, n_images, init value).
    type NewArg = ([u16; 2], usize, u8);
    // Information required to create a view of an `ImageStack` from raw memory.
    type MetaData = [u16; 2];

    fn required_memory_arg((shape, n_images, _init): &Self::NewArg) -> usize {
        // Widen to usize before multiplying; `640 * 640` would overflow `u16`.
        shape.iter().map(|&s| s as usize).product::<usize>() * n_images
    }

    fn required_memory_src(src: &Self) -> usize {
        src.data.len()
    }

    fn new(data: &mut [u8], (shape, _n_images, init): &Self::NewArg) -> Self::MetaData {
        data.fill(*init);
        *shape
    }

    fn new_from_src(data: &mut [u8], src: &Self) -> Self::MetaData {
        data.copy_from_slice(&src.data);
        src.shape
    }
}

// Create a referential struct as a view into the memory.
struct ImageStackView<'a> {
    data: &'a [u8],
    shape: [u16; 2],
}

// The view must implement `ShmemView`.
impl<'a> ShmemView<'a> for ImageStack {
    type View = ImageStackView<'a>;

    fn view(data: &'a [u8], shape: &'a <Self as ShmemBacked>::MetaData) -> Self::View {
        ImageStackView {
            data,
            shape: *shape,
        }
    }
}

// Existing stack with its data in a `Vec`.
let stack = ImageStack {
    data: vec![0; 640 * 640 * 100],
    shape: [640, 640],
};
// Copy the stack into shared memory.
let shared_stack: Shared<ImageStack> = Shared::new_from_inner(&stack)?;
// The `data` field is now backed by shared memory so the stack can be sent
// between processes cheaply. What is shown here for threads works equally
// well for processes, e.g. using the ipc_channel crate.
let (tx, rx) = channel::<Shared<ImageStack>>();
let handle = thread::spawn(move || {
    let shared_stack = rx.recv().unwrap();
    // Get a view into the shared memory.
    let view: ImageStackView = shared_stack.as_view();
    assert!(view.data.iter().all(|&x| x == 0));
    assert_eq!(view.shape, [640, 640]);
});
tx.send(shared_stack)?;
handle.join().unwrap();
```
ndarray integration

By default the shared_ndarray feature is enabled, which implements `ShmemBacked` for `ndarray::Array` and is useful for cheaply sending large arrays between processes.
Re-exports

pub use sharify_ndarray::SharedArray;
pub use sharify_ndarray::SharedArrayMut;

Modules

sharify_ndarray

Structs

Shared | Wrapper type for immutable access to shared memory from multiple processes.
SharedMut | Safe mutable access to shared memory from multiple processes through unique ownership.

Enums

Error

Traits

ShmemBacked | Implemented for types which can be wrapped in a `Shared`.
ShmemView | An immutable view into shared memory.
ShmemViewMut | A mutable view into shared memory.

Type Definitions

SharedSlice | A `Shared` slice.
SharedSliceMut | A mutable `Shared` slice.
SharedStr | A `Shared` string slice.
SharedStrMut | A mutable `Shared` string slice.