pub struct FullSync<'a, ItemType: Send + Sync + Debug, const BUFFER_SIZE: usize = 1024, const MAX_STREAMS: usize = 16> { /* private fields */ }
This channel is backed by the FullSyncMove queue (the highest-throughput queue among all in ‘benches/’), which is the fastest for general-purpose use on most hardware, but requires elements to be copied when dequeueing: due to the fully synchronized characteristics of the backing queue, enqueueing cannot happen independently of dequeueing.
Because of that, this channel requires that ItemType is Clone, since elements have to be moved out during dequeueing (there is no way to keep the queue slot allocated while the element is being processed),
making this channel a typical best fit for small & trivial types.
Please measure your Multis using all available channels: FullSync, [OgreAtomicQueue] and, possibly, even [OgreMmapLog].
See also [multi::channels::ogre_full_sync_mpmc_queue].
Refresher: the backing queue requires BUFFER_SIZE to be a power of 2 – the same applies to MAX_STREAMS, which will also have its own queue.
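The power-of-2 requirement on those const generic parameters can be enforced before anything runs. As a minimal, self-contained sketch (PowerOf2Check is an illustrative helper, not part of this crate's API), a const assertion rejects invalid sizes at compile time:

```rust
// Compile-time guard for const generic sizes such as BUFFER_SIZE and
// MAX_STREAMS: a non-power-of-2 `N` fails the build instead of misbehaving
// at runtime. `PowerOf2Check` is a hypothetical helper for illustration.
struct PowerOf2Check<const N: usize>;

impl<const N: usize> PowerOf2Check<N> {
    // Associated consts are evaluated lazily; referencing `OK` forces the
    // assertion to be checked during compilation.
    const OK: () = assert!(N.is_power_of_two(), "N must be a power of 2");
}

fn main() {
    let _ = PowerOf2Check::<1024>::OK; // valid default BUFFER_SIZE
    let _ = PowerOf2Check::<16>::OK;   // valid default MAX_STREAMS
    // let _ = PowerOf2Check::<1000>::OK; // would fail to compile
    println!("1024 is a power of 2: {}", 1024usize.is_power_of_two());
}
```

The same check can, of course, be done at runtime with `usize::is_power_of_two()` when sizes come from configuration rather than const generics.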
Trait Implementations
impl<'a, ItemType: Send + Sync + Debug + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelCommon<'a, ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn new<IntoString: Into<String>>(streams_manager_name: IntoString) -> Arc<Self>

Creates a new instance of this channel, to be referred to (in logs) by the given name.

fn flush<'life0, 'async_trait>(
    &'life0 self,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = u32> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,

Waits until all pending items are consumed by their Streams, or until the given timeout elapses. Returns the number of still unconsumed items – which is 0 if it was not interrupted by the timeout.
fn is_channel_open(&self) -> bool
fn gracefully_end_stream<'life0, 'async_trait>(
    &'life0 self,
    stream_id: u32,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = bool> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,

Signals that the stream identified by stream_id should cease its activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns true if the stream ended within the given timeout or false if it is still processing elements.

fn gracefully_end_all_streams<'life0, 'async_trait>(
    &'life0 self,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = u32> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,

Signals that all streams should cease their activities when there are no more elements left to process, waiting for up to timeout. Returns the number of un-ended streams – which is 0 if it was not interrupted by the timeout.
fn cancel_all_streams(&self)

In opposition to [gracefully_end_all_streams()], this method does not wait for any confirmation, nor does it care whether there are remaining elements to be processed.
fn running_streams_count(&self) -> u32
fn pending_items_count(&self) -> u32

IMPLEMENTORS: use #[inline(always)]
fn buffer_size(&self) -> u32

IMPLEMENTORS: use #[inline(always)]
impl<'a, ItemType: 'a + Send + Sync + Debug, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelConsumer<'a, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn consume(&self, stream_id: u32) -> Option<Arc<ItemType>>

IMPLEMENTORS: use #[inline(always)]
fn keep_stream_running(&self, stream_id: u32) -> bool

Returns false if the Stream has been signaled to end its operations, causing it to report “out-of-elements” as soon as possible. IMPLEMENTORS: use #[inline(always)]
fn register_stream_waker(&self, stream_id: u32, waker: &Waker)

Registers the waker through which the stream identified by stream_id may be awakened. IMPLEMENTORS: use #[inline(always)]
fn drop_resources(&self, stream_id: u32)

IMPLEMENTORS: use #[inline(always)]
impl<'a, ItemType: Send + Sync + Debug + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelMulti<'a, ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn create_stream_for_old_events(
    self: &Arc<Self>
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,

Returns a Stream (and its stream_id) able to receive elements that were sent through this channel before the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. If called more than once, every stream will see all the past events available. Currently panics if called more times than allowed by [Multi]’s MAX_STREAMS.

fn create_stream_for_new_events(
    self: &Arc<Self>
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)

Returns a Stream (and its stream_id) able to receive elements sent through this channel after the call to this method. If called more than once, each Stream will see all new elements – the “listener pattern”. Currently panics if called more times than allowed by [Multi]’s MAX_STREAMS.

fn create_streams_for_old_and_new_events(
    self: &Arc<Self>
) -> ((MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32), (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32))
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,

fn create_stream_for_old_and_new_events(
    self: &Arc<Self>
) -> (MutinyStream<'a, ItemType, Self, Arc<ItemType>>, u32)
where
    Self: ChannelConsumer<'a, Arc<ItemType>>,

Returns a single Stream (and its stream_id) able to receive elements that were sent through this channel both before and after the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. Notice that, with this method, there is no way of discriminating where the “old” events end and where the “new” events start. If called more than once, every stream will see all the past events available, as well as all future events after this method call. Currently panics if called more times than allowed by [Multi]’s MAX_STREAMS.

impl<'a, ItemType: 'a + Send + Sync + Debug, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelProducer<'a, ItemType, Arc<ItemType>> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn send(&self, item: ItemType) -> RetryConsumerResult<(), ItemType, ()>

Sends the already-built item through this channel. See [send_with()] for how to deal with the returned type. IMPLEMENTORS: use #[inline(always)]
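The retry-style contract described here (on a full buffer, the item is handed back so the caller may try again) can be sketched with a minimal, self-contained mock. FullSyncMock and SendResult below are illustrative stand-ins, not this crate's real types, and the real API returns a keen-retry result rather than this simplified enum:

```rust
use std::cell::RefCell;
use std::collections::VecDeque;

// Illustrative stand-in for a retry-style send result: on a full buffer
// the item is handed back so the caller can retry (cf. RetryConsumerResult).
enum SendResult<T> {
    Ok,
    Full(T),
}

// Hypothetical single-threaded mock of a bounded channel buffer.
struct FullSyncMock<T> {
    buffer: RefCell<VecDeque<T>>,
    capacity: usize,
}

impl<T> FullSyncMock<T> {
    fn new(capacity: usize) -> Self {
        Self { buffer: RefCell::new(VecDeque::with_capacity(capacity)), capacity }
    }

    fn send(&self, item: T) -> SendResult<T> {
        let mut buf = self.buffer.borrow_mut();
        if buf.len() >= self.capacity {
            SendResult::Full(item) // buffer full: give the item back
        } else {
            buf.push_back(item);
            SendResult::Ok
        }
    }

    fn consume(&self) -> Option<T> {
        self.buffer.borrow_mut().pop_front()
    }
}

fn main() {
    let channel = FullSyncMock::new(2);
    assert!(matches!(channel.send(1), SendResult::Ok));
    assert!(matches!(channel.send(2), SendResult::Ok));
    // The third send finds the buffer full and returns the item for a retry:
    let mut pending = match channel.send(3) {
        SendResult::Full(item) => Some(item),
        SendResult::Ok => None,
    };
    // Retry loop: free one slot by consuming, then retry the send.
    while let Some(item) = pending.take() {
        channel.consume();
        if let SendResult::Full(item) = channel.send(item) {
            pending = Some(item);
        }
    }
    println!("queued: {:?}", (channel.consume(), channel.consume()));
    // prints: queued: (Some(2), Some(3))
}
```

In the real channel, a consumer runs concurrently, so the retry loop would typically spin or back off with the keen-retry API rather than consume its own items.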
fn send_with<F: FnOnce(&mut ItemType)>(
    &self,
    setter: F
) -> RetryConsumerResult<(), F, ()>

Calls setter, passing a slot so the payload may be filled there, then sends the event through this channel asynchronously. The returned type is convertible to Result<(), F> by calling .into() on it, returning Err<setter> when the buffer is full, to allow the caller to try again; otherwise, you may add any retrying logic using the keen-retry crate’s API.

fn send_derived(&self, arc_item: &Arc<ItemType>) -> bool
For channels whose Streams consume a DerivedItemType instead of the ItemType, this method may be useful – for instance: if the Stream consumes Arc<String> (the derived item type) and the channel is for Strings, with this method one may send an Arc directly. The default implementation, though, is made for types that don’t have a derived item type.
IMPLEMENTORS: use #[inline(always)]
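The reason sending an Arc directly can be cheap is that cloning an Arc only bumps an atomic reference count; the payload itself is never copied. A plain-std illustration (using only std::sync::Arc, not this crate's channel types):

```rust
use std::sync::Arc;

fn main() {
    // One heap-allocated payload, shared the way a derived item type
    // Arc<ItemType> lets a Multi channel hand the same event to many streams.
    let event: Arc<String> = Arc::new("some event payload".to_string());
    assert_eq!(Arc::strong_count(&event), 1);

    // Simulate handing the same event to 3 consumers: no String is copied,
    // only the atomic reference count is incremented per clone.
    let consumers: Vec<Arc<String>> = (0..3).map(|_| Arc::clone(&event)).collect();
    assert_eq!(Arc::strong_count(&event), 4);

    // Dropping the consumers decrements the count back down.
    drop(consumers);
    assert_eq!(Arc::strong_count(&event), 1);
    println!("payload still alive: {}", event);
}
```

This is also why Arc<ItemType> works well as the derived item type for a Multi channel: each subscribed Stream observes the same immutable payload without per-stream copies.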