pub struct FullSync<'a, ItemType: Send + Sync + Debug, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> { /* private fields */ }
This channel is backed by FullSyncMove, the fastest of the queues for general-purpose use on most hardware. Its fully synchronized design does not allow enqueueing to happen independently of dequeueing, so elements must be copied out of the queue: ItemTypes are required to be Clone, since items have to be moved out during dequeueing (there is no way to keep the queue slot allocated during processing).
This makes the channel a typical best fit for small & trivial types.
Please measure your Unis using all available channels: FullSync, [OgreAtomicQueue] and, possibly, even [OgreMmapLog].
See also [uni::channels::ogre_full_sync_mpmc_queue].
Refresher: the backing queue requires `BUFFER_SIZE` to be a power of 2.
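A minimal usage sketch, composing only the constructor, producer and Stream-creation methods documented below. The module paths, trait imports, element type (u32) and sizes (1024 / 1) are illustrative assumptions, as is the availability of an async runtime and a futures-compatible Stream on the returned MutinyStream.

```rust
use std::sync::Arc;
use futures::StreamExt;  // assumed: the returned MutinyStream implements futures::Stream

// BUFFER_SIZE must be a power of 2; small, Clone-able payloads fit this channel best
type DemoChannel = FullSync<'static, u32, 1024, 1>;

async fn demo() {
    let channel: Arc<DemoChannel> = DemoChannel::new("demo-channel");
    let (mut stream, _stream_id) = channel.create_stream();

    // producer side: move small items in (returns false if the buffer is full)
    for i in 0..10u32 {
        assert!(channel.try_send_movable(i), "buffer was full");
    }

    // consumer side: elements arrive through the Stream
    while let Some(item) = stream.next().await {
        println!("got {item}");
        if item == 9 { break }
    }
}
```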
Trait Implementations
impl<'a, ItemType: Send + Sync + Debug + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelCommon<'a, ItemType, ItemType> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn new<IntoString: Into<String>>(streams_manager_name: IntoString) -> Arc<Self>
Creates a new instance of this channel, to be referred to by the given streams_manager_name.
fn flush<'life0, 'async_trait>(
    &'life0 self,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = u32> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Waits until all pending items are consumed or until the given timeout elapses. Returns the number of still unconsumed items – which is 0 if it was not interrupted by the timeout.
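A short sketch of how flush might be awaited, assuming a channel handle like the one built in the example above; the one-second timeout is purely illustrative.

```rust
use std::sync::Arc;
use std::time::Duration;

async fn drain(channel: &Arc<FullSync<'static, u32, 1024, 1>>) {
    // wait up to 1s for pending items to be consumed; a non-zero result means the timeout hit first
    let unconsumed = channel.flush(Duration::from_secs(1)).await;
    if unconsumed > 0 {
        eprintln!("flush timed out with {unconsumed} items still pending");
    }
}
```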
fn gracefully_end_stream<'life0, 'async_trait>(
    &'life0 self,
    stream_id: u32,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = bool> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Signals that the stream identified by stream_id should cease its activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns true if the stream ended within the given timeout or false if it is still processing elements.
fn gracefully_end_all_streams<'life0, 'async_trait>(
    &'life0 self,
    timeout: Duration
) -> Pin<Box<dyn Future<Output = u32> + Send + 'async_trait>>
where
    Self: 'async_trait,
    'life0: 'async_trait,
Signals that all streams should cease their activities when there are no more elements left to process, waiting for the operation to complete for up to the given timeout. Returns the number of un-ended streams – which is 0 if it was not interrupted by the timeout.
fn cancel_all_streams(&self)
Unlike [end_all_streams()], this method does not wait for any confirmation, nor does it care whether there are remaining elements to be processed.
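A sketch of an orderly shutdown combining the two approaches above, assuming `channel` is the Arc handle returned by new(); the five-second timeout is illustrative.

```rust
use std::sync::Arc;
use std::time::Duration;

async fn shutdown(channel: &Arc<FullSync<'static, u32, 1024, 1>>) {
    // ask every stream to end once its pending elements are processed
    let still_running = channel.gracefully_end_all_streams(Duration::from_secs(5)).await;
    if still_running > 0 {
        // fall back to a hard cancel: no confirmation, remaining elements are not processed
        channel.cancel_all_streams();
    }
}
```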
fn running_streams_count(&self) -> u32
fn pending_items_count(&self) -> u32
IMPLEMENTORS: #[inline(always)]
fn buffer_size(&self) -> u32
IMPLEMENTORS: #[inline(always)]
impl<'a, ItemType: 'a + Debug + Send + Sync, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelConsumer<'a, ItemType> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn consume(&self, _stream_id: u32) -> Option<ItemType>
IMPLEMENTORS: use #[inline(always)]
fn keep_stream_running(&self, stream_id: u32) -> bool
Returns false if the Stream has been signaled to end its operations, causing it to report “out-of-elements” as soon as possible. IMPLEMENTORS: use #[inline(always)]
fn register_stream_waker(&self, stream_id: u32, waker: &Waker)
Registers the waker through which the Stream identified by stream_id may be awoken. IMPLEMENTORS: use #[inline(always)]
fn drop_resources(&self, stream_id: u32)
IMPLEMENTORS: use #[inline(always)]
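For implementors, a rough sketch of how a Stream's poll_next() might drive these four methods. It follows only the contract documented above and is not the crate's actual MutinyStream code; the u32 element type is an assumption for illustration.

```rust
use std::task::{Poll, Waker};

fn poll_next_sketch<C: ChannelConsumer<'static, u32>>(
    channel: &C,
    stream_id: u32,
    waker: &Waker,
) -> Poll<Option<u32>> {
    match channel.consume(stream_id) {
        // an element was available for this stream
        Some(item) => Poll::Ready(Some(item)),
        // nothing available but the stream should keep running: park until an element arrives
        None if channel.keep_stream_running(stream_id) => {
            channel.register_stream_waker(stream_id, waker);
            Poll::Pending
        }
        // the stream was signaled to end: release its resources and report "out-of-elements"
        None => {
            channel.drop_resources(stream_id);
            Poll::Ready(None)
        }
    }
}
```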
impl<'a, ItemType: Send + Sync + Debug + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelProducer<'a, ItemType, ItemType> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn try_send<F: FnOnce(&mut ItemType)>(&self, setter: F) -> bool
Calls setter, passing a slot so the payload may be filled, then sends the event through this channel asynchronously. Returns false if the buffer was full and the item wasn’t sent; true otherwise. IMPLEMENTORS: #[inline(always)]
fn try_send_movable(&self, item: ItemType) -> bool
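A sketch contrasting the two non-blocking producers, assuming a String channel handle (the element type and sizes are illustrative): try_send hands the closure a queue slot to fill in place, while try_send_movable moves a ready value in.

```rust
use std::sync::Arc;

fn produce(channel: &Arc<FullSync<'static, String, 1024, 1>>) {
    // fill the payload directly inside the queue slot
    let accepted = channel.try_send(|slot| *slot = String::from("filled in place"));
    // or move an already-built value into the channel
    let also_accepted = channel.try_send_movable(String::from("moved in"));
    if !(accepted && also_accepted) {
        eprintln!("buffer was full; at least one item was not enqueued");
    }
}
```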
fn send<F: FnOnce(&mut ItemType)>(&self, _setter: F)
Calls setter to fill the payload. If the channel is full, this function may wait until sending is possible.
IMPLEMENTORS: #[inline(always)]
fn send_derived(&self, _derived_item: &DerivedItemType)
For channels that store a DerivedItemType instead of the ItemType, this method may be useful – for instance, when the Stream consumes an Arc-wrapped derived item. The default implementation, though, is made for types that don’t have a derived item type.
IMPLEMENTORS: #[inline(always)]
impl<'a, ItemType: Send + Sync + Debug + 'a, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelUni<'a, ItemType, ItemType> for FullSync<'a, ItemType, BUFFER_SIZE, MAX_STREAMS>
fn create_stream(
    self: &Arc<Self>
) -> (MutinyStream<'a, ItemType, Self, ItemType>, u32)
Returns a Stream (and its stream_id) able to receive elements sent through this channel. If called more than once, each Stream will receive a different element – the “consumer pattern”. Currently panics if called more times than allowed by [Uni]’s MAX_STREAMS.
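A sketch of the “consumer pattern” described above, with MAX_STREAMS = 2 chosen for illustration: each call yields an independent consumer, and every element is delivered to exactly one of them.

```rust
use std::sync::Arc;

fn fan_out(channel: &Arc<FullSync<'static, u32, 1024, 2>>) {
    let (stream_a, id_a) = channel.create_stream();
    let (stream_b, id_b) = channel.create_stream();
    // a third call would currently panic, since MAX_STREAMS is 2
    println!("consumers {id_a} and {id_b} split the elements between them");
    drop((stream_a, stream_b));
}
```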