pub struct OgreArc<DataType, OgreAllocatorType>
where
    DataType: Debug + Send + Sync,
    OgreAllocatorType: BoundedOgreAllocator<DataType> + Send + Sync + 'static,
{ /* private fields */ }
Wrapper type for data, providing an atomic reference counter for drop control – similar to Arc – but allowing a custom allocator, a BoundedOgreAllocator, to be used.
Implementations
impl<DataType, OgreAllocatorType> OgreArc<DataType, OgreAllocatorType>
pub fn new(allocator: &OgreAllocatorType) -> Option<(OgreArc<DataType, OgreAllocatorType>, &mut DataType)>
Similar to Self::new_with(). Returns an OgreArc wrapping a still-uninitialized slot, along with a &mut DataType reference for setting its value; returns None if the allocator is full.
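A minimal sketch of this two-step initialization (assuming allocator is any BoundedOgreAllocator<u32> implementor already in scope – hypothetical here, as the concrete ogre_alloc types are not shown on this page):

let (ogre_arc, slot) = OgreArc::new(&allocator)
    .expect("allocator is full");            // None means no free slot right now
*slot = 42;                                  // initialize the data through the returned &mut
assert_eq!(ogre_arc.references_count(), 1);  // a single reference so far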
pub fn new_with<F>(setter: F, allocator: &OgreAllocatorType) -> Option<OgreArc<DataType, OgreAllocatorType>>
where F: FnOnce(&mut DataType)
Zero-copies the data into one of the slots provided by allocator – which will also be used to deallocate it when the time comes. Zero-copying is enforced (when compiled in Release mode) because this method is inlined into the caller.
None is returned if no space is currently available for the requested allocation.
A possible usage pattern, for use cases that don't care if we're out of space, is:
let allocator = <something from ogre_alloc::*>;
let data = <build your data here>;
let allocated_data = loop {
    match OgreArc::new_with(|slot| *slot = data, &allocator) {
        Some(instance) => break instance,
        None => <<out_of_elements_code>>,   // sleep, warn, etc...
    }
};
pub fn new_with_clones<const COUNT: usize, F>(setter: F, allocator: &OgreAllocatorType) -> Option<[OgreArc<DataType, OgreAllocatorType>; COUNT]>
where F: FnOnce(&mut DataType)
Similar to [new_with()], but pre-loads the reference_count to the specified COUNT value, returning all the clones at once.
This method is faster than calling [new_with()] followed by repeated [clone()] calls.
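A sketch of pre-loading the reference count at construction time (reusing the hypothetical allocator from the example above):

let [arc_1, arc_2, arc_3] = OgreArc::new_with_clones::<3, _>(|slot| *slot = 42, &allocator)
    .expect("allocator is full");
assert_eq!(arc_1.references_count(), 3);     // the three clones share a single counter
drop(arc_2);
drop(arc_3);
assert_eq!(arc_1.references_count(), 1);     // the slot returns to the allocator only when the last clone drops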
pub fn from_allocated(data_id: u32, allocator: &OgreAllocatorType) -> OgreArc<DataType, OgreAllocatorType>
Wraps the data referred to by data_id in this struct, so it will be properly deallocated when dropped – the data must have been previously allocated by the provided allocator.
pub fn from_allocated_with_clones<const COUNT: usize>(data_id: u32, allocator: &OgreAllocatorType) -> [OgreArc<DataType, OgreAllocatorType>; COUNT]
Similar to [from_allocated()], but pre-loads the reference_count to the specified COUNT value, returning all the clones – which is faster than repeated calls to [clone()].
pub unsafe fn increment_references(&self, count: u32) -> &OgreArc<DataType, OgreAllocatorType>
Safety
Increments the reference count of this OgreArc by count.
To be used in conjunction with [raw_copy()] in order to produce several clones at once, in the hope it will be faster than calling [clone()] several times.
IMPORTANT: failing to call [raw_copy()] exactly as many times as the count passed to [increment_references()] will crash the program.
pub unsafe fn raw_copy(&self) -> OgreArc<DataType, OgreAllocatorType>
Safety
Copies this OgreArc (a simple 64-bit pointer) without increasing the reference count – yet the count will still be decreased when the copy is dropped.
To be used after a call to [increment_references()] in order to produce several clones at once, in the hope it will be faster than calling [clone()] several times.
IMPORTANT: failing to call [raw_copy()] exactly as many times as the count passed to [increment_references()] will crash the program.
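A sketch of the bulk-cloning pattern these two unsafe methods enable (reusing ogre_arc from the earlier example, which holds a single reference); note that the number of raw_copy() calls matches the count passed to increment_references():

let (copy_1, copy_2) = unsafe {
    ogre_arc.increment_references(2);              // bump the counter once for both copies...
    (ogre_arc.raw_copy(), ogre_arc.raw_copy())     // ...then take exactly 2 raw copies
};
assert_eq!(ogre_arc.references_count(), 3);        // the original plus the 2 raw copies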
pub fn references_count(&self) -> u32
Returns how many OgreArc copies reference the same data as self does.
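For comparison, the safe route via clone() (referenced throughout these docs) keeps the counter consistent automatically; a quick sketch with a freshly created ogre_arc holding one reference:

let another = ogre_arc.clone();
assert_eq!(ogre_arc.references_count(), 2);
drop(another);
assert_eq!(ogre_arc.references_count(), 1);    // the data is deallocated once this count reaches 0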
Trait Implementations
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelCommon<ItemType, OgreArc<ItemType, OgreAllocatorType>> for Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn new<IntoString>(name: IntoString) -> Arc<Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>
Creates the channel, identified by the given name.
async fn flush(&self, timeout: Duration) -> u32
Waits for all pending items to be consumed, up until timeout elapses. Returns the number of still unconsumed items – which is 0 if it was not interrupted by the timeout.
fn is_channel_open(&self) -> bool
async fn gracefully_end_stream(&self, stream_id: u32, timeout: Duration) -> bool
Signals that the stream identified by stream_id should cease its activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns true if the stream ended within the given timeout or false if it is still processing elements.
async fn gracefully_end_all_streams(&self, timeout: Duration) -> u32
Signals that all streams should cease their activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns the number of un-ended streams – which is 0 if it was not interrupted by the timeout.
fn cancel_all_streams(&self)
Unlike [gracefully_end_all_streams()], this method does not wait for any confirmation, nor does it care if there are remaining elements to be processed.
fn running_streams_count(&self) -> u32
fn pending_items_count(&self) -> u32
Returns the number of items still waiting to be consumed. IMPLEMENTORS: #[inline(always)]
fn buffer_size(&self) -> u32
Returns the maximum number of elements this channel can buffer. IMPLEMENTORS: #[inline(always)]
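A sketch of the shutdown sequence these methods support, inside an async context (channel is assumed to be an Arc of any ChannelCommon implementor, such as the Atomic channel above):

use std::time::Duration;

let timeout = Duration::from_secs(5);
let unconsumed = channel.flush(timeout).await;                           // 0 means everything was consumed in time
let still_running = channel.gracefully_end_all_streams(timeout).await;
if unconsumed > 0 || still_running > 0 {
    channel.cancel_all_streams();                                        // give up: no confirmation, no draining
}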
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelCommon<ItemType, OgreArc<ItemType, OgreAllocatorType>> for FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn new<IntoString>(name: IntoString) -> Arc<FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>
Creates the channel, identified by the given name.
async fn flush(&self, timeout: Duration) -> u32
Waits for all pending items to be consumed, up until timeout elapses. Returns the number of still unconsumed items – which is 0 if it was not interrupted by the timeout.
fn is_channel_open(&self) -> bool
async fn gracefully_end_stream(&self, stream_id: u32, timeout: Duration) -> bool
Signals that the stream identified by stream_id should cease its activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns true if the stream ended within the given timeout or false if it is still processing elements.
async fn gracefully_end_all_streams(&self, timeout: Duration) -> u32
Signals that all streams should cease their activities when there are no more elements left to process, waiting for the operation to complete for up to timeout. Returns the number of un-ended streams – which is 0 if it was not interrupted by the timeout.
fn cancel_all_streams(&self)
Unlike [gracefully_end_all_streams()], this method does not wait for any confirmation, nor does it care if there are remaining elements to be processed.
fn running_streams_count(&self) -> u32
fn pending_items_count(&self) -> u32
Returns the number of items still waiting to be consumed. IMPLEMENTORS: #[inline(always)]
fn buffer_size(&self) -> u32
Returns the maximum number of elements this channel can buffer. IMPLEMENTORS: #[inline(always)]
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelConsumer<'a, OgreArc<ItemType, OgreAllocatorType>> for Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn consume(&self, stream_id: u32) -> Option<OgreArc<ItemType, OgreAllocatorType>>
Consumes the next element available for the stream identified by stream_id, if any. IMPLEMENTORS: use #[inline(always)]
fn keep_stream_running(&self, stream_id: u32) -> bool
Returns false if the Stream identified by stream_id has been signaled to end its operations, causing it to report "out-of-elements" as soon as possible. IMPLEMENTORS: use #[inline(always)]
fn register_stream_waker(&self, stream_id: u32, waker: &Waker)
Registers the waker so the stream identified by stream_id may be awakened. IMPLEMENTORS: use #[inline(always)]
fn drop_resources(&self, stream_id: u32)
Releases any resources associated with the stream identified by stream_id. IMPLEMENTORS: use #[inline(always)]
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelConsumer<'a, OgreArc<ItemType, OgreAllocatorType>> for FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn consume(&self, stream_id: u32) -> Option<OgreArc<ItemType, OgreAllocatorType>>
Consumes the next element available for the stream identified by stream_id, if any. IMPLEMENTORS: use #[inline(always)]
fn keep_stream_running(&self, stream_id: u32) -> bool
Returns false if the Stream identified by stream_id has been signaled to end its operations, causing it to report "out-of-elements" as soon as possible. IMPLEMENTORS: use #[inline(always)]
fn register_stream_waker(&self, stream_id: u32, waker: &Waker)
Registers the waker so the stream identified by stream_id may be awakened. IMPLEMENTORS: use #[inline(always)]
fn drop_resources(&self, stream_id: u32)
Releases any resources associated with the stream identified by stream_id. IMPLEMENTORS: use #[inline(always)]
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelMulti<'a, ItemType, OgreArc<ItemType, OgreAllocatorType>> for Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn create_stream_for_old_events(self: &Arc<Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a Stream (and its stream_id) able to receive elements that were sent through this channel before the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. If called more than once, every stream will see all the past events available. Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_stream_for_new_events(self: &Arc<Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a Stream (and its stream_id) able to receive elements sent through this channel after the call to this method. If called more than once, each Stream will see all new elements – the "listener pattern". Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_streams_for_old_and_new_events(self: &Arc<Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> ((MutinyStream<'a, ItemType, Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32), (MutinyStream<'a, ItemType, Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32))
fn create_stream_for_old_and_new_events(self: &Arc<Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a single Stream (and its stream_id) able to receive elements that were sent through this channel either before or after the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. Notice that, with this method, there is no way of discriminating where the "old" events end and where the "new" events start.
If called more than once, every stream will see all the past events available, as well as all future events after this method call.
Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
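A sketch of the stream-creation choices described above (channel is assumed to be an Arc wrapping this Atomic multi channel; each call consumes one of the MAX_STREAMS slots):

let (listener_a, _id_a) = channel.create_stream_for_new_events();          // sees only events sent from now on
let (listener_b, _id_b) = channel.create_stream_for_new_events();          // an independent listener: also sees every new event
let (replayer, _id_r)   = channel.create_stream_for_old_and_new_events();  // additionally replays whatever past events the channel retains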
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelMulti<'a, ItemType, OgreArc<ItemType, OgreAllocatorType>> for FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn create_stream_for_old_events(self: &Arc<FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a Stream (and its stream_id) able to receive elements that were sent through this channel before the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. If called more than once, every stream will see all the past events available. Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_stream_for_new_events(self: &Arc<FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a Stream (and its stream_id) able to receive elements sent through this channel after the call to this method. If called more than once, each Stream will see all new elements – the "listener pattern". Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
fn create_streams_for_old_and_new_events(self: &Arc<FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> ((MutinyStream<'a, ItemType, FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32), (MutinyStream<'a, ItemType, FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32))
fn create_stream_for_old_and_new_events(self: &Arc<FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>>) -> (MutinyStream<'a, ItemType, FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>, OgreArc<ItemType, OgreAllocatorType>>, u32)
Returns a single Stream (and its stream_id) able to receive elements that were sent through this channel either before or after the call to this method. It is up to each implementor to define how far back in the past those events may go, but it is known that mmap log based channels are able to see all past events. Notice that, with this method, there is no way of discriminating where the "old" events end and where the "new" events start.
If called more than once, every stream will see all the past events available, as well as all future events after this method call.
Currently panics if called more times than allowed by [Multi]'s MAX_STREAMS.
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelProducer<'a, ItemType, OgreArc<ItemType, OgreAllocatorType>> for Atomic<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn send(&self, item: ItemType) -> RetryResult<(), ItemType, (), ()>
Sends item through this channel. See [Self::send_with()] for how to deal with the returned type. IMPLEMENTORS: #[inline(always)]
fn send_with<F>(&self, setter: F) -> RetryResult<(), F, (), ()>
where F: FnOnce(&mut ItemType)
Calls setter, passing a slot so the payload may be filled there, then sends the event through this channel asynchronously. The returned type is convertible to Result<(), F> by calling .into() on it, returning Err<setter> when the buffer is full, to allow the caller to try again; otherwise, you may add any retrying logic using the keen-retry crate's API.
async fn send_with_async<F, Fut>(&'a self, setter: F) -> RetryResult<(), F, (), ()>
Like Self::send_with(), but with an async setter. This method is useful for sending operations that depend on data acquired by async blocks, allowing select loops to be built.
fn send_derived(&self, ogre_arc_item: &OgreArc<ItemType, OgreAllocatorType>) -> bool
When the consumer side deals in the DerivedItemType instead of the ItemType, this method may be useful – for instance: if the Stream consumes OgreArc<Type> (the derived item type) and the channel is for Type, with this method one may send an OgreArc directly. IMPLEMENTORS: #[inline(always)]
fn reserve_slot(&self) -> Option<&mut ItemType>
Reserves a slot in the channel's buffer, returning a mutable reference so the payload may be filled in place – or None if no slot is currently available. See also [Self::send_reserved()] and [Self::cancel_slot_reserve()].
fn try_send_reserved(&self, reserved_slot: &mut ItemType) -> bool
Attempts to send a slot previously obtained from [reserve_slot()]. A failure to send (when false is returned) might be part of the normal channel operation, so retrying is advised. More: some channel implementations are optimized for (or may even only accept) sending the slots in the same order they were reserved.
fn try_cancel_slot_reserve(&self, reserved_slot: &mut ItemType) -> bool
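A sketch of the two producer patterns above (channel is assumed to implement ChannelProducer for an ItemType of u32; a plain spin loop stands in for the keen-retry logic mentioned above):

// closure-based, zero-copy send: the setter is handed back when the buffer is full
loop {
    let outcome: Result<(), _> = channel.send_with(|slot: &mut u32| *slot = 42).into();
    match outcome {
        Ok(()) => break,
        Err(_setter) => std::hint::spin_loop(),   // buffer full: busy-wait, then retry
    }
}

// reserve / fill / commit, for payloads built in place
if let Some(slot) = channel.reserve_slot() {
    *slot = 42;
    while !channel.try_send_reserved(slot) {
        // a `false` here may be part of normal operation: just retry
    }
}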
impl<'a, ItemType, OgreAllocatorType, const BUFFER_SIZE: usize, const MAX_STREAMS: usize> ChannelProducer<'a, ItemType, OgreArc<ItemType, OgreAllocatorType>> for FullSync<'a, ItemType, OgreAllocatorType, BUFFER_SIZE, MAX_STREAMS>
fn send(&self, item: ItemType) -> RetryResult<(), ItemType, (), ()>
Sends item through this channel. See [Self::send_with()] for how to deal with the returned type. IMPLEMENTORS: #[inline(always)]
fn send_with<F>(&self, setter: F) -> RetryResult<(), F, (), ()>
where F: FnOnce(&mut ItemType)
Calls setter, passing a slot so the payload may be filled there, then sends the event through this channel asynchronously. The returned type is convertible to Result<(), F> by calling .into() on it, returning Err<setter> when the buffer is full, to allow the caller to try again; otherwise, you may add any retrying logic using the keen-retry crate's API.
async fn send_with_async<F, Fut>(&'a self, setter: F) -> RetryResult<(), F, (), ()>
Like Self::send_with(), but with an async setter. This method is useful for sending operations that depend on data acquired by async blocks, allowing select loops to be built.
fn send_derived(&self, ogre_arc_item: &OgreArc<ItemType, OgreAllocatorType>) -> bool
When the consumer side deals in the DerivedItemType instead of the ItemType, this method may be useful – for instance: if the Stream consumes OgreArc<Type> (the derived item type) and the channel is for Type, with this method one may send an OgreArc directly. IMPLEMENTORS: #[inline(always)]
fn reserve_slot(&self) -> Option<&mut ItemType>
Reserves a slot in the channel's buffer, returning a mutable reference so the payload may be filled in place – or None if no slot is currently available. See also [Self::send_reserved()] and [Self::cancel_slot_reserve()].
fn try_send_reserved(&self, reserved_slot: &mut ItemType) -> bool
Attempts to send a slot previously obtained from [reserve_slot()]. A failure to send (when false is returned) might be part of the normal channel operation, so retrying is advised. More: some channel implementations are optimized for (or may even only accept) sending the slots in the same order they were reserved.