pub trait BatchedQueueTrait<T> {
    // Required methods
    fn len(&self) -> usize;
    fn capacity(&self) -> usize;
    fn is_empty(&self) -> bool;
    fn push(&self, item: T) -> Result<(), BatchedQueueError>;
    fn try_next_batch(&self) -> Result<Option<Vec<T>>, BatchedQueueError>;
    fn next_batch(&self) -> Result<Vec<T>, BatchedQueueError>;
    fn next_batch_timeout(
        &self,
        timeout: Duration,
    ) -> Result<Vec<T>, BatchedQueueError>;
    fn flush(&self) -> Result<(), BatchedQueueError>;
}
Defines the common interface for batched queue implementations.
This trait provides methods for adding items to a queue, retrieving batches of items, and checking queue status. All implementations must handle the buffering of items until they form complete batches, and provide mechanisms for flushing partial batches when needed.
§Examples
use batched_queue::{BatchedQueue, BatchedQueueTrait};

// Create a queue with batch size of 10
let queue = BatchedQueue::new(10).expect("Failed to create queue");

// Create a sender that can be shared across threads
let sender = queue.create_sender();

// Push items to the queue (in real usage, this would be in another thread)
for i in 0..25 {
    sender.push(i).expect("Failed to push item");
}

// Flush any remaining items that haven't formed a complete batch
sender.flush().expect("Failed to flush");

// Process batches
while let Some(batch) = queue.try_next_batch().expect("Failed to get batch") {
    println!("Processing batch of {} items", batch.len());
    for item in batch {
        // Process each item
        println!(" Item: {}", item);
    }
}

§Required Methods
fn push(&self, item: T) -> Result<(), BatchedQueueError>
Adds an item to the queue.
If adding this item causes the current batch to reach the configured batch size, the batch will be automatically sent for processing.
§Arguments
item - The item to add to the queue
§Errors
Returns BatchedQueueError::Disconnected if the receiving end has been dropped,
or other implementation-specific errors.
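Once enough pushes have arrived to fill a batch, that batch becomes available to the consumer without an explicit flush. A minimal sketch, reusing the queue/sender pattern from the trait-level example (the batch size of 3 and the item values are illustrative):

use batched_queue::{BatchedQueue, BatchedQueueTrait};

let queue = BatchedQueue::new(3).expect("Failed to create queue");
let sender = queue.create_sender();

// Pushing the third item completes a batch of 3, which is handed off automatically.
for i in 0..3 {
    sender.push(i).expect("Failed to push item");
}

// The full batch is now retrievable without calling flush().
let batch = queue.try_next_batch().expect("Failed to get batch");
assert_eq!(batch.map(|b| b.len()), Some(3));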
fn try_next_batch(&self) -> Result<Option<Vec<T>>, BatchedQueueError>
Attempts to retrieve the next batch of items without blocking, returning Ok(None) if no batch is currently available.
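In contrast to next_batch, the call returns immediately when nothing is ready. A short sketch of both outcomes, again assuming the queue/sender pattern from the trait-level example:

use batched_queue::{BatchedQueue, BatchedQueueTrait};

let queue = BatchedQueue::new(4).expect("Failed to create queue");
let sender = queue.create_sender();

// Only 2 of the 4 items needed for a complete batch have been pushed...
sender.push(1).expect("Failed to push item");
sender.push(2).expect("Failed to push item");

// ...so no batch is available yet and the call returns Ok(None) immediately.
assert!(queue.try_next_batch().expect("Failed to get batch").is_none());

// flush() turns the buffered items into a (partial) batch that can then be retrieved.
sender.flush().expect("Failed to flush");
assert!(queue.try_next_batch().expect("Failed to get batch").is_some());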
fn next_batch(&self) -> Result<Vec<T>, BatchedQueueError>
Retrieves the next batch of items, blocking until one is available.
§Errors
Returns BatchedQueueError::Disconnected if the sending end has been dropped,
or other implementation-specific errors.
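Because this call blocks, it is typically paired with a producer on another thread. A minimal sketch, assuming (as the trait-level example suggests) that the sender returned by create_sender can be moved into a spawned thread:

use batched_queue::{BatchedQueue, BatchedQueueTrait};
use std::thread;

let queue = BatchedQueue::new(5).expect("Failed to create queue");
let sender = queue.create_sender();

// Producer thread: pushing the fifth item completes the batch.
let producer = thread::spawn(move || {
    for i in 0..5 {
        sender.push(i).expect("Failed to push item");
    }
});

// Blocks until the producer has completed a full batch.
let batch = queue.next_batch().expect("Failed to get batch");
assert_eq!(batch.len(), 5);

producer.join().expect("Producer thread panicked");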
fn next_batch_timeout(&self, timeout: Duration) -> Result<Vec<T>, BatchedQueueError>
Retrieves the next batch of items, blocking until one is available or until the timeout expires.
§Arguments
timeout - Maximum time to wait for a batch to become available
§Errors
Returns:
- BatchedQueueError::Timeout if no batch becomes available within the timeout period
- BatchedQueueError::Disconnected if the sending end has been dropped
- Other implementation-specific errors
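When the producer may go quiet, the timeout variant lets the consumer wake up periodically instead of blocking forever. A sketch of handling the Timeout case, assuming Duration here is std::time::Duration, that Timeout is a unit variant, and that BatchedQueue is generic over its item type as the trait is:

use batched_queue::{BatchedQueue, BatchedQueueTrait, BatchedQueueError};
use std::time::Duration;

let queue: BatchedQueue<u32> = BatchedQueue::new(8).expect("Failed to create queue");

// Nothing has been pushed, so waiting 50 ms ends in Timeout rather than a batch.
match queue.next_batch_timeout(Duration::from_millis(50)) {
    Ok(batch) => println!("Processing batch of {} items", batch.len()),
    Err(BatchedQueueError::Timeout) => println!("No batch within 50 ms, doing other work"),
    Err(e) => println!("Queue error: {:?}", e),
}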
fn flush(&self) -> Result<(), BatchedQueueError>
Flushes any pending items into a batch, even if the batch is not full.
This is useful for ensuring that all items are processed, especially during shutdown or when batches need to be processed on demand.
§Errors
Returns BatchedQueueError::Disconnected if the receiving end has been dropped,
or other implementation-specific errors.
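A common use is flushing during shutdown so that items which never filled a complete batch are still delivered. A minimal sketch, reusing the queue/sender pattern from the trait-level example:

use batched_queue::{BatchedQueue, BatchedQueueTrait};

let queue = BatchedQueue::new(10).expect("Failed to create queue");
let sender = queue.create_sender();

// Only 3 items: not enough to complete a batch of 10 on their own.
for i in 0..3 {
    sender.push(i).expect("Failed to push item");
}

// Without flush() these items would stay buffered; flushing emits them as a partial batch.
sender.flush().expect("Failed to flush");

let batch = queue.try_next_batch().expect("Failed to get batch");
assert_eq!(batch.map(|b| b.len()), Some(3));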