Struct fluvio::consumer::PartitionConsumer
pub struct PartitionConsumer<P = SpuSocketPool> { /* private fields */ }
An interface for consuming events from a particular partition
Implementations
impl<P> PartitionConsumer<P>
where
    P: SpuDirectory,

pub fn new( topic: String, partition: PartitionId, pool: Arc<P>, metrics: Arc<ClientMetrics>, ) -> Self

Creates a new consumer for the given topic and partition, using the provided SPU pool and client metrics handle.
pub fn partition(&self) -> PartitionId
Returns the ID of the partition that this consumer reads from
pub fn metrics(&self) -> Arc<ClientMetrics>
Returns a shared instance of ClientMetrics
pub async fn stream( &self, offset: Offset, ) -> Result<impl Stream<Item = Result<Record, ErrorCode>>>

👎 Deprecated since 0.21.8: use Fluvio::consumer_with_config() instead

Continuously streams events from a particular offset in the consumer's partition.
Streaming is one of the two ways to consume events in Fluvio. It is a continuous request for new records arriving in a partition, beginning at a particular offset. You specify the starting point of the stream using an Offset and periodically receive events, either individually or in batches.
Note: this method yields individual records (ConsumerRecord) rather than batches.
If you want more fine-grained control over how records are streamed,
check out the stream_with_config
method.
§Example
use futures::StreamExt;
use fluvio::Offset;

// `consumer` is an existing PartitionConsumer
let mut stream = consumer.stream(Offset::beginning()).await?;
while let Some(Ok(record)) = stream.next().await {
    let key_str = record.get_key().map(|key| key.as_utf8_lossy_string());
    let value_str = record.get_value().as_utf8_lossy_string();
    println!("Got event: key={:?}, value={}", key_str, value_str);
}
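Since this method is deprecated in favor of Fluvio::consumer_with_config(), a migration sketch may be useful. This is a hedged sketch, not the definitive API: the ConsumerConfigExtBuilder name and its topic/partition/offset_start setters are assumptions about the post-0.21.8 fluvio API and should be checked against your crate version. It also assumes a reachable Fluvio cluster, so it is not runnable standalone.

```rust
use futures::StreamExt;
use fluvio::{Fluvio, Offset};
// Assumed module path and builder type; verify against your fluvio version.
use fluvio::consumer::ConsumerConfigExtBuilder;

async fn consume(fluvio: &Fluvio) -> anyhow::Result<()> {
    // Assumed builder API: select the same topic/partition that the
    // PartitionConsumer targeted, starting from the beginning.
    let config = ConsumerConfigExtBuilder::default()
        .topic("my-topic".to_string())
        .partition(0)
        .offset_start(Offset::beginning())
        .build()?;

    // consumer_with_config replaces PartitionConsumer::stream and friends.
    let mut stream = fluvio.consumer_with_config(config).await?;
    while let Some(Ok(record)) = stream.next().await {
        println!("value={}", record.get_value().as_utf8_lossy_string());
    }
    Ok(())
}
```

The loop body is the same as in the deprecated examples; only the way the stream is obtained changes.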
pub async fn stream_with_config( &self, offset: Offset, config: ConsumerConfig, ) -> Result<impl Stream<Item = Result<Record, ErrorCode>>>

👎 Deprecated since 0.21.8: use Fluvio::consumer_with_config() instead

Continuously streams events from a particular offset in the consumer's partition.
Most of the time, you shouldn't need to use a custom ConsumerConfig. If you don't know what these settings do, try checking out the simpler PartitionConsumer::stream method that uses the default streaming settings.
Streaming is one of the two ways to consume events in Fluvio. It is a continuous request for new records arriving in a partition, beginning at a particular offset. You specify the starting point of the stream using an Offset and a ConsumerConfig, and periodically receive events, either individually or in batches.
§Example
use futures::StreamExt;
use fluvio::{consumer::ConsumerConfig, Offset};

// `consumer` is an existing PartitionConsumer.
// Use a custom max_bytes value in the config.
let fetch_config = ConsumerConfig::builder()
    .max_bytes(1000)
    .build()?;
let mut stream = consumer.stream_with_config(Offset::beginning(), fetch_config).await?;
while let Some(Ok(record)) = stream.next().await {
    let key_str = record.get_key().map(|key| key.as_utf8_lossy_string());
    let value_str = record.get_value().as_utf8_lossy_string();
    println!("Got record: key={:?}, value={}", key_str, value_str);
}
pub async fn stream_batches_with_config( &self, offset: Offset, config: ConsumerConfig, ) -> Result<impl Stream<Item = Result<Batch, ErrorCode>>>

👎 Deprecated since 0.21.8: use Fluvio::consumer_with_config() instead

Continuously streams batches of messages, starting at an offset in the consumer's partition.

§Example
use futures::StreamExt;
use fluvio::{consumer::ConsumerConfig, Offset};

// `consumer` is an existing PartitionConsumer.
// Use a custom max_bytes value in the config.
let fetch_config = ConsumerConfig::builder()
    .max_bytes(1000)
    .build()?;
let mut stream = consumer.stream_batches_with_config(Offset::beginning(), fetch_config).await?;
while let Some(Ok(batch)) = stream.next().await {
    for record in batch.records() {
        let key_str = record.key().map(|key| key.as_utf8_lossy_string());
        let value_str = record.value().as_utf8_lossy_string();
        println!("Got record: key={:?}, value={}", key_str, value_str);
    }
}
Trait Implementations

Auto Trait Implementations

impl<P> Freeze for PartitionConsumer<P>
impl<P> RefUnwindSafe for PartitionConsumer<P> where P: RefUnwindSafe
impl<P> Send for PartitionConsumer<P>
impl<P> Sync for PartitionConsumer<P>
impl<P> Unpin for PartitionConsumer<P>
impl<P> UnwindSafe for PartitionConsumer<P> where P: RefUnwindSafe
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized
    fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone
    unsafe fn clone_to_uninit(&self, dst: *mut T)