Struct rdkafka::consumer::stream_consumer::StreamConsumer
#[must_use = "Consumer polling thread will stop immediately if unused"]
pub struct StreamConsumer<C: ConsumerContext + 'static = DefaultConsumerContext> { /* fields omitted */ }
A Kafka consumer providing a futures::Stream interface.
This consumer doesn't need to be polled manually since it runs a separate polling thread. Due to the asynchronous nature of the stream, some messages might be consumed by the consumer without being processed on the other end of the stream. If auto commit is used, this might cause message loss after a consumer restart. Manual offset storing should be used instead; see the store_offset method on Consumer.
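As a sketch of the recommended pattern described above (manual offset storing, with offsets stored only after processing), assuming a broker at localhost:9092, an illustrative group and topic name, and the futures 0.1 `Stream::wait` API that this version's `MessageStream` targets:

```rust
use futures::stream::Stream;
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{Consumer, StreamConsumer};

fn main() {
    let consumer: StreamConsumer = ClientConfig::new()
        .set("group.id", "example-group")
        .set("bootstrap.servers", "localhost:9092")
        // Offsets are stored explicitly after processing, not on receipt.
        .set("enable.auto.offset.store", "false")
        .create()
        .expect("consumer creation failed");

    consumer.subscribe(&["example-topic"]).expect("subscribe failed");

    // `start()` spawns the polling thread and yields messages as a Stream.
    for message in consumer.start().wait() {
        match message {
            Ok(Ok(m)) => {
                // Process the message here, then mark its offset so the
                // next (auto)commit only covers processed messages.
                consumer.store_offset(&m).expect("store_offset failed");
            }
            Ok(Err(e)) => eprintln!("Kafka error: {}", e),
            Err(_) => eprintln!("stream error"),
        }
    }
}
```

The broker address, group id, and topic name are placeholders; substitute your own deployment's values.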
Methods
impl<C: ConsumerContext> StreamConsumer<C>
pub fn start(&self) -> MessageStream<C>
Starts the StreamConsumer with the default configuration (100ms polling interval and no NoMessageReceived notifications).
pub fn start_with(&self, poll_interval: Duration, no_message_error: bool) -> MessageStream<C>
Starts the StreamConsumer with the specified poll interval. Additionally, if no_message_error is set to true, an error of type KafkaError::NoMessageReceived will be returned every time the poll interval is reached and no message has been received.
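A minimal sketch of start_with with error notifications enabled, assuming `consumer` is an already-subscribed StreamConsumer (as in the example above); the 500ms interval and the handling of each arm are illustrative:

```rust
use std::time::Duration;

use futures::stream::Stream;
use rdkafka::error::KafkaError;
use rdkafka::message::Message;

// Poll every 500ms; emit NoMessageReceived when nothing arrives in time.
let stream = consumer.start_with(Duration::from_millis(500), true);
for message in stream.wait() {
    match message {
        Ok(Ok(m)) => println!("received message at offset {}", m.offset()),
        Ok(Err(KafkaError::NoMessageReceived)) => {
            // No message within the poll interval; can serve as a liveness tick.
        }
        Ok(Err(e)) => eprintln!("Kafka error: {}", e),
        Err(_) => break,
    }
}
```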
pub fn stop(&self)
Stops the StreamConsumer, blocking the caller until the internal consumer has been stopped.
Trait Implementations
impl<C: ConsumerContext> Consumer<C> for StreamConsumer<C>
fn get_base_consumer(&self) -> &BaseConsumer<C>
Returns a reference to the BaseConsumer.
fn subscribe(&self, topics: &[&str]) -> KafkaResult<()>
Subscribe the consumer to a list of topics.
fn unsubscribe(&self)
Unsubscribes from the current subscription list.
fn assign(&self, assignment: &TopicPartitionList) -> KafkaResult<()>
Manually assign topics and partitions to the consumer. If used, automatic consumer rebalance won't be activated.
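A sketch of manual assignment, assuming `consumer` is a StreamConsumer created as above; the topic name, partition numbers, and starting offsets are illustrative. Since assignment bypasses the group rebalance protocol, the consumer reads exactly these partitions:

```rust
use rdkafka::topic_partition_list::{Offset, TopicPartitionList};

let mut tpl = TopicPartitionList::new();
// Read partition 0 from the beginning and partition 1 from offset 42.
tpl.add_partition_offset("example-topic", 0, Offset::Beginning);
tpl.add_partition_offset("example-topic", 1, Offset::Offset(42));
consumer.assign(&tpl).expect("assign failed");
```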
fn commit(&self, topic_partition_list: &TopicPartitionList, mode: CommitMode) -> KafkaResult<()>
Commits the offsets in the specified topic-partition list. The commit can be sync (blocking) or async. Notice that when a specific offset is committed, all the previous offsets are considered committed as well. Use this method only if you are processing messages in order.
fn commit_consumer_state(&self, mode: CommitMode) -> KafkaResult<()>
Commit the current consumer state. Notice that if the consumer fails after a message has been received, but before the message has been processed by the user code, this might lead to data loss. Check the "at-least-once delivery" section in the readme for more information.
fn commit_message(&self, message: &BorrowedMessage, mode: CommitMode) -> KafkaResult<()>
Commit the provided message. Note that this will also automatically commit every message with lower offset within the same partition.
fn store_offset(&self, message: &BorrowedMessage) -> KafkaResult<()>
Store the offset for this message to be used on the next (auto)commit. When using this, enable.auto.offset.store should be set to false in the config.
fn subscription(&self) -> KafkaResult<TopicPartitionList>
Returns the current topic subscription.
fn assignment(&self) -> KafkaResult<TopicPartitionList>
Returns the current partition assignment.
fn committed<T>(&self, timeout: T) -> KafkaResult<TopicPartitionList> where T: Into<Option<Duration>>, Self: Sized
Retrieve committed offsets for topics and partitions.
fn offsets_for_timestamp<T>(&self, timestamp: i64, timeout: T) -> KafkaResult<TopicPartitionList> where T: Into<Option<Duration>>, Self: Sized
Lookup the offsets for this consumer's partitions by timestamp.
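A sketch of a timestamp lookup, assuming `consumer` is an assigned StreamConsumer as above; the one-hour window and the 10-second timeout are illustrative. The timestamp is a Unix epoch time in milliseconds:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Find the earliest offsets at or after one hour ago.
let one_hour_ago = SystemTime::now() - Duration::from_secs(3600);
let ts_ms = one_hour_ago
    .duration_since(UNIX_EPOCH)
    .expect("clock before epoch")
    .as_millis() as i64;

// A bounded timeout; passing `None` would block indefinitely.
let offsets = consumer
    .offsets_for_timestamp(ts_ms, Duration::from_secs(10))
    .expect("offset lookup failed");
for elem in offsets.elements() {
    println!("{} [{}] -> {:?}", elem.topic(), elem.partition(), elem.offset());
}
```

The returned list can then be passed to assign or used to seek before consuming.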
fn position(&self) -> KafkaResult<TopicPartitionList>
Retrieve current positions (offsets) for topics and partitions.
fn fetch_metadata<T>(&self, topic: Option<&str>, timeout: T) -> KafkaResult<Metadata> where T: Into<Option<Duration>>, Self: Sized
Returns the metadata information for the specified topic, or for all topics in the cluster if no topic is specified.
fn fetch_watermarks<T>(&self, topic: &str, partition: i32, timeout: T) -> KafkaResult<(i64, i64)> where T: Into<Option<Duration>>, Self: Sized
Returns the low and high watermark offsets for the specified topic and partition.
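A sketch using the watermarks to gauge how many messages a partition currently holds, assuming `consumer` is a StreamConsumer as above; the topic name, partition, and timeout are illustrative:

```rust
use std::time::Duration;

let (low, high) = consumer
    .fetch_watermarks("example-topic", 0, Duration::from_secs(5))
    .expect("watermark fetch failed");
// `low` is the oldest available offset, `high` the offset of the next
// message to be produced, so `high - low` approximates the backlog size.
println!("partition 0 holds offsets {}..{} ({} messages)", low, high, high - low);
```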
fn fetch_group_list<T>(&self, group: Option<&str>, timeout: T) -> KafkaResult<GroupList> where T: Into<Option<Duration>>, Self: Sized
Returns the group membership information for the given group. If no group is specified, all groups will be returned.
impl FromClientConfig for StreamConsumer
fn from_config(config: &ClientConfig) -> KafkaResult<StreamConsumer>
Create a client from client configuration. The default client context will be used.
impl<C: ConsumerContext> FromClientConfigAndContext<C> for StreamConsumer<C>
Creates a new StreamConsumer starting from a ClientConfig.
fn from_config_and_context(config: &ClientConfig, context: C) -> KafkaResult<StreamConsumer<C>>
Create a client from client configuration and a client context.
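As a sketch of supplying a custom context, assuming the broker address and group id are placeholders and that in this version ConsumerContext requires a ClientContext implementation; `LoggingContext` is a hypothetical type that reacts to rebalance events:

```rust
use rdkafka::client::ClientContext;
use rdkafka::config::ClientConfig;
use rdkafka::consumer::{ConsumerContext, Rebalance, StreamConsumer};

// A hypothetical context that is notified after each rebalance.
struct LoggingContext;

impl ClientContext for LoggingContext {}

impl ConsumerContext for LoggingContext {
    fn post_rebalance(&self, _rebalance: &Rebalance) {
        println!("partition assignment changed");
    }
}

fn main() {
    let consumer: StreamConsumer<LoggingContext> = ClientConfig::new()
        .set("group.id", "example-group")
        .set("bootstrap.servers", "localhost:9092")
        .create_with_context(LoggingContext)
        .expect("consumer creation failed");
    let _ = consumer;
}
```

create_with_context is the context-aware counterpart of create on ClientConfig and dispatches through FromClientConfigAndContext.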