Module kafka::producer

Kafka Producer - A higher-level API for sending messages to Kafka topics.

This module hosts a multi-topic capable producer for a Kafka cluster, providing a convenient API for sending messages synchronously.

In Kafka, each message is a key/value pair in which either the key or the value may be omitted. A Record represents all the data necessary to produce such a message to Kafka through the Producer. It specifies the target topic and the target partition the message is supposed to be delivered to, as well as the key and the value.

Example

use std::fmt::Write;
use std::time::Duration;
use kafka::producer::{Producer, Record, RequiredAcks};

let mut producer =
    Producer::from_hosts(vec!("localhost:9092".to_owned()))
        .with_ack_timeout(Duration::from_secs(1))
        .with_required_acks(RequiredAcks::One)
        .create()
        .unwrap();

let mut buf = String::with_capacity(2);
for i in 0..10 {
  let _ = write!(&mut buf, "{}", i); // some computation of the message data to be sent
  producer.send(&Record::from_value("my-topic", buf.as_bytes())).unwrap();
  buf.clear();
}

In this example, when producer.send(..) returns successfully, we are guaranteed that the message has been delivered to Kafka and persisted by at least one Kafka broker. However, when sending multiple messages, as in this example, it is more efficient to send them in batches using Producer::send_all.
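
For instance, the loop above can be restructured to build all the records first and then deliver them with a single send_all call. The following is a minimal sketch against the same local broker; send_all also reports per-partition confirmations, which are simply ignored here.

use std::time::Duration;
use kafka::producer::{Producer, Record, RequiredAcks};

let mut producer =
    Producer::from_hosts(vec!("localhost:9092".to_owned()))
        .with_ack_timeout(Duration::from_secs(1))
        .with_required_acks(RequiredAcks::One)
        .create()
        .unwrap();

// Prepare the message payloads up front ...
let payloads: Vec<String> = (0..10).map(|i| i.to_string()).collect();
let records: Vec<_> = payloads
    .iter()
    .map(|p| Record::from_value("my-topic", p.as_bytes()))
    .collect();

// ... and hand the whole batch to the producer in one call.
producer.send_all(&records).unwrap();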

Since some of a Record's attributes are optional, convenience methods exist to ease their creation. In this example, the call to Record::from_value creates a key-less, value-only record with an unspecified partition. The Record struct, however, is intended to give client code full control over its lifecycle and is therefore fully open; its current constructor methods are provided merely for convenience.
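
For illustration, here are a few ways of constructing records. As in the example above, plain byte slices are used for the key and the value; any types implementing AsBytes would work as well. The field access at the end assumes the Record struct's public topic field.

use kafka::producer::Record;

// A key-less, value-only record; its partition is left unspecified and
// will be chosen later by the producer's partitioner.
let value_only = Record::from_value("my-topic", "hello".as_bytes());

// A record carrying both a key and a value.
let keyed = Record::from_key_value("my-topic", "user-42".as_bytes(), "hello".as_bytes());

// Since the struct is fully open, client code can also inspect or
// modify its fields directly.
assert_eq!(value_only.topic, "my-topic");
assert_eq!(keyed.topic, "my-topic");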

Besides the target topic, the key, and the value of a Record, client code may also specify the topic partition the message is to be delivered to. If the partition of a Record is not specified, more precisely, if it is negative, the Producer relies on its underlying Partitioner to find a suitable one. A Partitioner implementation can be supplied by client code at the Producer's construction time and defaults to DefaultPartitioner. See the latter's documentation for details on its partitioning strategy.
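
To make that concrete, the sketch below pins the partition of one record explicitly via Record#with_partition and leaves it unspecified on another, in which case the producer's partitioner (the DefaultPartitioner unless overridden at construction time) picks one at send time. The assertions assume the Record struct's public partition field; partition 0 is an arbitrary choice for illustration.

use kafka::producer::Record;

// Explicitly target partition 0 of "my-topic"; the partitioner is bypassed.
let pinned = Record::from_value("my-topic", "payload".as_bytes()).with_partition(0);

// No partition given, so it stays negative and the partitioner decides later.
let unpinned = Record::from_value("my-topic", "payload".as_bytes());

assert!(pinned.partition >= 0);
assert!(unpinned.partition < 0);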

Re-exports

Structs

  • A Kafka Producer builder easing the process of setting up the various configuration options.
  • As its name implies, DefaultPartitioner is the default partitioner for Producer.
  • Producer-relevant partition information for a particular topic.
  • The Kafka Producer.
  • A structure representing a message to be sent to Kafka through the Producer API. Such a message is basically a key/value pair specifying the target topic and optionally the topic’s partition.
  • A description of available topics and their available partitions.

Constants

Traits

  • A trait used by Producer to obtain the bytes Record::key and Record::value represent. This leaves the choice of the types for key and value with the client; see the sketch after this list.
  • A partitioner is given a chance to choose/redefine a partition for a message to be sent to Kafka. See also Record#with_partition.
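
As an illustration of the AsBytes point above, client code can make its own key or value types usable in a Record by implementing the trait. The following minimal sketch assumes the trait's single as_bytes(&self) -> &[u8] method and uses a hypothetical JsonPayload wrapper around an already-serialized buffer.

use kafka::producer::{AsBytes, Record};

// Hypothetical wrapper around an already-serialized JSON payload.
struct JsonPayload(Vec<u8>);

impl AsBytes for JsonPayload {
    fn as_bytes(&self) -> &[u8] {
        &self.0
    }
}

// The wrapper can now be used directly as a record value.
let record = Record::from_value("my-topic", JsonPayload(b"{\"id\":42}".to_vec()));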

Type Aliases

  • The default hasher implementation used by DefaultPartitioner.