//! # rust-rdkafka
//!
//! A fully asynchronous, [futures]-based Kafka client library for Rust based on [librdkafka].
//!
//! ## The library
//!
//! `rust-rdkafka` provides a safe Rust interface to librdkafka. It is currently based on librdkafka 0.9.5.
//!
//! ### Documentation
//!
//! - [Current master branch](https://fede1024.github.io/rust-rdkafka/)
//! - [Latest release](https://docs.rs/rdkafka/)
//! - [Changelog](https://github.com/fede1024/rust-rdkafka/blob/master/changelog.md)
//!
//! ### Features
//!
//! The main features provided at the moment are:
//!
//! - Support for Kafka 0.8.x, 0.9.x and 0.10.x. For more information about broker compatibility options, check the [librdkafka documentation].
//! - Consume from single or multiple topics.
//! - Automatic consumer rebalancing.
//! - Customizable rebalance, with pre and post rebalance callbacks.
//! - Synchronous or asynchronous message production.
//! - Customizable offset commit.
//! - Access to cluster metadata (list of topic-partitions, replicas, active brokers etc.).
//! - Access to group metadata (list groups, list members of groups, hostnames etc.).
//! - Access to producer and consumer metrics, errors and callbacks.
//!
//! [librdkafka documentation]: https://github.com/edenhill/librdkafka/wiki/Broker-version-compatibility
//!
//! ### Client types
//!
//! `rust-rdkafka` provides low level and high level consumers and producers. Low level:
//!
//! * [`BaseConsumer`]: a simple wrapper around the librdkafka consumer. It must be `poll()`ed periodically in order to execute callbacks and rebalances and to receive messages.
//! * [`BaseProducer`]: a simple wrapper around the librdkafka producer. As in the consumer case, the user must call `poll()` periodically to execute delivery callbacks.
//!
//! High level:
//!
//! * [`StreamConsumer`]: returns a [`stream`] of messages and takes care of polling the consumer internally.
//! * [`FutureProducer`]: returns a [`future`] that will be completed once the message is delivered to Kafka (or has failed).
//!
//! [`BaseConsumer`]: https://docs.rs/rdkafka/0.11.0/rdkafka/consumer/base_consumer/struct.BaseConsumer.html
//! [`BaseProducer`]: https://docs.rs/rdkafka/0.11.0/rdkafka/producer/struct.BaseProducer.html
//! [`StreamConsumer`]: https://docs.rs/rdkafka/0.11.0/rdkafka/consumer/stream_consumer/struct.StreamConsumer.html
//! [`FutureProducer`]: https://docs.rs/rdkafka/0.11.0/rdkafka/producer/struct.FutureProducer.html
//! [librdkafka]: https://github.com/edenhill/librdkafka
//! [futures]: https://github.com/alexcrichton/futures-rs
//! [`future`]: https://docs.rs/futures/0.1.3/futures/trait.Future.html
//! [`stream`]: https://docs.rs/futures/0.1.3/futures/stream/trait.Stream.html
//!
//! *Warning*: the library is under active development and the APIs are likely to change.
//!
//! ### Asynchronous data processing with tokio-rs
//!
//! [tokio-rs] is a platform for fast processing of asynchronous events in Rust. The interfaces exposed by the `StreamConsumer` and the `FutureProducer` allow rust-rdkafka users to easily integrate Kafka consumers and producers within the tokio-rs platform, and to write asynchronous message processing code. Note that rust-rdkafka can also be used without tokio-rs.
//!
//! To see rust-rdkafka in action with tokio-rs, check out the [asynchronous processing example] in the examples folder.
//!
//! [tokio-rs]: https://tokio.rs/
//! [asynchronous processing example]: https://github.com/fede1024/rust-rdkafka/blob/master/examples/asynchronous_processing.rs
//!
//! ### At-least-once delivery
//!
//! At-least-once delivery semantics are common in many streaming applications: every message is guaranteed to be processed at least once; in case of temporary failure, the message can be re-processed and/or re-delivered, but no message will be lost.
//!
//! In order to implement at-least-once delivery, the stream processing application has to carefully commit the offset only once the message has been processed. Committing the offset too early might instead cause message loss, since upon recovery the consumer will start from the next message, skipping the one where the failure occurred.
//!
//! To see how to implement at-least-once delivery with `rdkafka`, check out the [at-least-once delivery example] in the examples folder. To learn more about delivery semantics, check the [message delivery semantics] chapter in the Kafka documentation.
//!
//! [at-least-once delivery example]: https://github.com/fede1024/rust-rdkafka/blob/master/examples/at_least_once.rs
//! [message delivery semantics]: https://kafka.apache.org/0101/documentation.html#semantics
//!
//! ## Installation
//!
//! Add this to your `Cargo.toml`:
//!
//! ```toml
//! [dependencies]
//! rdkafka = "^0.11.0"
//! ```
//!
//! This crate will compile librdkafka from sources and link it statically to your executable. To compile librdkafka you'll need:
//!
//! * the GNU toolchain
//! * GNU `make`
//! * `pthreads`
//! * `zlib`
//! * `libssl-dev`: optional, *not* included by default (feature: `ssl`).
//! * `libsasl2-dev`: optional, *not* included by default (feature: `sasl`).
//!
//! To enable SSL and SASL, use the `features` field in `Cargo.toml`. Example:
//!
//! ```toml
//! [dependencies.rdkafka]
//! version = "^0.11.0"
//! features = ["ssl", "sasl"]
//! ```
//!
//! ## Compiling from sources
//!
//! To compile from sources, you'll have to update the submodule containing librdkafka:
//!
//! ```bash
//! git submodule update --init
//! ```
//!
//! and then compile using `cargo`, selecting the features that you want. Example:
//!
//! ```bash
//! cargo build --features "ssl sasl"
//! ```
//!
//! ## Examples
//!
//! You can find examples in the `examples` folder. To run them:
//!
//! ```bash
//! cargo run --example <example_name> -- <example_args>
//! ```
//!
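//! As a taste of the high-level API, the following is a rough, unverified sketch of creating a `StreamConsumer` and iterating over its message stream; the broker address, group id and topic name are placeholders, and the exact signatures may differ in your version of the crate, so refer to the examples folder for working code:
//!
//! ```rust,no_run
//! use rdkafka::config::ClientConfig;
//! use rdkafka::consumer::{Consumer, StreamConsumer};
//! use futures::stream::Stream;
//!
//! // Build a consumer from a configuration (placeholder values).
//! let consumer: StreamConsumer = ClientConfig::new()
//!     .set("group.id", "example_group")
//!     .set("bootstrap.servers", "localhost:9092")
//!     .create()
//!     .expect("Consumer creation failed");
//!
//! // Subscribe to the desired topics, then block on the stream of messages.
//! consumer.subscribe(&["my_topic"]).expect("Subscription failed");
//! for message in consumer.start().wait() {
//!     // Handle polling errors, Kafka errors and successfully received messages here.
//! }
//! ```
//!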
//! ## Tests
//!
//! ### Unit tests
//!
//! The unit tests can run without a Kafka broker present:
//!
//! ```bash
//! cargo test --lib
//! ```
//!
//! ### Automatic testing
//!
//! rust-rdkafka contains a suite of tests which is automatically executed by travis in
//! docker-compose. Given the interaction with C code that rust-rdkafka has to do, tests
//! are executed under valgrind to check for memory errors and leaks.
//!
//! To run the full suite using docker-compose:
//!
//! ```bash
//! ./test_suite.sh
//! ```
//!
//! To run locally, instead:
//!
//! ```bash
//! KAFKA_HOST="kafka_server:9092" cargo test
//! ```
//!
//! In this case, a broker is expected to be running on `KAFKA_HOST`.
//! The broker must be configured with a default partition number of 3 and topic autocreation enabled in order
//! for the tests to succeed.

#[macro_use] extern crate log;
#[macro_use] extern crate serde_derive;
extern crate serde_json;
extern crate futures;
extern crate errno;
extern crate rdkafka_sys as rdsys;

pub use rdsys::types as types;

pub mod client;
pub mod config;
pub mod consumer;
pub mod error;
pub mod groups;
pub mod message;
pub mod metadata;
pub mod producer;
pub mod statistics;
pub mod topic_partition_list;
pub mod util;

// Re-export
pub use client::Context;
pub use message::{Message, Timestamp};
pub use topic_partition_list::{Offset, TopicPartitionList};