# kafkit-client

A native async Rust client for Apache Kafka 4.0 and newer.
kafkit-client is built on `tokio` and `kafka-protocol`. It exposes producer,
consumer, share-group consumer, and admin APIs without wrapping librdkafka or
the Java client.
## Status
This crate targets modern Kafka clusters only:
- Apache Kafka 4.0+.
- KRaft clusters.
- Modern consumer groups using KIP-848.
- Transaction protocol v2.
- Share-group protocol support.
Classic consumer groups, ZooKeeper-era assumptions, and older broker protocol paths are intentionally out of scope.
## Install

```toml
[dependencies]
kafkit-client = "0.1.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
## Quick Start

Use `KafkaClient` when you want a compact, topic-scoped setup for application
code. A minimal sketch (`connect`, `send`, and the exact record shapes are
assumptions; `topic(...)`, `poll()`, and `shutdown()` are described below):

```rust
use kafkit_client::KafkaClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to a local broker and scope the client to one topic.
    let client = KafkaClient::connect("localhost:9092").await?;
    let topic = client.topic("events");

    // Produce a record, then read a batch back.
    topic.send("key", "hello from kafkit").await?;
    for record in topic.poll().await? {
        println!("{record:?}");
    }

    // Let background tasks drain and close connections cleanly.
    client.shutdown().await?;
    Ok(())
}
```
## Producer

Use `KafkaProducer` and `ProducerConfig` directly when you need lower-level
control over batching, compression, idempotence, transactions, or explicit
topic/partition routing. A sketch (constructor and builder-method names are
assumptions; `flush()` and `shutdown()` are described below):

```rust
use kafkit_client::{KafkaProducer, ProducerConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Config knobs mirror the feature list (linger, batching, idempotence);
    // the exact field and method names are assumptions.
    let config = ProducerConfig::new("localhost:9092")
        .linger_ms(5)
        .idempotence(true);
    let producer = KafkaProducer::connect(config).await?;

    // Route explicitly to a topic.
    producer.send("events", b"key", b"payload").await?;

    // Wait for all buffered records to be acknowledged, then shut down.
    producer.flush().await?;
    producer.shutdown().await?;
    Ok(())
}
```
## Consumer

Consumers use Kafka's modern group protocol. The ergonomic builder subscribes
to the topic selected by `KafkaClient::topic(...)`; the lower-level sketch
below uses `KafkaConsumer` directly (`connect`, `subscribe`, and `commit`
names are assumptions; `poll()` is described in the operational notes):

```rust
use kafkit_client::{KafkaConsumer, ConsumerConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Group and subscription settings; exact builder names are assumptions.
    let config = ConsumerConfig::new("localhost:9092").group_id("my-service");
    let consumer = KafkaConsumer::connect(config).await?;
    consumer.subscribe(&["events"]).await?;

    loop {
        // poll() returns a batch of records.
        for record in consumer.poll().await? {
            println!("{record:?}");
        }
        // Commit the records we have processed.
        consumer.commit().await?;
    }
}
```
## Admin

The admin client covers common cluster and topic operations. A sketch (method
names and argument shapes are assumptions based on the operations listed in
the feature highlights):

```rust
use kafkit_client::{KafkaAdmin, AdminConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let admin = KafkaAdmin::connect(AdminConfig::new("localhost:9092")).await?;

    // Create a topic: name, partition count, replication factor.
    admin.create_topic("events", 6, 3).await?;

    // List existing topics.
    for topic in admin.list_topics().await? {
        println!("{topic}");
    }

    admin.shutdown().await?;
    Ok(())
}
```
## Security

TLS and SASL are configured on the same builder/config types used for plain
connections. A sketch (the builder-method names are assumptions; the supported
mechanisms are listed below):

```rust
use kafkit_client::KafkaClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // TLS with a custom CA file plus SASL/SCRAM-SHA-256 authentication;
    // exact method names are assumptions.
    let client = KafkaClient::builder("broker.example.com:9093")
        .tls_with_ca_file("ca.pem")
        .sasl_scram_sha_256("user", "secret")
        .connect()
        .await?;

    // ... use the client as in the Quick Start ...
    client.shutdown().await?;
    Ok(())
}
```
Supported security features:
- TLS with system roots, custom CA files, server-name override, and client certs.
- SASL/PLAIN.
- SASL/SCRAM-SHA-256.
- SASL/SCRAM-SHA-512.
## Feature Highlights

- Async producer, consumer, share-group consumer, and admin client.
- Topic-scoped `KafkaClient` facade for concise application setup.
- Lower-level `KafkaProducer`, `KafkaConsumer`, `KafkaShareConsumer`, and `KafkaAdmin` APIs for explicit configuration.
- Record headers, nullable values, tombstones, and explicit record timestamps.
- Configurable compression, linger, batching, retry backoff, request timeout, and delivery timeout.
- Producer `flush()` and explicit `shutdown()` controls.
- Transactional producer support, including `send_offsets_to_transaction`.
- Manual assignment, topic-list subscription, and pattern subscription.
- Seek, position, pause, resume, committed-offset lookup, beginning/end offset lookup, and timestamp offset lookup.
- Topic create/list/describe/delete/config APIs, consumer group description, broker metadata lookup, and cluster description.
- `tracing` instrumentation.
## Choosing an API

- Use `KafkaClient` for service code that mostly works with one topic at a time.
- Use `KafkaProducer` with `ProducerConfig` for explicit partitioning, batching, compression, idempotence, or transactions.
- Use `KafkaConsumer` with `ConsumerConfig` for direct control over group, assignment, fetch, offset, and commit behavior.
- Use `KafkaShareConsumer` for Kafka share-group consumption.
- Use `KafkaAdmin` with `AdminConfig` for cluster and topic management.
## Operational Notes

- Call `shutdown().await` on producers and consumers when your application is stopping so background tasks can drain and close connections cleanly.
- Call `flush().await` on producers when you need all currently buffered records acknowledged before continuing.
- Consumers return batches from `poll().await`; commit the records you have processed, or configure auto-commit if that matches your processing model.
- Transactional producers must set a transactional id, call `init_transactions().await`, then use `begin_transaction()`, `commit_transaction()`, or `abort_transaction()`.
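The transactional flow above can be sketched end to end. The transaction calls
are the ones listed in these notes; the constructor, `transactional_id`
builder method, and `send` signature are assumptions:

```rust
use kafkit_client::{KafkaProducer, ProducerConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // A transactional id is required before init_transactions();
    // `new` and `transactional_id` are assumed builder names.
    let config = ProducerConfig::new("localhost:9092")
        .transactional_id("payments-service-1");
    let producer = KafkaProducer::connect(config).await?;

    // Fences older instances using the same transactional id.
    producer.init_transactions().await?;

    producer.begin_transaction()?;
    match producer.send("payments", b"order-42", b"captured").await {
        // Records sent since begin_transaction() commit atomically.
        Ok(_) => producer.commit_transaction().await?,
        // On failure, none of them become visible to read_committed consumers.
        Err(_) => producer.abort_transaction().await?,
    }

    producer.shutdown().await?;
    Ok(())
}
```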
## Runnable Examples

The crate includes real programs under `examples/`. With a local Kafka 4.0+
broker on `localhost:9092`, run them with `cargo run --example <name>`.

See `examples/README.md` for environment variables and Docker Compose notes.