# Quickstart
This page shows the shortest path from an empty Rust project to producing and
consuming records with `kafkit-client`.
## Requirements
- Apache Kafka 4.0 or newer.
- A reachable bootstrap listener, for example `localhost:9092`.
- Tokio runtime support in your application.
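If you are unsure whether the bootstrap listener is reachable, a plain TCP probe can rule out networking problems before any client code runs. This assumes the `nc` (netcat) utility is installed; adjust the host and port to match your listener:

```shell
# Probe the bootstrap listener with a bare TCP connect (no Kafka protocol).
# `nc -z` exits 0 when the port accepts connections.
nc -z localhost 9092 && echo "broker reachable" || echo "broker unreachable"
```

A successful connect only proves the port is open; authentication and protocol issues surface later, when the client actually connects.
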

Add the crate and Tokio to your project:
```toml
[dependencies]
kafkit-client = "0.1.9"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
## Produce And Consume
`KafkaClient` is the ergonomic entry point. It stores the bootstrap server and
can be scoped to a topic before creating producers or consumers.
```rust,no_run
use kafkit_client::{AutoOffsetReset, KafkaClient, KafkaMessage, RecordHeader};
#[tokio::main]
async fn main() -> kafkit_client::Result<()> {
    // Scope the client to a topic; producers and consumers inherit it.
    let orders = KafkaClient::new("localhost:9092").topic("orders");

    let producer = orders.producer().connect().await?;
    producer
        .send_message(
            KafkaMessage::new("created".to_owned())
                .with_key("order-42")
                .with_header(RecordHeader::new("trace-id", "abc-123")),
        )
        .await?;
    // Flush so the record has left the client before we start consuming.
    producer.flush().await?;

    let consumer = orders
        .consumer("orders-reader")
        .with_auto_offset_reset(AutoOffsetReset::Earliest)
        .connect()
        .await?;

    let records = consumer.poll().await?;
    for record in records.iter() {
        println!("{}:{}@{}", record.topic, record.partition, record.offset);
    }
    // Commit the polled offsets, then shut both clients down cleanly.
    consumer.commit(&records).await?;

    consumer.shutdown().await?;
    producer.shutdown().await?;
    Ok(())
}
```
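
The example above polls once and exits. A long-running service would typically poll in a loop and commit after each processed batch. A minimal sketch of such a loop, reusing only the calls already shown (the shutdown path and error handling are application-specific and omitted here):

```rust,no_run
use kafkit_client::{AutoOffsetReset, KafkaClient};

#[tokio::main]
async fn main() -> kafkit_client::Result<()> {
    let consumer = KafkaClient::new("localhost:9092")
        .topic("orders")
        .consumer("orders-reader")
        .with_auto_offset_reset(AutoOffsetReset::Earliest)
        .connect()
        .await?;

    loop {
        // Wait for the next batch, then process the records in order.
        let records = consumer.poll().await?;
        for record in records.iter() {
            println!("{}@{}", record.partition, record.offset);
        }
        // Commit only after the whole batch is handled, so a crash
        // causes redelivery rather than lost records.
        consumer.commit(&records).await?;
    }
}
```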
## What To Use Next
- See [Producer](producer.md) for batching, compression, tombstones, and
  transactions.
- See [Consumer](consumer.md) for subscriptions, manual assignment, offset
  lookup, and commits.
- See [Admin](admin.md) to create topics or inspect cluster metadata.
- See [Security](security.md) when connecting to TLS or SASL listeners.
- See [Observability](observability.md) to configure tracing diagnostics.
- See [API Stability](api-stability.md) for the public API surface and the
  SemVer policy.