kafkit-client 0.1.9

Kafka 4.0+ pure Rust client.
# kafkit-client examples

These examples are real runnable programs for the `kafkit-client` crate.

## Order Workflow

`order_workflow` creates a topic if needed, produces several order events,
consumes them with a fresh consumer group, commits the consumed offsets, and
shuts the clients down cleanly.

Start Kafka 4.0+ locally, or use the repository compose setup:

```sh
docker compose -f examples/docker-compose.yml up kafka-1 kafka-2 kafka-3
```

Run the example from the repository root:

```sh
cargo run -p kafkit-client --example order_workflow
```

Environment variables:

- `KAFKIT_BOOTSTRAP_SERVERS`, default `localhost:9092`.
- `KAFKIT_TOPIC`, default `kafkit.orders`.
- `KAFKIT_GROUP_ID`, default is a unique group per run.
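Any of the defaults above can be overridden inline when launching the example. The values below are illustrative:

```sh
KAFKIT_BOOTSTRAP_SERVERS=localhost:9092 \
KAFKIT_TOPIC=kafkit.orders \
KAFKIT_GROUP_ID=order-workflow-demo \
cargo run -p kafkit-client --example order_workflow
```

Pinning `KAFKIT_GROUP_ID` like this reuses the same consumer group across runs instead of the default fresh group per run.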

## WebSocket Broadcast

`websocket_broadcast` runs an HTTP/WebSocket server, creates a shared producer,
and starts a background Kafka consumer task. Inbound WebSocket messages are
produced to Kafka. The background consumer polls records, commits their
offsets, serializes each record as JSON, and broadcasts it to every connected
WebSocket subscriber through a `tokio::sync::broadcast` channel.

Run the WebSocket server:

```sh
cargo run -p kafkit-client --example websocket_broadcast
```

Open the browser UI:

```text
http://127.0.0.1:8081
```

Or connect directly to the WebSocket endpoint:

```text
ws://127.0.0.1:8081/ws
```

Send JSON over the WebSocket to produce Kafka records:

```json
{
  "key": "browser-1",
  "value": "{\"source\":\"websocket\",\"status\":\"created\"}",
  "headers": {
    "source": "manual-client"
  }
}
```

Fields:

- `key`: optional record key.
- `value`: optional record value. Omit it or set it to `null` to produce a tombstone.
- `partition`: optional explicit partition.
- `headers`: optional string map of Kafka record headers.
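Combining these fields, a tombstone aimed at an explicit partition (values are illustrative) looks like:

```json
{
  "key": "browser-1",
  "value": null,
  "partition": 0
}
```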

You can also produce records from another terminal with the order workflow
example:

```sh
cargo run -p kafkit-client --example order_workflow
```

Environment variables:

- `KAFKIT_BOOTSTRAP_SERVERS`, default `localhost:9092`.
- `KAFKIT_TOPIC`, default `kafkit.orders`.
- `KAFKIT_GROUP_ID`, default `kafkit-websocket-broadcaster`.
- `KAFKIT_WS_BIND`, default `127.0.0.1:8081`.

## Transactional Order Worker

`transactional_order_worker` is a read-process-write service example. It seeds
order events, consumes them with a fresh group, writes fulfillment requests for
paid orders, writes invalid input records to a dead-letter topic, and commits
the consumed offsets in the same Kafka transaction as the produced output.

```sh
cargo run -p kafkit-client --example transactional_order_worker
```

Environment variables:

- `KAFKIT_BOOTSTRAP_SERVERS`, default `localhost:9092`.
- `KAFKIT_INPUT_TOPIC`, default `kafkit.examples.orders.in`.
- `KAFKIT_OUTPUT_TOPIC`, default `kafkit.examples.fulfillment.out`.
- `KAFKIT_DLQ_TOPIC`, default `kafkit.examples.orders.dlq`.
- `KAFKIT_GROUP_ID`, default is a unique group per run.
- `KAFKIT_TRANSACTIONAL_ID`, default is a unique transactional ID per run.

## Customer Profile Projection

`customer_profile_projection` demonstrates a compacted-topic state pattern. It
creates a compacted customer profile topic, produces upserts and a tombstone,
then manually assigns all topic partitions and rebuilds an in-memory projection
from Kafka records.

```sh
cargo run -p kafkit-client --example customer_profile_projection
```

Environment variables:

- `KAFKIT_BOOTSTRAP_SERVERS`, default `localhost:9092`.
- `KAFKIT_TOPIC`, default `kafkit.examples.customer-profiles`.
- `KAFKIT_RUN_ID`, default is generated per run.

## Cluster Operations Report

`cluster_operations_report` is an operator-style example. It describes the
cluster, creates a small planned topic set, reports partition and config state,
optionally applies topic config changes, attempts a broker log-dir usage report,
and lists visible consumer groups.

```sh
cargo run -p kafkit-client --example cluster_operations_report
```

Environment variables:

- `KAFKIT_BOOTSTRAP_SERVERS`, default `localhost:9092`.
- `KAFKIT_TOPIC_PREFIX`, default `kafkit.ops`.
- `KAFKIT_APPLY_TOPIC_CONFIG`, default `false`. Set to `true` to apply the
  planned topic config changes with an incremental alter-configs request.
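For instance, to run the report and actually apply the planned topic config changes rather than only printing them:

```sh
KAFKIT_APPLY_TOPIC_CONFIG=true \
cargo run -p kafkit-client --example cluster_operations_report
```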