# bevy_event_bus
A Bevy plugin that connects Bevy's event system to external message brokers like Kafka.
## Features
- **Seamless integration with Bevy's event system**
  - Events are simultaneously sent to both Bevy's internal event system and external message brokers
  - Familiar API design following Bevy's conventions (`EventBusReader`/`EventBusWriter`)
- **Automatic event registration**
  - Simply derive `ExternalBusEvent` on your event types
  - Must also derive `Serialize` and `Deserialize` from serde for external broker compatibility
  - No manual registration required
- **Topic-based messaging**
  - Send and receive events on specific topics (auto-subscribe on first read)
  - No manual subscription API required
- **Error handling**
  - Provides detailed error information for connectivity and serialization issues
  - Fire-and-forget behavior available by ignoring the `Result` (e.g., `let _ = writer.write(...)`)
  - Bevy events are fired to describe event bus errors (connection issues, etc.)
- **Backends**
  - Kafka support (with the `kafka` feature)
  - Easily extendable to support other message brokers
## Installation
Add to your Cargo.toml:
```toml
[dependencies]
bevy_event_bus = "0.1"
```
With Kafka support:
```toml
[dependencies]
bevy_event_bus = { version = "0.1", features = ["kafka"] }
```
## Usage
### Define your events
```rust
use bevy::prelude::*;
use bevy_event_bus::prelude::*;
use serde::{Deserialize, Serialize};

// Define an event - no manual registration needed!
// (`PlayerLevelUp` is an illustrative event type, not part of the library)
#[derive(Event, ExternalBusEvent, Serialize, Deserialize, Clone, Debug)]
struct PlayerLevelUp {
    player_id: u64,
    new_level: u32,
}
```
### Set up the plugin
```rust
use bevy::prelude::*;
use bevy_event_bus::prelude::*;

// Sketch: the plugin type name below is illustrative; see "Backend Configuration"
// for constructing the Kafka backend.
fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(EventBusPlugin)
        .run();
}
```
### Send events
```rust
// System that sends events
// (sketch: the exact `EventBusWriter` API is assumed; `PlayerLevelUp` is illustrative)
fn send_level_ups(mut writer: EventBusWriter<PlayerLevelUp>) {
    // Ignoring the Result gives fire-and-forget behavior; failures still
    // surface asynchronously as Bevy error events.
    let _ = writer.write("player-level-ups", PlayerLevelUp { player_id: 1, new_level: 2 });
}
```
### Receive events
```rust
// System that receives events
// (sketch: the exact `EventBusReader` API is assumed; reading auto-subscribes on first use)
fn log_level_ups(mut reader: EventBusReader<PlayerLevelUp>) {
    for event in reader.read("player-level-ups") {
        info!("player {} reached level {}", event.player_id, event.new_level);
    }
}
```
## Error Handling
The event bus provides comprehensive error handling for both sending and receiving events. All errors are reported as Bevy events, allowing you to handle them in your systems.
### Write Errors
When sending events, errors can occur asynchronously and are handled with Bevy's built-in event system:
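A minimal handler might look like this (a sketch; `EventBusError` is an assumed name for the write-side error event type, so substitute the type the crate actually exports):

```rust
use bevy::prelude::*;
use bevy_event_bus::prelude::*;

// React to asynchronous write failures delivered as ordinary Bevy events.
// (`EventBusError` is an assumed type name)
fn handle_write_errors(mut errors: EventReader<EventBusError>) {
    for err in errors.read() {
        warn!("event bus write failed: {err:?}");
    }
}
```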
### Read Errors
When receiving events, deserialization failures are reported as `EventBusDecodeError` events:
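For example (a sketch; only the `EventBusDecodeError` type name comes from the docs above, the handler shape is an assumption):

```rust
use bevy::prelude::*;
use bevy_event_bus::prelude::*;

// Log deserialization failures reported by the event bus.
fn handle_decode_errors(mut errors: EventReader<EventBusDecodeError>) {
    for err in errors.read() {
        warn!("failed to decode incoming message: {err:?}");
    }
}
```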
### Complete Error Handling Setup
Add all error handling systems to your app:
```rust
use bevy::prelude::*;
use bevy_event_bus::prelude::*;

// Sketch: plugin and handler system names are illustrative.
fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_plugins(EventBusPlugin)
        .add_systems(Update, (handle_write_errors, handle_decode_errors))
        .run();
}
```
### Fire-and-Forget Usage
If you don't want to handle errors explicitly, simply don't add error handling systems. The errors will still be logged as warnings but won't affect your application flow:
```rust
// Simple usage without explicit error handling
// (sketch: `PlayerLevelUp` and the writer API are illustrative)
fn fire_and_forget(mut writer: EventBusWriter<PlayerLevelUp>) {
    // Any failure is logged as a warning and otherwise ignored.
    let _ = writer.write("player-level-ups", PlayerLevelUp { player_id: 1, new_level: 2 });
}
```
## Backend Configuration

### Kafka
```rust
use std::collections::HashMap;
use bevy_event_bus::prelude::*;

// Sketch: the exact `KafkaConfig` fields and the backend constructor are assumptions.
let config = KafkaConfig {
    bootstrap_servers: "localhost:9092".to_string(),
    additional_config: HashMap::new(),
    ..Default::default()
};
let kafka_backend = KafkaBackend::new(config);
```
### Auto-Subscription
Reading from a topic automatically subscribes the consumer to that topic on first use.
### Additional Kafka Config Keys
Use `additional_config` to pass through arbitrary librdkafka properties (e.g. security, retries, acks).
Common keys:

- `enable.idempotence=true`
- `message.timeout.ms=5000` (already set on producer)
- `security.protocol=SSL`
- `ssl.ca.location=/path/to/ca.pem`
- `ssl.certificate.location=/path/to/cert.pem`
- `ssl.key.location=/path/to/key.pem`
### Local Development (Docker)
You can spin up a single-node Kafka (KRaft) automatically in tests. The test harness will:

- Try to start `bitnami/kafka:latest` exposing 9092 if `KAFKA_BOOTSTRAP_SERVERS` is not set.
- Poll metadata until the broker is ready.
Manual run:
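For example, a typical single-node KRaft container (the environment values below are the usual Bitnami settings, not taken from this project's harness; adjust for your setup):

```shell
# Start a single-node Kafka broker in KRaft mode on localhost:9092
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_CFG_NODE_ID=0 \
  -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
  -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
  -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092 \
  -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@localhost:9093 \
  bitnami/kafka:latest
```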
Set `KAFKA_BOOTSTRAP_SERVERS` to override (e.g. in CI):
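A sketch of such an override (the address is a placeholder):

```shell
# Point the tests at an already-running broker instead of starting Docker
export KAFKA_BOOTSTRAP_SERVERS=localhost:9092
```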
## Testing Notes
Integration tests use the docker harness or external broker. They generate unique topic names per run to avoid offset collisions.
## Performance Testing
The library includes comprehensive performance tests to measure throughput and latency under various conditions.
### Quick Performance Test
```shell
# Run all performance benchmarks (invocations are illustrative; adjust to the crate's test layout)
cargo test --release --features kafka performance

# Run specific test
cargo test --release --features kafka test_message_throughput

# Results are automatically saved to event_bus_perf_results.csv
```
### Sample Performance Results
```
Test: test_message_throughput         | Send Rate: 76373 msg/s | Receive Rate: 78000 msg/s | Payload: 100 bytes
Test: test_high_volume_small_messages | Send Rate: 73742 msg/s | Receive Rate: 74083 msg/s | Payload: 20 bytes
Test: test_large_message_throughput   | Send Rate: 8234 msg/s  | Receive Rate: 8156 msg/s  | Payload: 10000 bytes
```
The performance tests measure:
- Message throughput (messages per second)
- Data throughput (MB/s)
- End-to-end latency
- System stability under load
Performance results are tracked over time with git commit hashes for regression analysis.