Webhook Dispatcher (Rust)
An in-process webhook delivery engine with fairness, retries, DLQ, signatures, and pluggable durability.
Quickstart (10 Lines)
A minimal end-to-end flow (type and method names are sketched from the sections below; check the crate docs for the exact API):

use webhook_dispatcher::{Dispatcher, Endpoint, Event};

#[tokio::main]
async fn main() {
    let dispatcher = Dispatcher::new();
    let endpoint = Endpoint::new("https://example.com/hooks", "signing-secret");
    dispatcher.register_endpoint(endpoint).await;
    dispatcher.dispatch(Event::new("user.created", b"{}")).await;
}
Installation
Add to your Cargo.toml:
[dependencies]
webhook-dispatcher = "0.1"
Enable optional features if needed:
[dependencies]
webhook-dispatcher = { version = "0.1", features = ["http", "redis", "postgres", "metrics", "tracing"] }
Production Checklist
- Enable real HTTP delivery: --features http
- Use a durable backend for restarts: Redis or Postgres
- Choose an overflow policy: Block or SpillToStorage
- Set rate limits (endpoint + tenant) to protect downstreams
- Use signature verification on the receiver side
Receiver Verification
use webhook_dispatcher::verify_webhook_request;

// Header names are illustrative; use whatever your endpoint config sends.
let headers = vec![
    ("X-Webhook-Signature", signature_header),
    ("X-Webhook-Timestamp", timestamp_header),
];
verify_webhook_request(&headers, body, secret, max_age_secs)?;
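Under the hood, verification recomputes the HMAC over the payload and compares it against the signature header. That comparison should be constant-time, so response timing does not reveal how many leading bytes matched. A std-only sketch of such a comparison (the HMAC computation itself would come from a vetted crate such as `hmac`; this helper is illustrative, not the crate's API):

```rust
/// Compare two byte strings without short-circuiting on the first mismatch.
/// Accumulates XOR differences so the time taken does not depend on where
/// (or whether) the inputs differ, only on their length.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}

fn main() {
    // Matching signatures compare equal; any byte difference fails.
    println!("{}", constant_time_eq(b"deadbeef", b"deadbeef"));
    println!("{}", constant_time_eq(b"deadbeef", b"deadbeee"));
}
```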
Features
- Fair scheduling with sharded queues
- Retries with jitter + DLQ
- Per-endpoint retry policy overrides
- HMAC signatures and timestamp support
- Per-endpoint rate limiting
- Multi-tenant isolation (tenant id on endpoints/events)
- Pluggable storage (in-memory, Redis, Postgres)
- Metrics and tracing feature flags
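The "retries with jitter" behavior can be pictured as exponential backoff with full jitter: each attempt doubles a capped delay, and the actual sleep is drawn uniformly below that cap so a burst of failures does not retry in lockstep. A std-only sketch of the idea (function names and the base/cap parameters are illustrative, not the crate's API; a real implementation would use a proper RNG):

```rust
/// Exponential delay for a 0-based `attempt`, capped at `cap_ms`.
fn backoff_cap_ms(attempt: u32, base_ms: u64, cap_ms: u64) -> u64 {
    base_ms.saturating_mul(1u64 << attempt.min(20)).min(cap_ms)
}

/// Full jitter: pick a delay uniformly in [0, cap] from a caller-supplied
/// random value, so concurrent failures spread out instead of synchronizing.
fn jittered_delay_ms(attempt: u32, base_ms: u64, cap_ms: u64, rand: u64) -> u64 {
    let cap = backoff_cap_ms(attempt, base_ms, cap_ms);
    if cap == 0 { 0 } else { rand % (cap + 1) }
}

fn main() {
    for attempt in 0..5 {
        let d = jittered_delay_ms(attempt, 500, 30_000, 12_345);
        println!("attempt {attempt}: sleep {d} ms");
    }
}
```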
Optional Features
Durable Storage Backends
In-Memory (default)
Fast and simple, but not durable across restarts.
use webhook_dispatcher::Dispatcher;

let dispatcher = Dispatcher::new();
Redis
use std::sync::Arc;
use webhook_dispatcher::{Dispatcher, RedisStorage};

let client = redis::Client::open("redis://127.0.0.1/")?;
let storage = Arc::new(RedisStorage::new(client));
let dispatcher = Dispatcher::new_with_storage(storage).await;
Postgres
use std::sync::Arc;
use webhook_dispatcher::{Dispatcher, PostgresStorage};

let (client, connection) =
    tokio_postgres::connect("host=localhost user=app dbname=webhooks", tokio_postgres::NoTls).await?;
// The connection task must be polled for the client to make progress.
tokio::spawn(async move {
    if let Err(e) = connection.await {
        eprintln!("postgres connection error: {e}");
    }
});
let storage = Arc::new(PostgresStorage::new(client));
let dispatcher = Dispatcher::new_with_storage(storage).await;
Common Recipes
Set Overflow Policy
use webhook_dispatcher::{DispatcherConfig, OverflowPolicy};

let mut cfg = DispatcherConfig::default();
cfg.overflow_policy = OverflowPolicy::Block;
Per-Endpoint Retry Overrides
use webhook_dispatcher::{Endpoint, RetryPolicy};

// RetryPolicy fields shown here are illustrative.
let endpoint = Endpoint::new("https://example.com/hooks", "signing-secret")
    .with_retry_policy(RetryPolicy { max_retries: 10, ..RetryPolicy::default() });
Tenant Rate Limits
// Cap a tenant at, e.g., 100 deliveries per second (argument shape illustrative).
dispatcher
    .set_tenant_rate_limit("tenant-a", 100)
    .await;
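One common way to implement such a limiter is a token bucket: the bucket refills at the configured rate up to a burst capacity, and each delivery spends one token. The sketch below illustrates the idea only, not the crate's internal implementation; elapsed time is passed in explicitly to keep it deterministic:

```rust
/// A minimal token bucket: `capacity` bounds the burst, `refill_per_sec`
/// is the steady-state rate.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    /// Advance time by `elapsed_secs`, then try to spend one token.
    /// Returns false when the caller should back off (rate exceeded).
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut b = TokenBucket::new(2.0, 1.0); // burst of 2, then 1/sec
    println!("{} {} {}", b.try_acquire(0.0), b.try_acquire(0.0), b.try_acquire(0.0));
}
```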
DLQ Replay
use webhook_dispatcher::IdempotencyKey;

let replayed = dispatcher.replay_dlq_all().await;
let ok = dispatcher
    .replay_dlq_entry(IdempotencyKey::from("evt-123"))
    .await;
Delivery Status Queries
use webhook_dispatcher::IdempotencyKey;

let status = dispatcher.delivery_status(IdempotencyKey::from("evt-123")).await;
CLI/Usage Notes
- This is a library; you call it from your app.
- To test real delivery, enable the http feature and point to a real URL.
Example Configs (Different Scales)
The values below are starting points to tune, not prescriptions.
Small (dev / side-project)
use webhook_dispatcher::DispatcherConfig;

let cfg = DispatcherConfig {
    shard_queue_size: 256,
    max_in_flight: 16,
    ..DispatcherConfig::default()
};
Medium (startup)
use webhook_dispatcher::DispatcherConfig;

let cfg = DispatcherConfig {
    shard_queue_size: 2_048,
    max_in_flight: 128,
    ..DispatcherConfig::default()
};
Large (high throughput)
use webhook_dispatcher::{DispatcherConfig, OverflowPolicy};

let cfg = DispatcherConfig {
    shard_queue_size: 16_384,
    max_in_flight: 1_024,
    overflow_policy: OverflowPolicy::SpillToStorage,
    ..DispatcherConfig::default()
};
Architecture Diagram
┌───────────────┐
│ Dispatcher │
└──────┬────────┘
│ dispatch(event)
┌──────▼────────┐
│ Sharded Queues│ (fair scheduling)
└──────┬────────┘
│
┌──────▼────────┐
│ Scheduler │ (retry + jitter + DLQ)
└──────┬────────┘
│
┌──────▼────────┐
│ Workers │ (rate limit + HTTP)
└──────┬────────┘
│
┌──────▼────────┐
│ Endpoints │
└───────────────┘
Troubleshooting
I’m not seeing webhooks delivered
- Ensure you ran with --features http.
- Check endpoint URL and network access.
- If you see DLQ entries, replay them or check the status.
I’m getting backpressure errors
- Increase shard_queue_size or max_in_flight.
- Use OverflowPolicy::Block for safer behavior.
Retries don’t seem to happen
- Confirm max_retries is set on the endpoint.
- Verify that the failure is retryable (4xx is non-retryable).
Signature verification fails
- Make sure the secret matches.
- Confirm the timestamp is within max_age_secs.
- Validate header names match your endpoint config.
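The max_age_secs check boils down to rejecting timestamps outside a freshness window in either direction: old ones to block replays, and future ones to tolerate only bounded clock skew. A std-only sketch (the symmetric window is an assumption; the crate may treat forward skew differently):

```rust
/// Accept a signed timestamp only if it is within `max_age_secs` of `now_secs`,
/// in either direction. Old timestamps suggest a replayed request; timestamps
/// far in the future suggest a badly skewed sender clock.
fn timestamp_fresh(now_secs: u64, sent_secs: u64, max_age_secs: u64) -> bool {
    if sent_secs > now_secs {
        sent_secs - now_secs <= max_age_secs
    } else {
        now_secs - sent_secs <= max_age_secs
    }
}

fn main() {
    // Sent 300s ago with a 300s window: still fresh.
    println!("{}", timestamp_fresh(1_700_000_300, 1_700_000_000, 300));
}
```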
Redis/Postgres durability not working
- Confirm the feature flag is enabled (redis or postgres).
- Ensure the storage backend is used via new_with_storage.
- Check connection details and permissions.
Prometheus Metrics Example
Add the exporter in your app (not required by the library):
# Cargo.toml
metrics-exporter-prometheus = "0.15"
use metrics_exporter_prometheus::PrometheusBuilder;

// install_recorder() returns a handle; render() it from a /metrics route in your app.
let _handle = PrometheusBuilder::new().install_recorder().unwrap();
Then run with --features metrics and scrape the metrics endpoint exposed by your app.
Notes
- Default mode is in-memory (fast, not durable across restarts).
- Use new_with_storage for durable backends.