# Traffic Control Roadmap
## Vision
- Deliver a composable suite of middleware layers for Rust services that covers caching, rate limiting, load shedding, and adaptive queueing.
- Make the tooling production-ready out of the box: metrics, observability hooks, predictable defaults, and explicit escape hatches.
- Keep the APIs Tower-native while remaining compatible with Actix and other ecosystems through thin shims.
## Immediate Initiatives
- **Caching Layer**
- Request key extraction strategies (path, headers, custom closures).
- Pluggable storage backends: in-memory (ARC/TinyLFU), Redis/KeyDB, optional write-through behavior.
- Stampede protection and background refresh hooks.
- Configurable TTL, size limits, invalidation triggers.
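A minimal sketch of the in-memory backend's core behavior, using only the standard library. Names (`TtlCache`, `max_entries`) are illustrative, not the planned API; a real backend would use an ARC/TinyLFU eviction policy rather than the oldest-entry eviction shown here.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Illustrative in-memory cache: TTL expiry plus a size cap.
/// (Eviction here is oldest-first for brevity; the roadmap calls
/// for ARC/TinyLFU in the real backend.)
struct TtlCache<V> {
    entries: HashMap<String, (Instant, V)>,
    ttl: Duration,
    max_entries: usize,
}

impl<V: Clone> TtlCache<V> {
    fn new(ttl: Duration, max_entries: usize) -> Self {
        Self { entries: HashMap::new(), ttl, max_entries }
    }

    /// Return a value only if it is still within its TTL.
    fn get(&self, key: &str) -> Option<V> {
        self.entries.get(key).and_then(|(stored, v)| {
            if stored.elapsed() < self.ttl { Some(v.clone()) } else { None }
        })
    }

    /// Insert a value, evicting the stalest entry if the cache is full.
    fn insert(&mut self, key: String, value: V) {
        if self.entries.len() >= self.max_entries {
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (t, _))| *t)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(key, (Instant::now(), value));
    }
}
```

Stampede protection and background refresh would layer on top of this core (e.g., single-flight guards around misses), which is why they are listed as separate hooks above.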
- **Adaptive Queue / Load-Shed Layer**
  - Bounded, fair queue that composes with the rate limiter in the same middleware stack.
- Probabilistic shedding tied to limiter signals (e.g., hot key contention, rejection rate).
- Downstream-aware permits: track concurrency/latency per dependency and throttle accordingly.
  - Configurable rejection responses (429 Too Many Requests vs. 503 Service Unavailable), including Retry-After hints and structured error payloads.
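The probabilistic-shedding idea above can be sketched with an exponentially weighted moving average (EWMA) of the limiter's rejection rate. `ProbabilisticShedder` and its method names are hypothetical; the uniform sample is supplied by the caller (e.g., from a thread-local RNG) so the decision logic stays deterministic and testable.

```rust
/// Illustrative shedder: the shed probability tracks an EWMA of the
/// rate limiter's rejection rate, so upstream pressure translates
/// into early, randomized load shedding here.
struct ProbabilisticShedder {
    rejection_rate: f64, // EWMA of rejections, in [0.0, 1.0]
    alpha: f64,          // EWMA smoothing factor
}

impl ProbabilisticShedder {
    fn new(alpha: f64) -> Self {
        Self { rejection_rate: 0.0, alpha }
    }

    /// Record whether the rate limiter rejected the latest request.
    fn observe(&mut self, rejected: bool) {
        let x = if rejected { 1.0 } else { 0.0 };
        self.rejection_rate = self.alpha * x + (1.0 - self.alpha) * self.rejection_rate;
    }

    /// Decide whether to shed, given a uniform sample in [0, 1)
    /// drawn by the caller.
    fn should_shed(&self, sample: f64) -> bool {
        sample < self.rejection_rate
    }
}
```

Downstream-aware permits would feed additional signals (per-dependency concurrency and latency) into the same decision, but the rejection-rate EWMA is the simplest coupling between the limiter and the shedder.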
## Longer-Term Ideas
- Shared control-plane for policy distribution (config file, remote API).
- SDK integrations for popular frameworks (Axum, Actix, Hyper, Tonic).
- Benchmarks and scenario packs (burst, high-cardinality, distributed deployments).
## Next Steps
- Draft API sketches for `CacheLayer` and `AdaptiveQueueLayer`.
- Evaluate reusable pieces from `tokio-rate-limit` (storage, metrics) to avoid duplication.
- Set up integration examples combining caching + rate limiting + load shedding in a sample Axum service.
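One possible shape for the `CacheLayer` sketch, shown with a local `Layer` trait that mirrors `tower::Layer` so the example compiles without external crates. All names here (`CacheService`, the constructor, the config fields) are assumptions for discussion, not a committed API.

```rust
use std::time::Duration;

// Local stand-in mirroring the shape of tower::Layer, so this
// sketch is self-contained.
trait Layer<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

/// Hypothetical configuration surface for the planned `CacheLayer`.
#[derive(Clone)]
struct CacheLayer {
    ttl: Duration,
    max_entries: usize,
}

impl CacheLayer {
    fn new(ttl: Duration, max_entries: usize) -> Self {
        Self { ttl, max_entries }
    }
}

/// The wrapped service: inner service plus cache configuration.
struct CacheService<S> {
    inner: S,
    ttl: Duration,
    max_entries: usize,
}

impl<S> Layer<S> for CacheLayer {
    type Service = CacheService<S>;

    fn layer(&self, inner: S) -> Self::Service {
        CacheService { inner, ttl: self.ttl, max_entries: self.max_entries }
    }
}
```

Keeping the layer Tower-native means `AdaptiveQueueLayer` can follow the same `Layer`/`Service` pattern, and the Actix/Hyper shims mentioned in the vision only need to adapt this one interface.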