# tower-batch
A Tower middleware that buffers requests and flushes them in batches. Use it when the downstream system is more efficient with bulk writes – databases, message brokers, object stores, etc. The middleware collects individual requests as `BatchControl::Item(R)` and, once the buffer reaches a maximum size or a maximum duration elapses, signals the inner service with `BatchControl::Flush` so it can process the accumulated batch.
## Quick start
Add the dependency to your `Cargo.toml`:

```toml
[dependencies]
tower-batch = "0.2.0"
```
Create a batch service and start sending requests:

```rust
use std::time::Duration;
use tower_batch::Batch;

// `my_service` implements `Service<BatchControl<MyRequest>>`.
// Limits here are illustrative: flush after 32 buffered items
// or 100 ms, whichever comes first.
let batch = Batch::new(my_service, 32, Duration::from_millis(100));
```
If you prefer the Tower layer pattern:

```rust
use std::time::Duration;
use tower_batch::BatchLayer;

// Same illustrative limits as above; the layer wraps whichever
// service it is applied to.
let layer = BatchLayer::new(32, Duration::from_millis(100));
```
## How it works
Your inner service must implement `Service<BatchControl<R>>`, where `R` is the request type. The middleware sends two kinds of calls:

- `BatchControl::Item(request)` – buffer this request. Typically, you just push it onto a `Vec` and return `Ok(())`.
- `BatchControl::Flush` – process everything you have buffered, then return the result.
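The two variants boil down to a simple match. This standalone sketch uses plain functions rather than a real `tower::Service` impl, redefines `BatchControl` locally so it compiles on its own, and the `InsertWorker` name and row type are hypothetical:

```rust
// `BatchControl` is redefined here so the sketch is self-contained;
// in practice you would import it from tower-batch.
enum BatchControl<R> {
    Item(R),
    Flush,
}

// Hypothetical inner service: buffers rows, "writes" them on flush.
struct InsertWorker {
    buffer: Vec<u32>,
}

impl InsertWorker {
    // Stand-in for `Service::call`; returns how many rows were written.
    fn call(&mut self, req: BatchControl<u32>) -> usize {
        match req {
            // Item: just buffer the request; nothing is written yet.
            BatchControl::Item(row) => {
                self.buffer.push(row);
                0
            }
            // Flush: drain the buffer and process it in one bulk operation.
            BatchControl::Flush => {
                let batch: Vec<u32> = self.buffer.drain(..).collect();
                // A real service would issue a single bulk INSERT here.
                batch.len()
            }
        }
    }
}

fn main() {
    let mut worker = InsertWorker { buffer: Vec::new() };
    worker.call(BatchControl::Item(1));
    worker.call(BatchControl::Item(2));
    assert_eq!(worker.call(BatchControl::Flush), 2);
}
```

The key point is that `Item` should be cheap and infallible where possible, deferring all real work (and errors) to `Flush`.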
`Batch::new` spawns a background worker that owns the inner service. It forwards each incoming request as an `Item` and triggers a `Flush` when the batch is full or the timer fires. `Batch` handles are cheap to clone – each clone shares the same worker, so you can hand them to multiple tasks.
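The handle/worker split can be mimicked with a plain channel: cloned senders stand in for cloned `Batch` handles, while a single worker thread owns the buffering state. Everything below (names, the batch size of 2) is illustrative, and the real worker is an async task rather than a thread:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the batch worker: one thread owns the
// buffer, any number of cloned senders feed it. Returns total items
// flushed.
fn run_demo() -> usize {
    let (tx, rx) = mpsc::channel::<u32>();

    let worker = thread::spawn(move || {
        let mut buffer = Vec::new();
        let mut flushed = 0;
        for item in rx {
            buffer.push(item);
            // Flush when the batch is "full" (size 2 for the demo).
            if buffer.len() == 2 {
                flushed += buffer.drain(..).count();
            }
        }
        // All senders dropped: flush whatever remains.
        flushed + buffer.len()
    });

    // Cheap clones, handed to independent tasks.
    let tx2 = tx.clone();
    let t1 = thread::spawn(move || tx.send(1).unwrap());
    let t2 = thread::spawn(move || tx2.send(2).unwrap());
    t1.join().unwrap();
    t2.join().unwrap();

    worker.join().unwrap()
}

fn main() {
    assert_eq!(run_demo(), 2);
}
```

Because the worker is the only owner of the inner service, no locking is needed around the buffer: all mutation happens on one task.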
## Examples
See the `examples/` directory:

- `sqlite_batch` – batch-insert rows into an in-memory SQLite database using the `rarray` virtual table.
Run an example with:

```sh
cargo run --example sqlite_batch
```
## License
This project is licensed under the MIT license.
## Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in tower-batch by you, shall be licensed as MIT, without any additional terms or conditions.