Shared batch processing trait and entry types.
§Design intent
Most perception stages are per-feed: each feed has its own detector, tracker, classifier, etc. However, inference-heavy stages (object detection, embedding extraction, scene classification) benefit from batching frames across multiple feeds into a single accelerator call.
The BatchProcessor trait captures this pattern. The runtime
collects frames from multiple feeds, dispatches them as a batch, and
routes per-item results back to each feed’s pipeline continuation.
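A minimal sketch of this pattern. The trait and method names `BatchProcessor`, `BatchEntry`, and `process` come from this module, but the field names (`feed_id`, `frame_index`, `score`), the result type, and the exact `process` signature are illustrative assumptions, not the crate's actual API.

```rust
// Hypothetical stand-ins: field names and the `process` signature
// are assumptions for illustration, not the crate's real definitions.

/// Stand-in for the crate's FrameEnvelope type.
#[derive(Clone, Debug)]
struct FrameEnvelope {
    feed_id: u32,
    frame_index: u64,
}

/// An entry in a batch: one frame from one feed.
struct BatchEntry {
    envelope: FrameEnvelope,
}

/// Per-item result routed back to the originating feed's pipeline.
#[derive(Debug, PartialEq)]
struct BatchResult {
    feed_id: u32,
    score: f32,
}

/// User-implementable trait for shared batch inference.
/// Implementors choose their own backend (ONNX, TensorRT, ...).
trait BatchProcessor {
    fn process(&mut self, batch: Vec<BatchEntry>) -> Vec<BatchResult>;
}

/// Toy backend that "scores" frames without a real accelerator call.
struct DummyDetector;

impl BatchProcessor for DummyDetector {
    fn process(&mut self, batch: Vec<BatchEntry>) -> Vec<BatchResult> {
        batch
            .into_iter()
            .map(|entry| BatchResult {
                feed_id: entry.envelope.feed_id,
                score: entry.envelope.frame_index as f32 * 0.1,
            })
            .collect()
    }
}

fn main() {
    // The runtime collects frames from multiple feeds into one batch...
    let batch = vec![
        BatchEntry { envelope: FrameEnvelope { feed_id: 1, frame_index: 10 } },
        BatchEntry { envelope: FrameEnvelope { feed_id: 2, frame_index: 7 } },
    ];
    // ...dispatches them in a single call...
    let mut detector = DummyDetector;
    let results = detector.process(batch);
    // ...and routes each result back to its feed.
    assert_eq!(results[0].feed_id, 1);
    assert_eq!(results[1].feed_id, 2);
}
```

The single `process` call is where a real implementor would run one accelerator invocation over all entries, which is the whole point of batching across feeds.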
§Backend independence
BatchProcessor does not assume ONNX, TensorRT, OpenVINO, or any
specific inference framework. Implementors choose their own backend.
§Temporal window support
Each BatchEntry carries a single FrameEnvelope — the right
granularity for single-frame inference.
For models that operate on temporal windows or clips (e.g., video transformers, clip-based action recognition), the extension path is:
- A per-feed stage assembles the frame window from internal state or the temporal store.
- The assembled window is stored as a typed artifact in
StageOutput::artifacts for a downstream stage, or the batch processor manages its own per-feed window buffers internally.
This keeps the BatchEntry type and coordinator protocol focused on
the single-frame case.
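The second extension path above (a batch processor managing its own per-feed window buffers) might look like the following sketch. The `WindowBuffers` type, the `WINDOW` length, and the `push` API are hypothetical; only `FrameEnvelope` (here a stand-in struct) comes from this module.

```rust
use std::collections::{HashMap, VecDeque};

/// Hypothetical clip length; a real model would dictate this.
const WINDOW: usize = 4;

/// Stand-in for the crate's FrameEnvelope type.
#[derive(Clone, Debug)]
struct FrameEnvelope {
    feed_id: u32,
    frame_index: u64,
}

/// Per-feed sliding windows maintained inside a clip-based batch
/// processor, keeping BatchEntry itself single-frame.
struct WindowBuffers {
    buffers: HashMap<u32, VecDeque<FrameEnvelope>>,
}

impl WindowBuffers {
    fn new() -> Self {
        Self { buffers: HashMap::new() }
    }

    /// Push one frame; return a full window (clip) for that feed once
    /// enough frames have accumulated, sliding by one frame thereafter.
    fn push(&mut self, frame: FrameEnvelope) -> Option<Vec<FrameEnvelope>> {
        let buf = self.buffers.entry(frame.feed_id).or_default();
        buf.push_back(frame);
        if buf.len() > WINDOW {
            buf.pop_front(); // sliding window: drop the oldest frame
        }
        (buf.len() == WINDOW).then(|| buf.iter().cloned().collect())
    }
}

fn main() {
    let mut windows = WindowBuffers::new();
    let mut clips = 0;
    for i in 0..6 {
        let frame = FrameEnvelope { feed_id: 1, frame_index: i };
        if let Some(clip) = windows.push(frame) {
            clips += 1;
            assert_eq!(clip.len(), WINDOW);
        }
    }
    // Frames 0..=3 fill the first window; frames 4 and 5 each slide it.
    assert_eq!(clips, 3);
}
```

Each returned clip would then be handed to the model as one temporal input, while frames from other feeds accumulate independently in their own buffers.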
Structs§
- BatchEntry - An entry in a batch, passed to BatchProcessor::process.
Traits§
- BatchProcessor - User-implementable trait for shared batch inference.