
Trait BatchProcessor
pub trait BatchProcessor: Send + 'static {
    // Required methods
    fn id(&self) -> StageId;
    fn process(&mut self, items: &mut [BatchEntry]) -> Result<(), StageError>;

    // Provided methods
    fn on_start(&mut self) -> Result<(), StageError> { ... }
    fn on_stop(&mut self) -> Result<(), StageError> { ... }
    fn category(&self) -> StageCategory { ... }
    fn capabilities(&self) -> Option<StageCapabilities> { ... }
}

User-implementable trait for shared batch inference.

A BatchProcessor receives frames from potentially multiple feeds, processes them together (typically via GPU-accelerated inference), and writes per-frame results back into each BatchEntry::output.

§Ownership model

The processor is owned by its coordinator thread (moved in via Box<dyn BatchProcessor>). It is never shared across threads — Sync is not required. The coordinator is the sole caller of every method on the trait.

process() takes &mut self, so the processor can hold mutable state (GPU session handles, scratch buffers, etc.) without interior mutability.

§Lifecycle

  1. on_start() — called once when the batch coordinator starts.
  2. process() — called once per formed batch.
  3. on_stop() — called once when the coordinator shuts down.

§Error handling

If process() returns Err(StageError), the entire batch fails. All feed threads waiting on that batch receive the error and drop their frames (same semantics as a per-feed stage error).

§Output flexibility

StageOutput is the same type used by per-feed stages, so a batch processor is not limited to detection; it can attach any output a per-feed stage could produce.
Post-batch per-feed stages see these outputs through the normal StageContext::artifacts accumulator.

§Example

use nv_perception::batch::{BatchProcessor, BatchEntry};
use nv_perception::{StageId, StageOutput, DetectionSet};
use nv_core::error::StageError;

struct MyDetector { /* model handle */ }

impl BatchProcessor for MyDetector {
    fn id(&self) -> StageId { StageId("my_detector") }

    fn process(&mut self, items: &mut [BatchEntry]) -> Result<(), StageError> {
        for item in items.iter_mut() {
            let pixels = item.frame.require_host_data()
                .map_err(|e| StageError::ProcessingFailed {
                    stage_id: self.id(),
                    detail: e.to_string(),
                })?;
            // ... run model on &*pixels ...
            item.output = Some(StageOutput::with_detections(DetectionSet::empty()));
        }
        Ok(())
    }
}

Required Methods§

fn id(&self) -> StageId

Unique name for this processor (used in provenance, metrics, logging).

fn process(&mut self, items: &mut [BatchEntry]) -> Result<(), StageError>

Process a batch of frames.

For each entry in items, read frame (and optionally view), perform inference, and set output to Some(StageOutput).

The batch may contain frames from multiple feeds. items.len() is bounded by the configured max_batch_size.

Provided Methods§

fn on_start(&mut self) -> Result<(), StageError>

Called once when the batch coordinator starts.

Allocate GPU resources, load models, warm up the runtime here.

fn on_stop(&mut self) -> Result<(), StageError>

Called once when the batch coordinator shuts down.

Release resources here. Best-effort — errors are logged but not fatal.

fn category(&self) -> StageCategory

Optional category hint — defaults to StageCategory::FrameAnalysis.

Aligns with Stage::category() so that validation and metrics treat the batch processor as a pipeline participant, not a foreign construct.

fn capabilities(&self) -> Option<StageCapabilities>

Declare input/output capabilities for pipeline validation.

When provided, the feed pipeline builder can validate that pre-batch stages satisfy the processor’s input requirements and post-batch stages’ inputs are met by the processor’s outputs.

Returns None by default (opts out of validation).

Implementors§