Trait ImageProcessorTrait

pub trait ImageProcessorTrait {
    // Required methods
    fn convert(
        &mut self,
        src: &TensorDyn,
        dst: &mut TensorDyn,
        rotation: Rotation,
        flip: Flip,
        crop: Crop,
    ) -> Result<()>;
    fn draw_decoded_masks(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        segmentation: &[Segmentation],
        overlay: MaskOverlay<'_>,
    ) -> Result<()>;
    fn draw_proto_masks(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        proto_data: &ProtoData,
        overlay: MaskOverlay<'_>,
    ) -> Result<()>;
    fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>;
}

Required Methods§


fn convert( &mut self, src: &TensorDyn, dst: &mut TensorDyn, rotation: Rotation, flip: Flip, crop: Crop, ) -> Result<()>

Converts the source image to the destination image's format and size. The image is cropped first, then flipped, then rotated.

§Arguments
  • src - The source image to convert from.
  • dst - The destination image; its format and size determine the conversion target.
  • rotation - The rotation to apply when producing the destination image.
  • flip - The flip to apply when producing the destination image.
  • crop - An optional rectangle specifying the area to crop from the source image.
§Returns

A Result indicating success or failure of the conversion.
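The crop-then-flip-then-rotate order can be made concrete with a stand-alone sketch. The `crop`, `flip_h`, and `rotate_90_cw` helpers below are illustrative stand-ins operating on a tiny row-major grid; they are not the crate's `TensorDyn` pipeline, only a model of the documented operation order:

```rust
// Illustrative stand-ins for the documented pipeline order:
// crop first, then flip, then rotate.
fn crop(img: &[Vec<u8>], x: usize, y: usize, w: usize, h: usize) -> Vec<Vec<u8>> {
    img[y..y + h].iter().map(|r| r[x..x + w].to_vec()).collect()
}

fn flip_h(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    img.iter().map(|r| r.iter().rev().copied().collect()).collect()
}

fn rotate_90_cw(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let (h, w) = (img.len(), img[0].len());
    (0..w).map(|c| (0..h).rev().map(|r| img[r][c]).collect()).collect()
}

fn main() {
    let src = vec![
        vec![1, 2, 3],
        vec![4, 5, 6],
        vec![7, 8, 9],
    ];
    // Crop the top-left 2x2, then flip horizontally, then rotate 90° clockwise.
    let out = rotate_90_cw(&flip_h(&crop(&src, 0, 0, 2, 2)));
    assert_eq!(out, vec![vec![5, 2], vec![4, 1]]);
    println!("{:?}", out);
}
```

Because the three steps do not commute (rotating before cropping would select a different region), callers should pick the `Crop` rectangle in the source image's coordinate system.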


fn draw_decoded_masks( &mut self, dst: &mut TensorDyn, detect: &[DetectBox], segmentation: &[Segmentation], overlay: MaskOverlay<'_>, ) -> Result<()>

Draw pre-decoded detection boxes and segmentation masks onto dst.

Supports two segmentation modes based on the mask channel count:

  • Instance segmentation (C=1): one Segmentation per detection, segmentation and detect are zipped.
  • Semantic segmentation (C>1): a single Segmentation covering all classes; only the first element is used.
§Format requirements
  • CPU backend: dst must be RGBA or RGB.
  • OpenGL backend: dst must be RGBA, BGRA, or RGB.
  • G2D backend: only produces the base frame (empty detections); returns NotImplemented when any detection or segmentation is supplied.
§Output contract

This function always fully writes dst — it never relies on the caller having pre-cleared the destination. The four cases are:

| detections | background | output                        |
|------------|------------|-------------------------------|
| none       | none       | dst cleared to 0x00000000     |
| none       | set        | dst ← background              |
| set        | none       | masks drawn over cleared dst  |
| set        | set        | masks drawn over background   |
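The four-case contract can be restated as a small pure function; the name and return strings below are illustrative, not part of the crate's API:

```rust
// Illustrative restatement of the four-case output contract:
// what ends up in dst for each detections/background combination.
fn output_case(has_detections: bool, has_background: bool) -> &'static str {
    match (has_detections, has_background) {
        (false, false) => "dst cleared to 0x00000000",
        (false, true) => "dst <- background",
        (true, false) => "masks drawn over cleared dst",
        (true, true) => "masks drawn over background",
    }
}

fn main() {
    assert_eq!(output_case(false, false), "dst cleared to 0x00000000");
    assert_eq!(output_case(true, true), "masks drawn over background");
    println!("contract cases verified");
}
```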

Each backend implements this with its native primitives: G2D uses g2d_clear / g2d_blit, OpenGL uses glClear / DMA-BUF GPU blit plus the mask program, and CPU uses direct buffer fill / memcpy as the terminal fallback. CPU-memcpy of DMA buffers is avoided on the accelerated paths.

An empty segmentation slice is valid — only bounding boxes are drawn.

overlay controls compositing: background is the compositing source (must match dst in size and format); opacity scales mask alpha.

§Buffer aliasing

dst and overlay.background must reference distinct underlying buffers. An aliased pair returns Error::AliasedBuffers without dispatching to any backend — the GL path would otherwise read and write the same texture in a single draw, which is undefined behaviour on most drivers. Aliasing is detected via TensorDyn::aliases, which catches both shared-allocation clones and separate imports over the same dmabuf fd.
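The two aliasing cases that `TensorDyn::aliases` is said to catch can be modeled with mock types. Everything below (`MockTensor`, the `aliases` free function) is an illustrative sketch of the documented behavior, not the crate's implementation:

```rust
use std::rc::Rc;

// Illustrative stand-in for a tensor: a shared allocation plus an
// optional imported dmabuf fd.
struct MockTensor {
    data: Rc<Vec<u8>>,      // stand-in for a shared allocation
    dmabuf_fd: Option<i32>, // stand-in for an imported dmabuf fd
}

// Sketch of the documented check: shared-allocation clones alias, and so do
// separate imports over the same dmabuf fd.
fn aliases(a: &MockTensor, b: &MockTensor) -> bool {
    if Rc::ptr_eq(&a.data, &b.data) {
        return true; // clones of the same allocation
    }
    // separate imports over the same dmabuf fd
    matches!((a.dmabuf_fd, b.dmabuf_fd), (Some(x), Some(y)) if x == y)
}

fn main() {
    let buf = Rc::new(vec![0u8; 16]);
    let t1 = MockTensor { data: buf.clone(), dmabuf_fd: None };
    let t2 = MockTensor { data: buf, dmabuf_fd: None };
    let t3 = MockTensor { data: Rc::new(vec![0u8; 16]), dmabuf_fd: Some(7) };
    let t4 = MockTensor { data: Rc::new(vec![0u8; 16]), dmabuf_fd: Some(7) };
    assert!(aliases(&t1, &t2)); // shared allocation
    assert!(aliases(&t3, &t4)); // same dmabuf fd, distinct CPU buffers
    println!("alias checks pass");
}
```

The second case matters because two independently imported `TensorDyn`s can look unrelated on the CPU side while still mapping the same device memory.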

§Migration from v0.16.3 and earlier

Prior to v0.16.4 the call silently preserved dst’s contents on empty detections. That invariant no longer holds — dst is always fully written. Callers who pre-loaded an image into dst before calling this function must now pass that image via overlay.background instead.


fn draw_proto_masks( &mut self, dst: &mut TensorDyn, detect: &[DetectBox], proto_data: &ProtoData, overlay: MaskOverlay<'_>, ) -> Result<()>

Draw masks from proto data onto image (fused decode+draw).

For YOLO segmentation models, this avoids materializing intermediate Array3<u8> masks. The ProtoData contains mask coefficients and the prototype tensor; the renderer computes mask_coeff @ protos directly at the output resolution using bilinear sampling.
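At its core the fused decode is the matrix product `mask_coeff @ protos`. The stand-alone sketch below materializes that product on tiny dimensions to show the shapes involved; the crate's renderer is assumed to instead evaluate it with bilinear sampling at the output resolution, without the intermediate buffer:

```rust
// Illustrative mask decode: coeffs [n_det][n_proto] times
// protos [n_proto][h*w] yields masks [n_det][h*w].
fn decode_masks(coeffs: &[Vec<f32>], protos: &[Vec<f32>]) -> Vec<Vec<f32>> {
    coeffs
        .iter()
        .map(|c| {
            let hw = protos[0].len();
            (0..hw)
                .map(|p| c.iter().zip(protos).map(|(k, row)| k * row[p]).sum())
                .collect()
        })
        .collect()
}

fn main() {
    // 1 detection, 2 prototype channels, a 1x2 prototype grid.
    let coeffs = vec![vec![1.0, 2.0]];
    let protos = vec![vec![0.5, 0.0], vec![0.25, 1.0]];
    let masks = decode_masks(&coeffs, &protos);
    assert_eq!(masks, vec![vec![1.0, 2.0]]);
    println!("{:?}", masks);
}
```

In a real YOLO-style model the decoded values would additionally pass through a sigmoid and a threshold before compositing; that step is omitted here for brevity.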

detect and proto_data.mask_coefficients are paired element-wise by zip, so they should have the same length; excess entries on either side are silently ignored rather than reported. An empty detect slice is valid and produces the base frame (cleared or background-blitted) via the selected backend’s native primitive.
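The tolerance for mismatched lengths follows directly from `Iterator::zip`, which stops at the shorter side. A minimal demonstration with placeholder data:

```rust
fn main() {
    let detect = ["box0", "box1", "box2"];             // 3 detections
    let mask_coefficients = [vec![0.1f32], vec![0.2]]; // only 2 coefficient rows
    // zip stops at the shorter iterator, so the pairing yields 2 elements.
    let paired = detect.iter().zip(mask_coefficients.iter()).count();
    assert_eq!(paired, 2); // "box2" is silently ignored
    println!("{paired} pairs");
}
```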

§Format requirements and output contract

Same as draw_decoded_masks, including the “always fully writes dst” guarantee across all four detection/background combinations.

overlay controls compositing — see draw_decoded_masks.


fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>

Sets the colors used for rendering segmentation masks. Up to 20 colors can be set.
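A caller with more than 20 classes can clamp the palette before handing it over. The `clamp_palette` helper below is illustrative (the trait does not expose the limit programmatically), and the commented-out call sketches the assumed usage:

```rust
// Illustrative helper: truncate an RGBA palette to the documented
// 20-color limit of set_class_colors.
fn clamp_palette(colors: &[[u8; 4]]) -> &[[u8; 4]] {
    &colors[..colors.len().min(20)]
}

fn main() {
    // Generate a 30-entry RGBA gradient, more than the limit allows.
    let palette: Vec<[u8; 4]> = (0..30u32)
        .map(|i| [(i * 8) as u8, 128, 255 - (i * 8) as u8, 255])
        .collect();
    let clamped = clamp_palette(&palette);
    assert_eq!(clamped.len(), 20);
    // processor.set_class_colors(clamped)?; // hypothetical call into the trait
    println!("{} colors", clamped.len());
}
```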

Implementors§