
Trait ImageProcessorTrait 

pub trait ImageProcessorTrait {
    // Required methods
    fn convert(
        &mut self,
        src: &TensorDyn,
        dst: &mut TensorDyn,
        rotation: Rotation,
        flip: Flip,
        crop: Crop,
    ) -> Result<()>;
    fn draw_masks(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        segmentation: &[Segmentation],
    ) -> Result<()>;
    fn draw_masks_proto(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        proto_data: &ProtoData,
    ) -> Result<()>;
    fn decode_masks_atlas(
        &mut self,
        detect: &[DetectBox],
        proto_data: ProtoData,
        output_width: usize,
        output_height: usize,
    ) -> Result<(Vec<u8>, Vec<MaskRegion>)>;
    fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>;
}

Required Methods


fn convert(&mut self, src: &TensorDyn, dst: &mut TensorDyn, rotation: Rotation, flip: Flip, crop: Crop) -> Result<()>

Converts the source image to the destination image's format and size. The crop is applied first, then the flip, then the rotation.

Arguments
  • src - The source image to convert from.
  • dst - The destination image; its format and dimensions define the output.
  • rotation - The rotation to apply to the destination image.
  • flip - The flip to apply to the image.
  • crop - An optional rectangle specifying the area to crop from the source image.
Returns

A Result indicating success or failure of the conversion.
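The crop-then-flip-then-rotate order can be illustrated on a plain `Vec<u8>` grayscale grid. The helpers below are illustrative stand-ins written for this sketch; they do not use TensorDyn or any of the real backends.

```rust
// Hypothetical sketch of convert()'s operation order: crop -> flip -> rotate,
// on a row-major grayscale image stored in a flat Vec<u8>.

fn crop(img: &[u8], w: usize, x: usize, y: usize, cw: usize, ch: usize) -> Vec<u8> {
    // copy a cw x ch window whose top-left corner is (x, y)
    (0..ch)
        .flat_map(|r| img[(y + r) * w + x..(y + r) * w + x + cw].to_vec())
        .collect()
}

fn flip_h(img: &[u8], w: usize, h: usize) -> Vec<u8> {
    // mirror each row left-to-right
    (0..h)
        .flat_map(|r| img[r * w..(r + 1) * w].iter().rev().copied().collect::<Vec<_>>())
        .collect()
}

fn rotate_90_cw(img: &[u8], w: usize, h: usize) -> Vec<u8> {
    // output is h x w; output(row, col) = input(h - 1 - col, row)
    (0..w)
        .flat_map(|r| (0..h).map(|c| img[(h - 1 - c) * w + r]).collect::<Vec<_>>())
        .collect()
}

fn main() {
    // 4x4 source image with pixel values 0..15
    let src: Vec<u8> = (0..16).collect();
    // crop the 2x2 top-left region, then mirror horizontally, then rotate 90° clockwise
    let cropped = crop(&src, 4, 0, 0, 2, 2); // [0, 1, 4, 5]
    let flipped = flip_h(&cropped, 2, 2);    // [1, 0, 5, 4]
    let rotated = rotate_90_cw(&flipped, 2, 2); // [5, 1, 4, 0]
    println!("{:?}", rotated);
}
```

Because the three operations do not commute, applying them in a different order generally produces a different image; the trait fixes the order as crop, flip, rotate.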


fn draw_masks(&mut self, dst: &mut TensorDyn, detect: &[DetectBox], segmentation: &[Segmentation]) -> Result<()>

Draw pre-decoded detection boxes and segmentation masks onto dst.

Supports two segmentation modes based on the mask channel count:

  • Instance segmentation (C=1): one Segmentation per detection, segmentation and detect are zipped.
  • Semantic segmentation (C>1): a single Segmentation covering all classes; only the first element is used.
Format requirements
  • CPU backend: dst must be RGBA or RGB.
  • OpenGL backend: dst must be RGBA, BGRA, or RGB.
  • G2D backend: not implemented (returns NotImplemented).

An empty segmentation slice is valid — only bounding boxes are drawn.
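The mode selection described above can be sketched in plain Rust. The `DetectBox` and `Segmentation` types below are minimal stand-ins with only the field this logic needs (a `channels` count is assumed); they are not the crate's real types.

```rust
// Hypothetical sketch of draw_masks()'s mode dispatch based on the
// segmentation slice and its mask channel count.
struct DetectBox;
struct Segmentation {
    channels: usize, // assumed stand-in for the mask channel count C
}

#[derive(Debug, PartialEq)]
enum Mode {
    Instance,  // C=1: one Segmentation per detection, zipped with detect
    Semantic,  // C>1: a single Segmentation covering all classes
    BoxesOnly, // empty segmentation slice: draw only bounding boxes
}

fn select_mode(_detect: &[DetectBox], segmentation: &[Segmentation]) -> Mode {
    match segmentation.first() {
        None => Mode::BoxesOnly,
        Some(s) if s.channels == 1 => Mode::Instance,
        Some(_) => Mode::Semantic,
    }
}

fn main() {
    let mode = select_mode(&[DetectBox], &[Segmentation { channels: 3 }]);
    println!("{mode:?}");
}
```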


fn draw_masks_proto(&mut self, dst: &mut TensorDyn, detect: &[DetectBox], proto_data: &ProtoData) -> Result<()>

Draw masks from proto data onto image (fused decode+draw).

For YOLO segmentation models, this avoids materializing intermediate Array3<u8> masks. The ProtoData contains mask coefficients and the prototype tensor; the renderer computes mask_coeff @ protos directly at the output resolution using bilinear sampling.

detect and proto_data.mask_coefficients are iterated in lockstep (zipped), so a length mismatch does not error: excess entries in either are silently ignored. For every detection to receive a mask, the two should have the same length. An empty detect slice is valid; the call draws nothing and returns successfully.

Format requirements

Same as draw_masks. G2D returns NotImplemented.
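At a single pixel, the fused decode reduces to a dot product of one detection's coefficients with the prototype channels, pushed through a sigmoid. The sketch below uses assumed flat shapes and omits the bilinear upsampling to output resolution; `decode_pixel` is a hypothetical helper, not part of this trait.

```rust
// Sketch of the per-pixel mask decode: mask(y, x) = sigmoid(coeff . protos[.., y, x]).
// protos holds K prototype channels, each flattened row-major; idx selects a pixel.
fn decode_pixel(coeff: &[f32], protos: &[Vec<f32>], idx: usize) -> f32 {
    let logit: f32 = coeff.iter().zip(protos).map(|(c, p)| c * p[idx]).sum();
    1.0 / (1.0 + (-logit).exp()) // sigmoid
}

fn main() {
    // two 2x2 prototype channels, flattened row-major
    let protos = vec![vec![1.0, -1.0, 0.0, 2.0], vec![0.5, 0.5, -0.5, 0.0]];
    let coeff = [2.0, -1.0]; // one detection's mask coefficients
    let m0 = decode_pixel(&coeff, &protos, 0); // sigmoid(2.0*1.0 - 1.0*0.5)
    println!("pixel 0 mask prob = {m0:.3}, foreground = {}", m0 > 0.5);
}
```

The fused path evaluates this directly at output-resolution pixel positions (sampling the prototypes bilinearly), which is what lets it skip materializing an intermediate Array3<u8> mask per detection.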


fn decode_masks_atlas(&mut self, detect: &[DetectBox], proto_data: ProtoData, output_width: usize, output_height: usize) -> Result<(Vec<u8>, Vec<MaskRegion>)>

Decode masks into a compact atlas buffer.

Used internally by the Python/C decode_masks APIs. The atlas is a compact vertical strip where each detection occupies a strip sized to its padded bounding box (not the full output resolution).

output_width and output_height define the coordinate space in which bounding boxes are interpreted; each mask region is sized to its bounding box, not to the full output resolution. Mask pixels are binary: 255 marks mask presence, 0 background.

Returns (atlas_pixels, regions), where each entry of regions describes a detection's location within the atlas and its bounding box.

G2D backend returns NotImplemented.
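One way the vertical-strip layout could work is to stack bbox-sized regions top to bottom. The `MaskRegion` struct, its fields, and the choice of atlas width below are assumptions made for illustration; the crate's actual layout and struct may differ.

```rust
// Hypothetical sketch of the atlas layout: each detection gets a strip sized
// to its padded bounding box, stacked vertically into one compact buffer.
#[derive(Debug, PartialEq)]
struct MaskRegion {
    y_offset: usize, // where this detection's strip starts in the atlas
    width: usize,    // padded bbox width
    height: usize,   // padded bbox height
}

fn layout_atlas(bbox_sizes: &[(usize, usize)]) -> (usize, usize, Vec<MaskRegion>) {
    // assumed: the atlas is as wide as the widest bbox, regions stacked vertically
    let atlas_width = bbox_sizes.iter().map(|&(w, _)| w).max().unwrap_or(0);
    let mut y = 0;
    let regions: Vec<MaskRegion> = bbox_sizes
        .iter()
        .map(|&(w, h)| {
            let r = MaskRegion { y_offset: y, width: w, height: h };
            y += h; // next region starts below this one
            r
        })
        .collect();
    (atlas_width, y, regions) // (atlas width, total atlas height, per-detection regions)
}

fn main() {
    // two detections with 8x4 and 6x10 padded bboxes
    let (w, h, regions) = layout_atlas(&[(8, 4), (6, 10)]);
    println!("atlas {w}x{h}, regions: {regions:?}");
}
```

The point of the bbox-sized strips is memory: the atlas grows with the total bbox area rather than with detections x full output resolution.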


fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>

Sets the colors used for rendering segmentation masks. Up to 20 colors can be set.

Implementors