Trait ImageProcessorTrait 

pub trait ImageProcessorTrait {
    // Required methods
    fn convert(
        &mut self,
        src: &TensorImage,
        dst: &mut TensorImage,
        rotation: Rotation,
        flip: Flip,
        crop: Crop,
    ) -> Result<()>;
    fn convert_ref(
        &mut self,
        src: &TensorImage,
        dst: &mut TensorImageRef<'_>,
        rotation: Rotation,
        flip: Flip,
        crop: Crop,
    ) -> Result<()>;
    fn draw_masks(
        &mut self,
        dst: &mut TensorImage,
        detect: &[DetectBox],
        segmentation: &[Segmentation],
    ) -> Result<()>;
    fn draw_masks_proto(
        &mut self,
        dst: &mut TensorImage,
        detect: &[DetectBox],
        proto_data: &ProtoData,
    ) -> Result<()>;
    fn decode_masks_atlas(
        &mut self,
        detect: &[DetectBox],
        proto_data: ProtoData,
        output_width: usize,
        output_height: usize,
    ) -> Result<(Vec<u8>, Vec<MaskRegion>)>;
    fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>;
}

Required Methods§


fn convert( &mut self, src: &TensorImage, dst: &mut TensorImage, rotation: Rotation, flip: Flip, crop: Crop, ) -> Result<()>

Converts the source image into the destination image's format and size. Operations are applied in order: the image is cropped first, then flipped, then rotated.

§Arguments
  • src - The source image to convert from.
  • dst - The destination image; its format and dimensions define the output.
  • rotation - The rotation to apply to the destination image.
  • flip - The flip to apply to the destination image.
  • crop - An optional rectangle specifying the area to crop from the source image.
§Returns

A Result indicating success or failure of the conversion.
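The documented operation order (crop, then flip, then rotate) can be sketched on plain pixel grids. This is an illustrative standalone example, not the crate's implementation; the crate's `TensorImage`, `Rotation`, `Flip`, and `Crop` types are replaced here by simple `Vec` grids and free functions.

```rust
// Illustrative sketch of the documented operation order: crop, then flip,
// then rotate. Types here are plain grids, not the crate's TensorImage.

fn crop(img: &[Vec<u8>], x: usize, y: usize, w: usize, h: usize) -> Vec<Vec<u8>> {
    img[y..y + h].iter().map(|row| row[x..x + w].to_vec()).collect()
}

fn flip_h(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    img.iter().map(|row| row.iter().rev().cloned().collect()).collect()
}

fn rotate_90_cw(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let (h, w) = (img.len(), img[0].len());
    // Column c of the input becomes row c of the output, read bottom-up.
    (0..w).map(|c| (0..h).rev().map(|r| img[r][c]).collect()).collect()
}

fn main() {
    let src = vec![
        vec![1, 2, 3],
        vec![4, 5, 6],
        vec![7, 8, 9],
    ];
    // Crop the top-left 2x2, mirror it, then rotate clockwise.
    let out = rotate_90_cw(&flip_h(&crop(&src, 0, 0, 2, 2)));
    println!("{:?}", out); // [[5, 2], [4, 1]]
}
```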


fn convert_ref( &mut self, src: &TensorImage, dst: &mut TensorImageRef<'_>, rotation: Rotation, flip: Flip, crop: Crop, ) -> Result<()>

Converts the source image to a borrowed destination tensor for zero-copy preprocessing.

This variant accepts a TensorImageRef as the destination, enabling direct writes into external buffers (e.g., model input tensors) without intermediate copies.

§Arguments
  • src - The source image to convert from.
  • dst - A borrowed tensor image wrapping the destination buffer.
  • rotation - The rotation to apply to the destination image.
  • flip - The flip to apply to the destination image.
  • crop - An optional rectangle specifying the area to crop from the source image.
§Returns

A Result indicating success or failure of the conversion.
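The zero-copy idea can be illustrated with a minimal borrowed view. The `ImageRef` struct and `fill_gray` function below are hypothetical stand-ins for `TensorImageRef` (whose actual fields are not shown in this doc); the point is only that writes land directly in the caller's buffer.

```rust
// Hypothetical sketch of the zero-copy idea behind TensorImageRef: a view
// that borrows an external buffer (e.g. a model's input tensor) so the
// converter can write pixels into it directly, with no intermediate copy.
struct ImageRef<'a> {
    data: &'a mut [u8], // borrowed, not owned
    width: usize,
    height: usize,
}

fn fill_gray(dst: &mut ImageRef<'_>, value: u8) {
    // Writes go straight into the caller's buffer through the borrow.
    dst.data[..dst.width * dst.height].fill(value);
}

fn main() {
    // Pretend this is the inference engine's pre-allocated input tensor.
    let mut model_input = vec![0u8; 4 * 4];
    let mut view = ImageRef { data: &mut model_input, width: 4, height: 4 };
    fill_gray(&mut view, 128);
    // The external buffer was updated in place; no copy was made.
    assert!(model_input.iter().all(|&p| p == 128));
}
```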


fn draw_masks( &mut self, dst: &mut TensorImage, detect: &[DetectBox], segmentation: &[Segmentation], ) -> Result<()>

Draws pre-decoded segmentation masks onto the destination image.


fn draw_masks_proto( &mut self, dst: &mut TensorImage, detect: &[DetectBox], proto_data: &ProtoData, ) -> Result<()>

Draws masks decoded from proto data onto the destination image (fused decode and draw).

For YOLO segmentation models, this avoids materializing intermediate Array3<u8> masks. The ProtoData contains mask coefficients and the prototype tensor; the renderer computes mask_coeff @ protos directly.
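The `mask_coeff @ protos` step can be sketched as a plain dot product over prototype planes followed by a sigmoid, as is standard for YOLO-style mask decoding. The dimensions, names, and layout below are illustrative assumptions, not the crate's actual `ProtoData` representation.

```rust
// Illustrative decode of one detection's mask from proto data:
// mask = sigmoid(coeffs · protos). Names and layout are examples only.

fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

/// protos: one plane of h*w values per prototype; coeffs: one value per
/// prototype for this detection box.
fn decode_mask(coeffs: &[f32], protos: &[Vec<f32>], hw: usize) -> Vec<f32> {
    let mut mask = vec![0.0f32; hw];
    for (c, plane) in coeffs.iter().zip(protos) {
        for (m, p) in mask.iter_mut().zip(plane) {
            *m += c * p; // accumulate mask_coeff @ protos
        }
    }
    // Squash logits to [0, 1]; callers would then threshold (e.g. > 0.5).
    mask.iter().map(|&v| sigmoid(v)).collect()
}

fn main() {
    let protos = vec![vec![1.0, -1.0], vec![0.5, 0.5]];
    let coeffs = [2.0, -2.0];
    // Logits: 2*1 + (-2)*0.5 = 1.0 ; 2*(-1) + (-2)*0.5 = -3.0
    let mask = decode_mask(&coeffs, &protos, 2);
    println!("{:?}", mask);
}
```

Fusing this with drawing means each decoded value is written to the output pixel immediately instead of being stored in an intermediate mask array first.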


fn decode_masks_atlas( &mut self, detect: &[DetectBox], proto_data: ProtoData, output_width: usize, output_height: usize, ) -> Result<(Vec<u8>, Vec<MaskRegion>)>

Decodes masks into a compact atlas buffer (internal; used by decode_masks).

The atlas is a compact vertical strip where each detection occupies a strip sized to its padded bounding box (not the full output resolution).

Returns (atlas_pixels, regions) where regions describes each detection’s location and bbox within the atlas.
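The vertical-strip layout can be sketched as a running offset over per-detection box sizes. `Region` below is a stand-in for the crate's `MaskRegion`, whose actual fields are not documented here; padding is omitted for brevity.

```rust
// Sketch of the atlas layout described above: each detection occupies a
// strip sized to its (padded) bounding box, stacked vertically in one
// buffer. `Region` is a stand-in for the crate's MaskRegion.

#[derive(Debug, PartialEq)]
struct Region {
    y_offset: usize, // where this detection's strip starts in the atlas
    width: usize,
    height: usize,
}

/// boxes: (width, height) of each detection's padded bounding box.
fn layout_atlas(boxes: &[(usize, usize)]) -> (usize, Vec<Region>) {
    let mut y = 0;
    let regions = boxes
        .iter()
        .map(|&(w, h)| {
            let r = Region { y_offset: y, width: w, height: h };
            y += h; // strips stack vertically, so the atlas grows per box
            r
        })
        .collect();
    (y, regions) // total atlas height and per-detection regions
}

fn main() {
    // Two detections with padded boxes 8x4 and 6x3: the atlas is only
    // 7 rows tall, far smaller than two full-resolution masks.
    let (height, regions) = layout_atlas(&[(8, 4), (6, 3)]);
    assert_eq!(height, 7);
    assert_eq!(regions[1].y_offset, 4);
}
```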


fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>

Sets the colors used for rendering segmentation masks. Up to 17 colors can be set.
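A fixed-size RGBA palette like this is typically indexed per class. The wrap-around (modulo) policy below is an assumption for illustration only; the docs state the 17-color cap but not how out-of-range class ids are handled.

```rust
// Hedged sketch of mapping class ids onto a capped RGBA palette.
// The modulo wrap-around is an assumption, not confirmed by the docs.
fn color_for_class(palette: &[[u8; 4]], class_id: usize) -> [u8; 4] {
    palette[class_id % palette.len()]
}

fn main() {
    // Three RGBA colors (well under the documented cap of 17).
    let palette = [[255, 0, 0, 255], [0, 255, 0, 255], [0, 0, 255, 255]];
    // class id 4 wraps to palette index 1 under the assumed policy.
    assert_eq!(color_for_class(&palette, 4), [0, 255, 0, 255]);
}
```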

Implementors§