pub trait ImageProcessorTrait {
    // Required methods
    fn convert(
        &mut self,
        src: &TensorDyn,
        dst: &mut TensorDyn,
        rotation: Rotation,
        flip: Flip,
        crop: Crop,
    ) -> Result<()>;
    fn draw_decoded_masks(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        segmentation: &[Segmentation],
        overlay: MaskOverlay<'_>,
    ) -> Result<()>;
    fn draw_proto_masks(
        &mut self,
        dst: &mut TensorDyn,
        detect: &[DetectBox],
        proto_data: &ProtoData,
        overlay: MaskOverlay<'_>,
    ) -> Result<()>;
    fn set_class_colors(&mut self, colors: &[[u8; 4]]) -> Result<()>;
}

§Required Methods
fn convert(
    &mut self,
    src: &TensorDyn,
    dst: &mut TensorDyn,
    rotation: Rotation,
    flip: Flip,
    crop: Crop,
) -> Result<()>
Converts the source image to the destination image's format and size. The image is cropped first, then flipped, then rotated.
§Arguments
- dst - The destination image to convert into.
- src - The source image to convert from.
- rotation - The rotation to apply to the destination image.
- flip - The flip to apply to the image.
- crop - An optional rectangle specifying the area to crop from the source image.
§Returns
A Result indicating success or failure of the conversion.
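The documented order (crop, then flip, then rotate) matters: the same operations applied in a different order select different pixels. A minimal sketch with nested `Vec`s standing in for `TensorDyn`; all helper names here are illustrative, not part of this crate:

```rust
// Row-major grid of u8 "pixels" as a stand-in for a real image tensor.
fn crop(img: &[Vec<u8>], x: usize, y: usize, w: usize, h: usize) -> Vec<Vec<u8>> {
    img[y..y + h].iter().map(|row| row[x..x + w].to_vec()).collect()
}

fn flip_h(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    img.iter().map(|row| row.iter().rev().copied().collect()).collect()
}

fn rotate_90_cw(img: &[Vec<u8>]) -> Vec<Vec<u8>> {
    let (h, w) = (img.len(), img[0].len());
    (0..w).map(|x| (0..h).rev().map(|y| img[y][x]).collect()).collect()
}

fn main() {
    let src = vec![
        vec![1, 2, 3],
        vec![4, 5, 6],
    ];
    // Documented pipeline: crop first, then flip, then rotate.
    let out = rotate_90_cw(&flip_h(&crop(&src, 1, 0, 2, 2)));
    assert_eq!(out, vec![vec![6, 3], vec![5, 2]]);
    // Rotating before cropping would read a different region of src.
}
```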
fn draw_decoded_masks(
    &mut self,
    dst: &mut TensorDyn,
    detect: &[DetectBox],
    segmentation: &[Segmentation],
    overlay: MaskOverlay<'_>,
) -> Result<()>
Draw pre-decoded detection boxes and segmentation masks onto dst.
Supports two segmentation modes based on the mask channel count:
- Instance segmentation (C=1): one Segmentation per detection; segmentation and detect are zipped.
- Semantic segmentation (C>1): a single Segmentation covering all classes; only the first element is used.
§Format requirements
- CPU backend: dst must be RGBA or RGB.
- OpenGL backend: dst must be RGBA, BGRA, or RGB.
- G2D backend: only produces the base frame (empty detections); returns NotImplemented when any detection or segmentation is supplied.
§Output contract
This function always fully writes dst — it never relies on the
caller having pre-cleared the destination. The four cases are:
| detections | background | output |
|---|---|---|
| none | none | dst cleared to 0x00000000 |
| none | set | dst ← background |
| set | none | masks drawn over cleared dst |
| set | set | masks drawn over background |
Each backend implements this with its native primitives: G2D uses
g2d_clear / g2d_blit, OpenGL uses glClear / DMA-BUF GPU blit
plus the mask program, and CPU uses direct buffer fill / memcpy as
the terminal fallback. CPU-memcpy of DMA buffers is avoided on the
accelerated paths.
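The four-case contract reduces to simple control flow: build the base frame from the background (a copy) or from nothing (a clear), then draw on top. A toy sketch with flat byte buffers in place of real tensors; the function and marker values are illustrative, not the crate's:

```rust
// dst is always fully written: the base frame is either a copy of the
// background or a buffer cleared to zero, and detections draw over it.
fn render(detections: &[u8], background: Option<&[u8]>, len: usize) -> Vec<u8> {
    let mut dst = match background {
        Some(bg) => bg.to_vec(), // dst ← background (the "blit")
        None => vec![0u8; len],  // dst cleared to 0x00000000
    };
    for (i, &d) in detections.iter().enumerate() {
        dst[i] = d; // stand-in for drawing a mask over the base frame
    }
    dst
}

fn main() {
    let bg = [9u8, 9, 9, 9];
    assert_eq!(render(&[], None, 4), vec![0, 0, 0, 0]);           // none / none
    assert_eq!(render(&[], Some(&bg[..]), 4), vec![9, 9, 9, 9]);  // none / set
    assert_eq!(render(&[7], None, 4), vec![7, 0, 0, 0]);          // set / none
    assert_eq!(render(&[7], Some(&bg[..]), 4), vec![7, 9, 9, 9]); // set / set
}
```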
An empty segmentation slice is valid — only bounding boxes are drawn.
overlay controls compositing: background is the compositing source
(must match dst in size and format); opacity scales mask alpha.
§Buffer aliasing
dst and overlay.background must reference distinct underlying
buffers. An aliased pair returns Error::AliasedBuffers without
dispatching to any backend — the GL path would otherwise read and
write the same texture in a single draw, which is undefined behaviour
on most drivers. Aliasing is detected via
TensorDyn::aliases, which
catches both shared-allocation clones and separate imports over the
same dmabuf fd.
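The shared-allocation half of that check behaves like pointer identity on the underlying storage. A sketch using `Rc::ptr_eq` as a stand-in for `TensorDyn::aliases` (this toy version cannot model the dmabuf-fd case, and the `Tensor` type here is invented for illustration):

```rust
use std::rc::Rc;

// Toy tensor whose storage may be shared between handles; Rc pointer
// identity stands in for TensorDyn::aliases. The real check also catches
// separate imports over the same dmabuf fd, which this sketch cannot model.
struct Tensor {
    storage: Rc<Vec<u8>>,
}

fn aliases(a: &Tensor, b: &Tensor) -> bool {
    Rc::ptr_eq(&a.storage, &b.storage)
}

fn main() {
    let buf = Rc::new(vec![0u8; 16]);
    let dst = Tensor { storage: Rc::clone(&buf) };
    let bg_shared = Tensor { storage: buf }; // clone over the same allocation
    let bg_distinct = Tensor { storage: Rc::new(vec![0u8; 16]) };
    assert!(aliases(&dst, &bg_shared));    // rejected before any backend dispatch
    assert!(!aliases(&dst, &bg_distinct)); // safe to render
}
```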
§Migration from v0.16.3 and earlier
Prior to v0.16.4 the call silently preserved dst’s contents on empty
detections. That invariant no longer holds — dst is always fully
written. Callers who pre-loaded an image into dst before calling this
function must now pass that image via overlay.background instead.
fn draw_proto_masks(
    &mut self,
    dst: &mut TensorDyn,
    detect: &[DetectBox],
    proto_data: &ProtoData,
    overlay: MaskOverlay<'_>,
) -> Result<()>
Draw masks from proto data onto the image (fused decode + draw).
For YOLO segmentation models, this avoids materializing intermediate
Array3<u8> masks. The ProtoData contains mask coefficients and the
prototype tensor; the renderer computes mask_coeff @ protos directly
at the output resolution using bilinear sampling.
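At a single output pixel, the fused decode is a dot product of one detection's coefficient vector with the (bilinearly sampled) prototype values at that pixel, pushed through a sigmoid. A self-contained sketch of that per-pixel arithmetic; the bilinear sampling step is omitted and the names are illustrative:

```rust
// mask(x, y) = sigmoid( mask_coefficients · protos[:, y, x] )
// coeffs: one detection's C coefficients; protos_at_pixel: the C prototype
// channel values sampled at one output pixel.
fn decode_pixel(coeffs: &[f32], protos_at_pixel: &[f32]) -> f32 {
    let logit: f32 = coeffs
        .iter()
        .zip(protos_at_pixel)
        .map(|(c, p)| c * p)
        .sum();
    1.0 / (1.0 + (-logit).exp()) // sigmoid maps the logit into [0, 1] mask alpha
}

fn main() {
    // Coefficients that cancel give logit 0, i.e. a mask value of 0.5.
    assert!((decode_pixel(&[1.0, -1.0], &[1.0, 1.0]) - 0.5).abs() < 1e-6);
    // A strongly positive logit saturates toward 1.
    assert!(decode_pixel(&[1.0], &[10.0]) > 0.99);
}
```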
detect and proto_data.mask_coefficients should have the same length; the two are paired by zip, so excess entries on either side are silently ignored. An empty detect slice is valid and produces the base frame (cleared or background-blitted) via the selected backend's native primitive.
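The zip semantics here are the standard iterator ones: pairing stops at the shorter side, so a length mismatch never panics, it just drops the excess. For instance:

```rust
fn main() {
    let detect = vec!["box0", "box1", "box2"];  // three detections,
    let coeffs = vec![vec![0.1f32], vec![0.2]]; // but only two coefficient sets
    // zip pairs up to the shorter slice; box2 is silently ignored.
    let pairs: Vec<_> = detect.iter().zip(coeffs.iter()).collect();
    assert_eq!(pairs.len(), 2);
}
```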
§Format requirements and output contract
Same as draw_decoded_masks, including
the “always fully writes dst” guarantee across all four
detection/background combinations.
overlay controls compositing — see draw_decoded_masks.