Trait opencv::dnn::prelude::LayerTraitConst

pub trait LayerTraitConst: AlgorithmTraitConst {
    fn as_raw_Layer(&self) -> *const c_void;

    fn blobs(&self) -> Vector<Mat> { ... }
    fn name(&self) -> String { ... }
    fn typ(&self) -> String { ... }
    fn preferable_target(&self) -> i32 { ... }
    fn apply_halide_scheduler(
        &self,
        node: &mut Ptr<BackendNode>,
        inputs: &Vector<Mat>,
        outputs: &Vector<Mat>,
        target_id: i32
    ) -> Result<()> { ... }
    fn get_scale_shift(&self, scale: &mut Mat, shift: &mut Mat) -> Result<()> { ... }
    fn get_scale_zeropoint(
        &self,
        scale: &mut f32,
        zeropoint: &mut i32
    ) -> Result<()> { ... }
    fn get_memory_shapes(
        &self,
        inputs: &Vector<MatShape>,
        required_outputs: i32,
        outputs: &mut Vector<MatShape>,
        internals: &mut Vector<MatShape>
    ) -> Result<bool> { ... }
    fn get_flops(
        &self,
        inputs: &Vector<MatShape>,
        outputs: &Vector<MatShape>
    ) -> Result<i64> { ... }
}

This interface class allows building new Layers, the building blocks of networks.

Each class derived from Layer must implement the allocate() methods to declare its own outputs and forward() to compute them. Also, before using the new layer in networks, you must register it with one of the @ref dnnLayerFactory "LayerFactory" macros.
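
Since these const methods don't run the network, they are handy for static inspection of a layer. Below is a minimal sketch written against the signatures shown above: it asks a layer for its output and internal shapes, then for a FLOP estimate. It assumes MatShape is the Vector<i32> alias the crate uses for OpenCV 4.x, and the 1x3x224x224 input shape is purely illustrative.

use opencv::{core::Vector, dnn::MatShape, prelude::*, Result};

// Query shape and cost information from a layer without allocating any blobs.
fn estimate_cost(layer: &impl LayerTraitConst) -> Result<()> {
    // Hypothetical single NCHW input of shape 1x3x224x224.
    let input: MatShape = Vector::from_iter([1, 3, 224, 224]);
    let mut inputs: Vector<MatShape> = Vector::new();
    inputs.push(input);

    let mut outputs: Vector<MatShape> = Vector::new();
    let mut internals: Vector<MatShape> = Vector::new();
    // Ask the layer to derive output/internal shapes for one required output.
    layer.get_memory_shapes(&inputs, 1, &mut outputs, &mut internals)?;

    // Estimate the floating-point operations needed for these shapes.
    let flops = layer.get_flops(&inputs, &outputs)?;
    println!("{} outputs, {} internals, ~{} FLOPs", outputs.len(), internals.len(), flops);
    Ok(())
}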

Required methods

fn as_raw_Layer(&self) -> *const c_void

Provided methods

fn blobs(&self) -> Vector<Mat>

List of learned parameters. They must be stored here so that they can be read via Net::getParam().

fn name(&self) -> String

Name of the layer instance; can be used for logging or other internal purposes.

fn typ(&self) -> String

Type name that was used to create the layer by the layer factory.

fn preferable_target(&self) -> i32

Preferred target for layer forwarding.
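
Taken together, these accessors let you dump a layer's metadata without touching any data. A minimal sketch; how you obtain the layer in the first place (e.g. through Net::get_layer) varies between crate versions and is left out:

use opencv::prelude::*;

// Print the read-only metadata exposed by LayerTraitConst.
fn dump_layer_info(layer: &impl LayerTraitConst) {
    println!("name:   {}", layer.name());
    println!("type:   {}", layer.typ());
    println!("target: {}", layer.preferable_target());
    println!("learned blobs: {}", layer.blobs().len());
}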

fn apply_halide_scheduler(&self, node: &mut Ptr<BackendNode>, inputs: &Vector<Mat>, outputs: &Vector<Mat>, target_id: i32) -> Result<()>

Automatic Halide scheduling based on layer hyper-parameters.

Parameters
  • node: Backend node with Halide functions.
  • inputs: Blobs that will be used in forward invocations.
  • outputs: Blobs that will be used in forward invocations.
  • target_id: Target identifier.
See also

BackendNode, Target

Layers don't use their own Halide::Func members because layer fusion may have been applied; in that case it is the fused function that should be scheduled.
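
Once a Halide backend node exists, the call itself is simple. The sketch below shows only the call shape: obtaining the Ptr<BackendNode> (created internally by the Halide backend, which must be compiled in) is not shown, and casting Target::DNN_TARGET_CPU to i32 is an assumption about how the binding expects the target id:

use opencv::{core::{Mat, Ptr, Vector}, dnn::{BackendNode, Target}, prelude::*, Result};

// Apply the layer's automatic Halide schedule to a backend node, targeting the CPU.
fn schedule_for_cpu(
    layer: &impl LayerTraitConst,
    node: &mut Ptr<BackendNode>,
    inputs: &Vector<Mat>,
    outputs: &Vector<Mat>,
) -> Result<()> {
    layer.apply_halide_scheduler(node, inputs, outputs, Target::DNN_TARGET_CPU as i32)
}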

fn get_scale_shift(&self, scale: &mut Mat, shift: &mut Mat) -> Result<()>

Returns the parameters of layers with channel-wise multiplication and addition.

Parameters
  • scale:[out] Channel-wise multipliers. The total number of values should equal the number of channels.
  • shift:[out] Channel-wise offsets. The total number of values should equal the number of channels.

Some layers can fuse their transformations with subsequent layers, for example convolution + batch normalization. In that case the base layer uses the weights of the layer that follows it, and the fused layer is skipped. By default, scale and shift are empty, which means the layer has no element-wise multiplications or additions.
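
The empty-by-default behavior suggests a simple usage pattern. A minimal sketch, assuming a crate version where Mat::default() is infallible:

use opencv::{core::Mat, prelude::*, Result};

// Fetch the channel-wise scale/shift a layer exposes for fusion; both Mats
// stay empty when the layer has no such transformation.
fn fused_scale_shift(layer: &impl LayerTraitConst) -> Result<(Mat, Mat)> {
    let mut scale = Mat::default();
    let mut shift = Mat::default();
    layer.get_scale_shift(&mut scale, &mut shift)?;
    Ok((scale, shift))
}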

fn get_scale_zeropoint(&self, scale: &mut f32, zeropoint: &mut i32) -> Result<()>

Returns the scale and zeropoint of the layer.

Parameters
  • scale:[out] Output scale.
  • zeropoint:[out] Output zeropoint.

By default, scale is 1 and zeropoint is 0.
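
A matching sketch for the quantization parameters; for layers without quantization info the documented defaults come back unchanged:

use opencv::{prelude::*, Result};

// Read a layer's output quantization parameters, starting from the
// documented defaults (scale = 1, zeropoint = 0).
fn quantization_params(layer: &impl LayerTraitConst) -> Result<(f32, i32)> {
    let mut scale = 1.0f32;
    let mut zeropoint = 0i32;
    layer.get_scale_zeropoint(&mut scale, &mut zeropoint)?;
    Ok((scale, zeropoint))
}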

Implementors