Trait opencv::dnn::prelude::LayerTrait

pub trait LayerTrait: AlgorithmTrait {
fn as_raw_Layer(&self) -> *const c_void;
fn as_raw_mut_Layer(&mut self) -> *mut c_void;
fn blobs(&mut self) -> Vector<Mat> { ... }
fn set_blobs(&mut self, val: Vector<Mat>) { ... }
fn name(&self) -> String { ... }
fn set_name(&mut self, val: &str) { ... }
fn typ(&self) -> String { ... }
fn set_type(&mut self, val: &str) { ... }
fn preferable_target(&self) -> i32 { ... }
fn set_preferable_target(&mut self, val: i32) { ... }
fn finalize(
        &mut self,
        inputs: &dyn ToInputArray,
        outputs: &mut dyn ToOutputArray
    ) -> Result<()> { ... }
fn forward_mat(
        &mut self,
        input: &mut Vector<Mat>,
        output: &mut Vector<Mat>,
        internals: &mut Vector<Mat>
    ) -> Result<()> { ... }
fn forward(
        &mut self,
        inputs: &dyn ToInputArray,
        outputs: &mut dyn ToOutputArray,
        internals: &mut dyn ToOutputArray
    ) -> Result<()> { ... }
fn forward_fallback(
        &mut self,
        inputs: &dyn ToInputArray,
        outputs: &mut dyn ToOutputArray,
        internals: &mut dyn ToOutputArray
    ) -> Result<()> { ... }
fn finalize_mat_to(
        &mut self,
        inputs: &Vector<Mat>,
        outputs: &mut Vector<Mat>
    ) -> Result<()> { ... }
fn finalize_mat(&mut self, inputs: &Vector<Mat>) -> Result<Vector<Mat>> { ... }
fn run(
        &mut self,
        inputs: &Vector<Mat>,
        outputs: &mut Vector<Mat>,
        internals: &mut Vector<Mat>
    ) -> Result<()> { ... }
fn input_name_to_index(&mut self, input_name: &str) -> Result<i32> { ... }
fn output_name_to_index(&mut self, output_name: &str) -> Result<i32> { ... }
fn support_backend(&mut self, backend_id: i32) -> Result<bool> { ... }
fn init_halide(
        &mut self,
        inputs: &Vector<Ptr<dyn BackendWrapper>>
    ) -> Result<Ptr<BackendNode>> { ... }
fn init_inf_engine(
        &mut self,
        inputs: &Vector<Ptr<dyn BackendWrapper>>
    ) -> Result<Ptr<BackendNode>> { ... }
fn init_ngraph(
        &mut self,
        inputs: &Vector<Ptr<dyn BackendWrapper>>,
        nodes: &Vector<Ptr<BackendNode>>
    ) -> Result<Ptr<BackendNode>> { ... }
fn init_vk_com(
        &mut self,
        inputs: &Vector<Ptr<dyn BackendWrapper>>
    ) -> Result<Ptr<BackendNode>> { ... }
fn init_cuda(
        &mut self,
        context: *mut c_void,
        inputs: &Vector<Ptr<dyn BackendWrapper>>,
        outputs: &Vector<Ptr<dyn BackendWrapper>>
    ) -> Result<Ptr<BackendNode>> { ... }
fn apply_halide_scheduler(
        &self,
        node: &mut Ptr<BackendNode>,
        inputs: &Vector<Mat>,
        outputs: &Vector<Mat>,
        target_id: i32
    ) -> Result<()> { ... }
fn try_attach(
        &mut self,
        node: &Ptr<BackendNode>
    ) -> Result<Ptr<BackendNode>> { ... }
fn set_activation(
        &mut self,
        layer: &Ptr<dyn ActivationLayer>
    ) -> Result<bool> { ... }
fn try_fuse(&mut self, top: &mut Ptr<Layer>) -> Result<bool> { ... }
fn get_scale_shift(&self, scale: &mut Mat, shift: &mut Mat) -> Result<()> { ... }
fn unset_attached(&mut self) -> Result<()> { ... }
fn get_memory_shapes(
        &self,
        inputs: &Vector<MatShape>,
        required_outputs: i32,
        outputs: &mut Vector<MatShape>,
        internals: &mut Vector<MatShape>
    ) -> Result<bool> { ... }
fn get_flops(
        &self,
        inputs: &Vector<MatShape>,
        outputs: &Vector<MatShape>
    ) -> Result<i64> { ... }
fn update_memory_shapes(
        &mut self,
        inputs: &Vector<MatShape>
    ) -> Result<bool> { ... }
fn set_params_from(&mut self, params: &LayerParams) -> Result<()> { ... }
}

This interface class allows building new Layers, which are the building blocks of networks.

Each class derived from Layer must implement the allocate() methods to declare its own outputs and forward() to compute them. Also, before using the new layer in a network you must register it by using one of the @ref dnnLayerFactory “LayerFactory” macros.
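
The shape-declaring half of that contract is exposed here through the provided get_memory_shapes() and get_flops() methods. Below is a minimal sketch (not from the upstream documentation) of querying them from Rust; it assumes MatShape is the Vector<i32> alias used by the dnn module, and the layer value itself is obtained elsewhere (e.g. from a loaded Net).

    use opencv::core::Vector;
    use opencv::dnn::prelude::*;
    use opencv::dnn::MatShape; // assumed: the Vector<i32> alias for the C++ MatShape
    use opencv::Result;

    // Hypothetical helper: ask a layer what it would produce for one input shape
    // and roughly how much work that costs.
    fn inspect_layer(layer: &impl LayerTrait, input_shape: &[i32]) -> Result<()> {
        let mut in_shape = MatShape::new();
        for &d in input_shape {
            in_shape.push(d);
        }
        let mut in_shapes: Vector<MatShape> = Vector::new();
        in_shapes.push(in_shape);

        let mut out_shapes: Vector<MatShape> = Vector::new();
        let mut int_shapes: Vector<MatShape> = Vector::new();
        layer.get_memory_shapes(&in_shapes, 1, &mut out_shapes, &mut int_shapes)?;
        let flops = layer.get_flops(&in_shapes, &out_shapes)?;

        println!(
            "{} output blob(s), {} internal blob(s), ~{} FLOPs",
            out_shapes.len(),
            int_shapes.len(),
            flops
        );
        Ok(())
    }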

Required methods

Provided methods

blobs() / set_blobs(): List of learned parameters; these must be stored here so they can be read via Net::getParam().

name() / set_name(): Name of the layer instance; can be used for logging or other internal purposes.

typ() / set_type(): Type name that was used to create the layer by the layer factory.

preferable_target() / set_preferable_target(): Preferred target for layer forwarding.
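
As a quick illustration, here is a hypothetical helper that prints these properties for any layer; note that blobs() takes &mut self in this binding, hence the mutable borrow.

    use opencv::dnn::prelude::*;

    // Hypothetical helper: print a one-line summary of a layer via the accessors above.
    fn describe_layer(layer: &mut impl LayerTrait) {
        println!(
            "layer '{}' (type '{}'): {} learned blob(s), preferable target {}",
            layer.name(),
            layer.typ(),
            layer.blobs().len(),
            layer.preferable_target(),
        );
    }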

Computes and sets internal parameters according to inputs, outputs and blobs.

Parameters

  • inputs: vector of already allocated input blobs
  • outputs:[out] vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

👎 Deprecated:

Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead

Given the input blobs, computes the output blobs.

Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead

Parameters

  • input: the input blobs.
  • output:[out] allocated output blobs, which will store results of the computation.
  • internals:[out] allocated internal blobs

Given the input blobs, computes the output blobs.

Parameters

  • inputs: the input blobs.
  • outputs:[out] allocated output blobs, which will store results of the computation.
  • internals:[out] allocated internal blobs
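
A minimal sketch of this calling convention from Rust, assuming Vector<Mat> implements ToInputArray/ToOutputArray in this crate version; the layer and the pre-allocated input blobs come from elsewhere.

    use opencv::core::{Mat, Vector};
    use opencv::dnn::prelude::*;
    use opencv::Result;

    // Hypothetical helper: finalize a layer for the given pre-allocated input
    // blobs, then compute its outputs; `outputs` and `internals` play the role
    // of the [out] parameters described above.
    fn forward_layer(layer: &mut impl LayerTrait, inputs: &Vector<Mat>) -> Result<Vector<Mat>> {
        let mut outputs: Vector<Mat> = Vector::new();
        let mut internals: Vector<Mat> = Vector::new();
        layer.finalize(inputs, &mut outputs)?;
        layer.forward(inputs, &mut outputs, &mut internals)?;
        Ok(outputs)
    }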

Given the input blobs, computes the output blobs.

Parameters

  • inputs: the input blobs.
  • outputs:[out] allocated output blobs, which will store results of the computation.
  • internals:[out] allocated internal blobs
👎 Deprecated:

Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

Computes and sets internal parameters according to inputs, outputs and blobs.

Parameters

  • inputs: vector of already allocated input blobs
  • outputs:[out] vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

Overloaded parameters

Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

👎 Deprecated:

Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

Computes and sets internal parameters according to inputs, outputs and blobs.

Parameters

  • inputs: vector of already allocated input blobs
  • outputs:[out] vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inferencing.

Overloaded parameters

Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

👎 Deprecated:

This method will be removed in a future release.

Allocates layer and computes output.

Deprecated: This method will be removed in a future release.

Returns the index of the input blob in the input array.

Parameters

  • inputName: label of input blob

Each layer input and output can be labeled to easily identify them, using the “<layer_name>[.output_name]” notation. This method maps the label of an input blob to its index in the input vector.

Returns the index of the output blob in the output array.

See also

inputNameToIndex()
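
For illustration, a hypothetical helper that resolves two labels to indices; “data” and “prob” are placeholder names, and the real labels depend on the imported model.

    use opencv::dnn::prelude::*;
    use opencv::Result;

    // Hypothetical: map placeholder labels to positions in the layer's
    // input and output vectors.
    fn resolve_io(layer: &mut impl LayerTrait) -> Result<(i32, i32)> {
        let input_idx = layer.input_name_to_index("data")?;
        let output_idx = layer.output_name_to_index("prob")?;
        Ok((input_idx, output_idx))
    }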

Asks the layer whether it supports a specific backend for doing computations.

Parameters

  • backendId: computation backend identifier.

See also

Backend
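
A sketch of choosing a backend with this method, assuming the DNN_BACKEND_* values are exposed as i32 constants in opencv::dnn (matching the i32 backend_id parameter above).

    use opencv::dnn;
    use opencv::dnn::prelude::*;
    use opencv::Result;

    // Hypothetical helper: prefer the CUDA backend when the layer supports it,
    // otherwise fall back to the default OpenCV implementation.
    fn pick_backend(layer: &mut impl LayerTrait) -> Result<i32> {
        if layer.support_backend(dnn::DNN_BACKEND_CUDA)? {
            Ok(dnn::DNN_BACKEND_CUDA)
        } else {
            Ok(dnn::DNN_BACKEND_OPENCV)
        }
    }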

Returns Halide backend node.

Parameters

  • inputs: Input Halide buffers.

See also

BackendNode, BackendWrapper

Input buffers should be exactly the same ones that will be used in forward invocations. Although a Halide::ImageParam could be created from the input shape alone, using the actual buffers helps prevent some memory management issues (if something is wrong, the Halide tests will fail).

Returns a CUDA backend node

Parameters

  • context: void pointer to CSLContext object
  • inputs: layer inputs
  • outputs: layer outputs

Automatic Halide scheduling based on layer hyper-parameters.

Parameters

  • node: Backend node with Halide functions.
  • inputs: Blobs that will be used in forward invocations.
  • outputs: Blobs that will be used in forward invocations.
  • targetId: Target identifier

See also

BackendNode, Target

Layer don’t use own Halide::Func members because we can have applied layers fusing. In this way the fused function should be scheduled.

Implements layer fusing.

Parameters

  • node: Backend node of bottom layer.

See also

BackendNode

Relevant for graph-based backends. If the layer is attached successfully, returns a non-empty cv::Ptr to a node of the same backend. Fusing is done only over the last function.

Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in a partial case.

Parameters

  • layer: The subsequent activation layer.

Returns true if the activation layer has been attached successfully.

Tries to fuse the current layer with the next one.

Parameters

  • top: Next layer to be fused.

Returns

True if fusion was performed.
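
A minimal sketch of driving this from caller code; bottom is any layer and top is a Ptr<Layer> obtained elsewhere.

    use opencv::core::Ptr;
    use opencv::dnn::{prelude::*, Layer};
    use opencv::Result;

    // Hypothetical helper: ask `bottom` to absorb `top`; `true` means the fusion
    // took place and `top` can be skipped during forwarding.
    fn fuse_pair(bottom: &mut impl LayerTrait, top: &mut Ptr<Layer>) -> Result<bool> {
        bottom.try_fuse(top)
    }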

Returns parameters of layers with channel-wise multiplication and addition.

Parameters

  • scale:[out] Channel-wise multipliers. Total number of values should be equal to number of channels.
  • shift:[out] Channel-wise offsets. Total number of values should be equal to number of channels.

Some layers can fuse their transformations with those of further layers, for example convolution + batch normalization. In that case the base layer uses weights from the layer after it, and the fused layer is skipped. By default, scale and shift are empty, which means the layer has no element-wise multiplications or additions.
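
A minimal sketch of reading the fused transform from Rust; the caller supplies the two output Mats, which are left empty when the layer has nothing to report.

    use opencv::core::Mat;
    use opencv::dnn::prelude::*;
    use opencv::Result;

    // Hypothetical helper: read the fused channel-wise transform out of a layer.
    // Pass two freshly constructed Mats; if the layer has no element-wise
    // multiplication or addition they are left empty, as noted above.
    fn query_scale_shift(layer: &impl LayerTrait, scale: &mut Mat, shift: &mut Mat) -> Result<()> {
        layer.get_scale_shift(scale, shift)
    }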

“Detaches” all the layers attached to the particular layer.

Implementors