Struct opencv::dnn::ConcatLayer
pub struct ConcatLayer { /* fields omitted */ }
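A minimal usage sketch, not part of the original documentation: building a concatenation layer from LayerParams. ConcatLayer::create mirrors the C++ cv::dnn::ConcatLayer::create factory; the fallible LayerParams::default constructor and the Dict-style set_i64 setter are assumptions about this version of the generated bindings.

use opencv::core::Ptr;
use opencv::dnn::{ConcatLayer, LayerParams};
use opencv::Result;

fn make_concat_layer() -> Result<Ptr<ConcatLayer>> {
    let mut params = LayerParams::default()?;  // assumption: fallible default constructor
    params.set_i64("axis", 1)?;                // assumption: Dict-style setter for the concatenation axis
    ConcatLayer::create(&params)               // factory mirroring the C++ create()
}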
Implementations
Trait Implementations
Stores algorithm parameters in a file storage
Stores algorithm parameters in a file storage (simplified API for language bindings).
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
Returns the algorithm string identifier. This string is used as the top-level xml/yml node tag when the object is saved to a file or string.
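A hedged sketch of the Algorithm-level helpers above, called on a layer instance. It assumes the crate prelude brings the Algorithm trait methods (empty, get_default_name, save) into scope; the output file name is purely illustrative.

use opencv::dnn::ConcatLayer;
use opencv::prelude::*;   // Algorithm trait methods via the prelude
use opencv::Result;

fn dump_layer(layer: &ConcatLayer) -> Result<()> {
    if !layer.empty()? {
        // get_default_name() yields the top-level xml/yml node tag described above.
        println!("saving algorithm: {}", layer.get_default_name()?);
        layer.save("concat_layer.yml")?;   // illustrative file name
    }
    Ok(())
}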
Performs the conversion.
Performs the conversion.
List of learned parameters must be stored here to allow reading them via Net::getParam() (see the sketch below).
Name of the layer instance; can be used for logging or other internal purposes.
Preferred target for layer forwarding.
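The blobs field above is what Net::getParam() reads back. A minimal sketch, assuming the Rust binding Net::get_param accepts the layer index and parameter index as i32 (the exact argument types differ between crate versions):

use opencv::core::Mat;
use opencv::dnn::Net;
use opencv::Result;

fn first_learned_blob(net: &mut Net, layer_id: i32) -> Result<Mat> {
    // num_param = 0 selects the first entry of the layer's blobs list.
    net.get_param(layer_id, 0)
}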
fn finalize(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray
) -> Result<()>
Computes and sets internal parameters according to inputs, outputs and blobs.
Deprecated: use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead.
Given the input blobs, computes the output blobs.
fn forward(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray,
internals: &mut dyn ToOutputArray
) -> Result<()>
Given the input blobs, computes the output blobs.
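A hedged sketch of driving the layer by hand: two NCHW blobs differing only in channel count are concatenated into one output blob. finalize() is called first so the layer can compute output shapes; Mat::new_nd_with_default and the LayerTrait path are assumptions about this version of the generated bindings.

use opencv::core::{Mat, Scalar, Vector, CV_32F};
use opencv::prelude::*;   // Layer trait methods (finalize/forward) via the prelude
use opencv::Result;

fn run_concat(layer: &mut impl LayerTrait) -> Result<Vector<Mat>> {
    let a = Mat::new_nd_with_default(&[1, 3, 8, 8], CV_32F, Scalar::all(1.0))?;
    let b = Mat::new_nd_with_default(&[1, 5, 8, 8], CV_32F, Scalar::all(2.0))?;

    let mut inputs = Vector::<Mat>::new();
    inputs.push(a);
    inputs.push(b);

    let mut outputs = Vector::<Mat>::new();
    let mut internals = Vector::<Mat>::new();

    layer.finalize(&inputs, &mut outputs)?;                 // compute output shapes
    layer.forward(&inputs, &mut outputs, &mut internals)?;  // concatenate along the layer's axis
    Ok(outputs)
}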
fn forward_fallback(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray,
internals: &mut dyn ToOutputArray
) -> Result<()>
Given the input blobs, computes the output blobs.
Deprecated: use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
Computes and sets internal parameters according to inputs, outputs and blobs.
Deprecated: use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
Computes and sets internal parameters according to inputs, outputs and blobs.
This method will be removed in a future release.
Allocates layer and computes output.
Returns the index of the input blob in the input array.
Returns the index of the output blob in the output array.
Asks the layer whether it supports a specific backend for computations.
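A small sketch of the backend query above. DNN_BACKEND_OPENCV is assumed here to be the i32 constant exposed by opencv::dnn in this crate version; newer versions expose a Backend enum instead.

use opencv::prelude::*;
use opencv::Result;

fn runs_on_opencv_backend(layer: &mut impl LayerTrait) -> Result<bool> {
    // assumption: the backend id is passed as a plain i32 constant
    layer.supports_backend(opencv::dnn::DNN_BACKEND_OPENCV)
}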
fn init_halide(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
Returns a Halide backend node.
fn init_inf_engine(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
fn init_ngraph(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>,
nodes: &Vector<Ptr<BackendNode>>
) -> Result<Ptr<BackendNode>>
fn init_vk_com(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
fn init_cuda(
&mut self,
context: *mut c_void,
inputs: &Vector<Ptr<dyn BackendWrapper>>,
outputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
Returns a CUDA backend node.
Implements layer fusing.
Tries to attach the subsequent activation layer to this layer, i.e. performs partial layer fusion.
Tries to fuse the current layer with the next one.
Detaches all layers attached to a particular layer.
List of learned parameters must be stored here to allow reading them via Net::getParam().
Name of the layer instance; can be used for logging or other internal purposes.
Preferred target for layer forwarding.
Automatic Halide scheduling based on layer hyper-parameters.
Returns parameters of layers with channel-wise multiplication and addition.