Struct opencv::dnn::SliceLayer
pub struct SliceLayer { /* fields omitted */ }
Slice layer has several modes:
- Caffe mode
Parameters
- axis: Axis of split operation
- slice_point: Array of split points
The number of output blobs equals the number of split points plus one. The first output blob is a slice of the input from 0 to slice_point[0] - 1 along axis, the second is a slice from slice_point[0] to slice_point[1] - 1 along axis, and the last is a slice from slice_point[-1] up to the end of axis (see the sketch after this list).
- TensorFlow mode
- begin: Vector of start indices
- size: Vector of sizes
A more convenient numpy-like slice. The one and only output blob
is the slice input[begin[0]:begin[0]+size[0], begin[1]:begin[1]+size[1], ...]
- Torch mode
- axis: Axis of split operation
Splits the input blob into equal parts along axis.
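The slice arithmetic described above can be illustrated with a short, self-contained Rust sketch (plain Rust with no opencv calls; the helper names are illustrative only): it converts Caffe-style slice points into output ranges along the split axis and applies a TensorFlow-style begin/size slice to a one-dimensional input.

/// Caffe mode: N slice points yield N + 1 half-open output ranges along an axis of length `axis_len`.
fn caffe_slice_ranges(axis_len: usize, slice_points: &[usize]) -> Vec<(usize, usize)> {
    let mut ranges = Vec::with_capacity(slice_points.len() + 1);
    let mut start = 0;
    for &point in slice_points {
        ranges.push((start, point)); // this slice covers [start, point)
        start = point;
    }
    ranges.push((start, axis_len)); // the last slice runs to the end of the axis
    ranges
}

/// TensorFlow mode along a single axis: output = input[begin..begin + size].
fn tf_slice(input: &[i32], begin: usize, size: usize) -> &[i32] {
    &input[begin..begin + size]
}

fn main() {
    // Two slice points on an axis of length 10 produce three output ranges.
    assert_eq!(caffe_slice_ranges(10, &[3, 7]), vec![(0, 3), (3, 7), (7, 10)]);

    // begin = 2, size = 4 selects elements 2..6.
    let data = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
    assert_eq!(tf_slice(&data, 2, 4), &[2, 3, 4, 5]);
}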
Implementations
Trait Implementations
Stores algorithm parameters in a file storage
Simplified API for language bindings. Stores algorithm parameters in a file storage. Read more
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read).
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs). Read more
Returns the algorithm string identifier. This string is used as top level xml/yml node tag when the object is saved to a file or string. Read more
Performs the conversion.
Performs the conversion.
List of learned parameters; they must be stored here so that they can be read using Net::getParam().
Name of the layer instance; it can be used for logging or other internal purposes.
Preferred target for layer forwarding.
fn finalize(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray
) -> Result<()>
Computes and sets internal parameters according to inputs, outputs and blobs. Read more
Deprecated: Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead.
Given the input blobs, computes the output blobs. Read more
fn forward(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray,
internals: &mut dyn ToOutputArray
) -> Result<()>
Given the input blobs, computes the output blobs. Read more
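As a rough usage sketch (assumptions, not confirmed by this page: the opencv crate exposes finalize and forward through a generated LayerTrait importable from opencv::dnn, and Vector<Mat> implements the ToInputArray/ToOutputArray traits used in the signatures above), a layer could be finalized and run on a single input Mat along these lines:

use opencv::core::{Mat, Vector};
use opencv::dnn::LayerTrait; // assumed location of the generated Layer trait
use opencv::Result;

// Finalizes a dnn layer for a single input Mat and returns its output blobs.
fn run_layer(layer: &mut impl LayerTrait, input: Mat) -> Result<Vector<Mat>> {
    let mut inputs = Vector::<Mat>::new();
    inputs.push(input);
    let mut outputs = Vector::<Mat>::new();
    let mut internals = Vector::<Mat>::new();
    // Compute internal parameters for these input shapes, then run the layer.
    layer.finalize(&inputs, &mut outputs)?;
    layer.forward(&inputs, &mut outputs, &mut internals)?;
    Ok(outputs)
}

Typically these methods are driven by the containing Net during inference rather than called directly; invoking them by hand is mainly useful for exercising a single layer.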
fn forward_fallback(
&mut self,
inputs: &dyn ToInputArray,
outputs: &mut dyn ToOutputArray,
internals: &mut dyn ToOutputArray
) -> Result<()>
Given the input blobs, computes the output blobs. Read more
Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
Computes and sets internal parameters according to inputs, outputs and blobs. Read more
Deprecated: Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead.
Computes and sets internal parameters according to inputs, outputs and blobs. Read more
This method will be removed in a future release.
Allocates layer and computes output. Read more
Returns the index of an input blob in the input array. Read more
Returns the index of an output blob in the output array. Read more
Asks the layer whether it supports a specific backend for computations. Read more
fn init_halide(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
Returns Halide backend node. Read more
fn init_inf_engine(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
fn init_ngraph(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>,
nodes: &Vector<Ptr<BackendNode>>
) -> Result<Ptr<BackendNode>>
fn init_vk_com(
&mut self,
inputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
fn init_cuda(
&mut self,
context: *mut c_void,
inputs: &Vector<Ptr<dyn BackendWrapper>>,
outputs: &Vector<Ptr<dyn BackendWrapper>>
) -> Result<Ptr<BackendNode>>
Returns a CUDA backend node. Read more
Implements layer fusing. Read more
Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in a partial case. Read more
Tries to fuse the current layer with the next one. Read more
Detaches all the layers attached to a particular layer.
List of learned parameters; they must be stored here so that they can be read using Net::getParam().
Name of the layer instance; it can be used for logging or other internal purposes.
Preferred target for layer forwarding.
Automatic Halide scheduling based on layer hyper-parameters. Read more
Returns parameters of layers with channel-wise multiplication and addition. Read more