pub trait LayerTraitConst: AlgorithmTraitConst {
// Required method
fn as_raw_Layer(&self) -> *const c_void;
// Provided methods
fn blobs(&self) -> Vector<Mat> { ... }
fn name(&self) -> String { ... }
fn typ(&self) -> String { ... }
fn preferable_target(&self) -> i32 { ... }
fn apply_halide_scheduler(
&self,
node: &mut Ptr<BackendNode>,
inputs: &Vector<Mat>,
outputs: &Vector<Mat>,
target_id: i32,
) -> Result<()> { ... }
fn get_scale_shift(
&self,
scale: &mut impl MatTrait,
shift: &mut impl MatTrait,
) -> Result<()> { ... }
fn get_scale_zeropoint(
&self,
scale: &mut f32,
zeropoint: &mut i32,
) -> Result<()> { ... }
fn get_memory_shapes(
&self,
inputs: &Vector<MatShape>,
required_outputs: i32,
outputs: &mut Vector<MatShape>,
internals: &mut Vector<MatShape>,
) -> Result<bool> { ... }
fn get_flops(
&self,
inputs: &Vector<MatShape>,
outputs: &Vector<MatShape>,
) -> Result<i64> { ... }
}
Constant methods for crate::dnn::Layer
Required Methods
fn as_raw_Layer(&self) -> *const c_void
Provided Methods
fn blobs(&self) -> Vector<Mat>
The list of learned parameters must be stored here so that they can be read via Net::getParam().
fn name(&self) -> String
Name of the layer instance; can be used for logging or other internal purposes.
fn preferable_target(&self) -> i32
Preferred target for layer forwarding.
fn apply_halide_scheduler(
    &self,
    node: &mut Ptr<BackendNode>,
    inputs: &Vector<Mat>,
    outputs: &Vector<Mat>,
    target_id: i32,
) -> Result<()>
Automatic Halide scheduling based on layer hyper-parameters.
Parameters
- node: Backend node with Halide functions.
- inputs: Blobs that will be used in forward invocations.
- outputs: Blobs that will be used in forward invocations.
- target_id: Target identifier
See also
BackendNode, Target
Layers do not use their own Halide::Func members because layer fusing may have been applied, in which case it is the fused function that should be scheduled.
fn get_scale_shift(
    &self,
    scale: &mut impl MatTrait,
    shift: &mut impl MatTrait,
) -> Result<()>
Returns parameters of layers with channel-wise multiplication and addition.
Parameters
- scale: [out] Channel-wise multipliers. The total number of values should equal the number of channels.
- shift: [out] Channel-wise offsets. The total number of values should equal the number of channels.

Some layers can fuse their transformations with subsequent layers, for example convolution + batch normalization. In that case the base layer uses weights from the layer that follows it, and the fused layer is skipped. By default, scale and shift are empty, which means the layer performs no element-wise multiplications or additions.
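The channel-wise multiply-add this method reports can be sketched in plain Rust. This is a hypothetical, self-contained illustration (not the opencv crate API): folding batch-normalization statistics into the scale/shift pair that a fused convolution + batch-norm layer would expose.

```rust
// Hypothetical standalone sketch of the scale/shift semantics reported by
// get_scale_shift; not the opencv crate API.

/// Fold batch-norm statistics into a channel-wise scale and shift:
/// scale[c] = gamma[c] / sqrt(var[c] + eps), shift[c] = beta[c] - mean[c] * scale[c].
fn fold_batchnorm(
    mean: &[f32],
    var: &[f32],
    gamma: &[f32],
    beta: &[f32],
    eps: f32,
) -> (Vec<f32>, Vec<f32>) {
    let scale: Vec<f32> = gamma
        .iter()
        .zip(var)
        .map(|(g, v)| g / (v + eps).sqrt())
        .collect();
    let shift: Vec<f32> = beta
        .iter()
        .zip(mean)
        .zip(&scale)
        .map(|((b, m), s)| b - m * s)
        .collect();
    (scale, shift)
}

/// Apply y[c][i] = x[c][i] * scale[c] + shift[c] (one row per channel).
fn apply_scale_shift(x: &[Vec<f32>], scale: &[f32], shift: &[f32]) -> Vec<Vec<f32>> {
    x.iter()
        .enumerate()
        .map(|(c, row)| row.iter().map(|v| v * scale[c] + shift[c]).collect())
        .collect()
}

fn main() {
    // Two channels with different statistics.
    let (scale, shift) = fold_batchnorm(&[0.0, 1.0], &[1.0, 4.0], &[1.0, 2.0], &[0.0, 0.5], 0.0);
    let y = apply_scale_shift(&[vec![1.0, 2.0], vec![3.0, 4.0]], &scale, &shift);
    println!("{:?}", y);
}
```

The point of the sketch is that once the fused pair is reduced to (scale, shift), the whole transformation is a single element-wise multiply-add per channel, which is what downstream backends can exploit.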
fn get_scale_zeropoint(
    &self,
    scale: &mut f32,
    zeropoint: &mut i32,
) -> Result<()>
Returns the scale and zero point of the layer.
Parameters
- scale: [out] Output scale
- zeropoint: [out] Output zero point

By default, scale is 1 and zeropoint is 0.
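A scale/zero-point pair of this kind describes an affine quantization mapping, real = (quantized - zeropoint) * scale. The following is a hypothetical, self-contained sketch of that mapping (the `quantize`/`dequantize` helpers are illustrative names, not part of the opencv crate):

```rust
// Hypothetical sketch of the affine quantization scheme that a layer's
// scale and zero point describe; not the opencv crate API.

/// Map a real value to an i8 code: q = round(x / scale) + zeropoint, clamped.
fn quantize(x: f32, scale: f32, zeropoint: i32) -> i8 {
    let q = (x / scale).round() as i32 + zeropoint;
    q.clamp(i8::MIN as i32, i8::MAX as i32) as i8
}

/// Map an i8 code back to a real value: x = (q - zeropoint) * scale.
fn dequantize(q: i8, scale: f32, zeropoint: i32) -> f32 {
    (q as i32 - zeropoint) as f32 * scale
}

fn main() {
    // The defaults (scale = 1, zeropoint = 0) give an identity mapping
    // for in-range integral values.
    assert_eq!(dequantize(quantize(5.0, 1.0, 0), 1.0, 0), 5.0);

    // A non-trivial pair: scale 0.25 (exactly representable), zeropoint 3.
    let q = quantize(1.25, 0.25, 3);
    println!("q = {}, back = {}", q, dequantize(q, 0.25, 3));
}
```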
fn get_memory_shapes(
    &self,
    inputs: &Vector<MatShape>,
    required_outputs: i32,
    outputs: &mut Vector<MatShape>,
    internals: &mut Vector<MatShape>,
) -> Result<bool>
fn get_flops(
    &self,
    inputs: &Vector<MatShape>,
    outputs: &Vector<MatShape>,
) -> Result<i64>
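The two queries above answer "what shapes will this layer produce?" and "how much arithmetic does it cost?". As a hypothetical, self-contained sketch (not the opencv crate API; `MatShape` is modeled as `Vec<i32>`), here is what those computations look like for a fully connected layer mapping `[batch, in_features]` to `[batch, out_features]`:

```rust
// Hypothetical stand-ins for get_memory_shapes / get_flops on a fully
// connected layer; not the opencv crate API.

type MatShape = Vec<i32>;

/// Shape inference: each [batch, in_features] input yields a
/// [batch, out_features] output.
fn fc_memory_shapes(inputs: &[MatShape], out_features: i32) -> Vec<MatShape> {
    inputs.iter().map(|s| vec![s[0], out_features]).collect()
}

/// FLOP estimate: one multiply and one add per weight per batch element,
/// i.e. batch * in_features * out_features * 2.
fn fc_flops(inputs: &[MatShape], out_features: i32) -> i64 {
    inputs
        .iter()
        .map(|s| s[0] as i64 * s[1] as i64 * out_features as i64 * 2)
        .sum()
}

fn main() {
    let inputs = vec![vec![32, 128]];
    println!("{:?}", fc_memory_shapes(&inputs, 10)); // [[32, 10]]
    println!("{}", fc_flops(&inputs, 10)); // 32 * 128 * 10 * 2 = 81920
}
```

In the real trait, get_memory_shapes additionally reports internal buffer shapes and returns a flag, and get_flops receives the already-inferred output shapes; the sketch only shows the core arithmetic of the two queries.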
Dyn Compatibility
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.