Trait opencv::hub_prelude::ANN_MLP
Artificial Neural Networks - Multi-Layer Perceptrons.
Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on new training data.
Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.
See also
@ref ml_intro_ann
Required methods
pub fn as_raw_ANN_MLP(&self) -> *const c_void
pub fn as_raw_mut_ANN_MLP(&mut self) -> *mut c_void
Provided methods
pub fn set_train_method(
&mut self,
method: i32,
param1: f64,
param2: f64
) -> Result<()>
Sets training method and common parameters.
Parameters
- method: the training method; default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
- param1: passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.
- param2: passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.
C++ default parameters
- param1: 0
- param2: 0
pub fn get_train_method(&self) -> Result<i32>
Returns the current training method.
pub fn set_activation_function(
&mut self,
typ: i32,
param1: f64,
param2: f64
) -> Result<()>
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Parameters
- type: The type of activation function. See ANN_MLP::ActivationFunctions.
- param1: The first parameter of the activation function, α. Default value is 0.
- param2: The second parameter of the activation function, β. Default value is 0.
C++ default parameters
- param1: 0
- param2: 0
pub fn set_layer_sizes(&mut self, _layer_sizes: &dyn ToInputArray) -> Result<()>
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer. Default value is an empty Mat.
See also
getLayerSizes
pub fn get_layer_sizes(&self) -> Result<Mat>
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element specifies the number of elements in the output layer.
See also
setLayerSizes
pub fn get_term_criteria(&self) -> Result<TermCriteria>
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).
See also
setTermCriteria
pub fn set_term_criteria(&mut self, val: TermCriteria) -> Result<()>
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).
See also
setTermCriteria getTermCriteria
pub fn get_backprop_weight_scale(&self) -> Result<f64>
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.
See also
setBackpropWeightScale
pub fn set_backprop_weight_scale(&mut self, val: f64) -> Result<()>
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.
See also
setBackpropWeightScale getBackpropWeightScale
pub fn get_backprop_momentum_scale(&self) -> Result<f64>
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1.
See also
setBackpropMomentumScale
pub fn set_backprop_momentum_scale(&mut self, val: f64) -> Result<()>
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1.
See also
setBackpropMomentumScale getBackpropMomentumScale
pub fn get_rprop_dw0(&self) -> Result<f64>
RPROP: Initial value Δ0 of update-values. Default value is 0.1.
See also
setRpropDW0
pub fn set_rprop_dw0(&mut self, val: f64) -> Result<()>
RPROP: Initial value Δ0 of update-values. Default value is 0.1.
See also
setRpropDW0 getRpropDW0
pub fn get_rprop_dw_plus(&self) -> Result<f64>
pub fn set_rprop_dw_plus(&mut self, val: f64) -> Result<()>
RPROP: Increase factor η⁺. It must be >1. Default value is 1.2.
See also
setRpropDWPlus getRpropDWPlus
pub fn get_rprop_dw_minus(&self) -> Result<f64>
pub fn set_rprop_dw_minus(&mut self, val: f64) -> Result<()>
RPROP: Decrease factor η⁻. It must be <1. Default value is 0.5.
See also
setRpropDWMinus getRpropDWMinus
pub fn get_rprop_dw_min(&self) -> Result<f64>
RPROP: Update-values lower limit Δmin. It must be positive. Default value is FLT_EPSILON.
See also
setRpropDWMin
pub fn set_rprop_dw_min(&mut self, val: f64) -> Result<()>
RPROP: Update-values lower limit Δmin. It must be positive. Default value is FLT_EPSILON.
See also
setRpropDWMin getRpropDWMin
pub fn get_rprop_dw_max(&self) -> Result<f64>
pub fn set_rprop_dw_max(&mut self, val: f64) -> Result<()>
RPROP: Update-values upper limit Δmax. It must be >1. Default value is 50.
See also
setRpropDWMax getRpropDWMax
pub fn get_anneal_initial_t(&self) -> Result<f64>
pub fn set_anneal_initial_t(&mut self, val: f64) -> Result<()>
ANNEAL: Update initial temperature. It must be >=0. Default value is 10.
See also
setAnnealInitialT getAnnealInitialT
pub fn get_anneal_final_t(&self) -> Result<f64>
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.
See also
setAnnealFinalT
pub fn set_anneal_final_t(&mut self, val: f64) -> Result<()>
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.
See also
setAnnealFinalT getAnnealFinalT
pub fn get_anneal_cooling_ratio(&self) -> Result<f64>
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.
See also
setAnnealCoolingRatio
pub fn set_anneal_cooling_ratio(&mut self, val: f64) -> Result<()>
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.
See also
setAnnealCoolingRatio getAnnealCoolingRatio
pub fn get_anneal_ite_per_step(&self) -> Result<i32>
ANNEAL: Update iteration per step. It must be >0. Default value is 10.
See also
setAnnealItePerStep
pub fn set_anneal_ite_per_step(&mut self, val: i32) -> Result<()>
ANNEAL: Update iteration per step. It must be >0. Default value is 10.
See also
setAnnealItePerStep getAnnealItePerStep
pub fn set_anneal_energy_rng(&mut self, rng: &RNG) -> Result<()>
Set/initialize the anneal RNG.
pub fn get_weights(&self, layer_idx: i32) -> Result<Mat>
Implementations
impl<'_> dyn ANN_MLP + '_
pub fn create() -> Result<Ptr<dyn ANN_MLP>>
Creates an empty model.
Use StatModel::train to train the model, or Algorithm::load<ANN_MLP>(filename) to load a pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.
pub fn load(filepath: &str) -> Result<Ptr<dyn ANN_MLP>>
Loads and creates a serialized ANN from a file
Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again, by calling this function with the path to the file.
Parameters
- filepath: path to serialized ANN