pub trait ANN_MLP: ANN_MLPConst + StatModel {
    fn as_raw_mut_ANN_MLP(&mut self) -> *mut c_void;

    fn set_train_method(&mut self, method: i32, param1: f64, param2: f64) -> Result<()> { ... }
    fn set_activation_function(&mut self, typ: i32, param1: f64, param2: f64) -> Result<()> { ... }
    fn set_layer_sizes(&mut self, _layer_sizes: &dyn ToInputArray) -> Result<()> { ... }
    fn set_term_criteria(&mut self, val: TermCriteria) -> Result<()> { ... }
    fn set_backprop_weight_scale(&mut self, val: f64) -> Result<()> { ... }
    fn set_backprop_momentum_scale(&mut self, val: f64) -> Result<()> { ... }
    fn set_rprop_dw0(&mut self, val: f64) -> Result<()> { ... }
    fn set_rprop_dw_plus(&mut self, val: f64) -> Result<()> { ... }
    fn set_rprop_dw_minus(&mut self, val: f64) -> Result<()> { ... }
    fn set_rprop_dw_min(&mut self, val: f64) -> Result<()> { ... }
    fn set_rprop_dw_max(&mut self, val: f64) -> Result<()> { ... }
    fn set_anneal_initial_t(&mut self, val: f64) -> Result<()> { ... }
    fn set_anneal_final_t(&mut self, val: f64) -> Result<()> { ... }
    fn set_anneal_cooling_ratio(&mut self, val: f64) -> Result<()> { ... }
    fn set_anneal_ite_per_step(&mut self, val: i32) -> Result<()> { ... }
    fn set_anneal_energy_rng(&mut self, rng: &RNG) -> Result<()> { ... }
}

Required Methods

fn as_raw_mut_ANN_MLP(&mut self) -> *mut c_void

Provided Methods

set_train_method

Sets the training method and common parameters.

Parameters
  • method: Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
  • param1: passed to setRpropDW0 for ANN_MLP::RPROP and to setBackpropWeightScale for ANN_MLP::BACKPROP and to initialT for ANN_MLP::ANNEAL.
  • param2: passed to setRpropDWMin for ANN_MLP::RPROP and to setBackpropMomentumScale for ANN_MLP::BACKPROP and to finalT for ANN_MLP::ANNEAL.
C++ default parameters
  • param1: 0
  • param2: 0

set_activation_function

Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.

Parameters
  • type: The type of activation function. See ANN_MLP::ActivationFunctions.
  • param1: The first parameter of the activation function, α. Default value is 0.
  • param2: The second parameter of the activation function, β. Default value is 0.
C++ default parameters
  • param1: 0
  • param2: 0
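A minimal sketch of selecting the symmetric sigmoid (assuming the crate's `ANN_MLP_ActivationFunctions` enum; untested):

```rust
use opencv::{ml, prelude::*};

fn main() -> opencv::Result<()> {
    let mut mlp = ml::ANN_MLP::create()?;
    // SIGMOID_SYM with alpha = beta = 0: OpenCV substitutes its own
    // default sigmoid parameters, per the documented defaults above.
    mlp.set_activation_function(
        ml::ANN_MLP_ActivationFunctions::SIGMOID_SYM as i32,
        0.0,
        0.0,
    )?;
    Ok(())
}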

set_layer_sizes

Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer; the last element specifies the number of elements in the output layer. Default value is empty Mat.

See also

getLayerSizes

set_term_criteria

Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error could change between the iterations to make the algorithm continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).

See also

setTermCriteria getTermCriteria

set_backprop_weight_scale

BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.

See also

setBackpropWeightScale getBackpropWeightScale

set_backprop_momentum_scale

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. The value 0.1 or so is good enough. Default value is 0.1.

See also

setBackpropMomentumScale getBackpropMomentumScale

set_rprop_dw0

RPROP: Initial value Δ0 of update-values Δij. Default value is 0.1.

See also

setRpropDW0 getRpropDW0

set_rprop_dw_plus

RPROP: Increase factor η+. It must be >1. Default value is 1.2.

See also

setRpropDWPlus getRpropDWPlus

set_rprop_dw_minus

RPROP: Decrease factor η−. It must be <1. Default value is 0.5.

See also

setRpropDWMinus getRpropDWMinus

set_rprop_dw_min

RPROP: Update-values lower limit Δmin. It must be positive. Default value is FLT_EPSILON.

See also

setRpropDWMin getRpropDWMin

set_rprop_dw_max

RPROP: Update-values upper limit Δmax. It must be >1. Default value is 50.

See also

setRpropDWMax getRpropDWMax

set_anneal_initial_t

ANNEAL: Update initial temperature. It must be >=0. Default value is 10.

See also

setAnnealInitialT getAnnealInitialT

set_anneal_final_t

ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.

See also

setAnnealFinalT getAnnealFinalT

set_anneal_cooling_ratio

ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.

See also

setAnnealCoolingRatio getAnnealCoolingRatio

set_anneal_ite_per_step

ANNEAL: Update iterations per step. It must be >0. Default value is 10.

See also

setAnnealItePerStep getAnnealItePerStep

set_anneal_energy_rng

Set/initialize the anneal RNG.

Implementations

create

Creates an empty model.

Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

load

Loads and creates a serialized ANN from a file.

Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again, by calling this function with the path to the file.

Parameters
  • filepath: path to serialized ANN

Implementors