Trait opencv::hub_prelude::ANN_MLPConst

pub trait ANN_MLPConst: StatModelConst {
    fn as_raw_ANN_MLP(&self) -> *const c_void;

    fn get_train_method(&self) -> Result<i32> { ... }
    fn get_layer_sizes(&self) -> Result<Mat> { ... }
    fn get_term_criteria(&self) -> Result<TermCriteria> { ... }
    fn get_backprop_weight_scale(&self) -> Result<f64> { ... }
    fn get_backprop_momentum_scale(&self) -> Result<f64> { ... }
    fn get_rprop_dw0(&self) -> Result<f64> { ... }
    fn get_rprop_dw_plus(&self) -> Result<f64> { ... }
    fn get_rprop_dw_minus(&self) -> Result<f64> { ... }
    fn get_rprop_dw_min(&self) -> Result<f64> { ... }
    fn get_rprop_dw_max(&self) -> Result<f64> { ... }
    fn get_anneal_initial_t(&self) -> Result<f64> { ... }
    fn get_anneal_final_t(&self) -> Result<f64> { ... }
    fn get_anneal_cooling_ratio(&self) -> Result<f64> { ... }
    fn get_anneal_ite_per_step(&self) -> Result<i32> { ... }
    fn get_weights(&self, layer_idx: i32) -> Result<Mat> { ... }
}

Artificial Neural Networks - Multi-Layer Perceptrons.

Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data.
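A minimal sketch of that workflow with this crate; the create, set_layer_sizes, and Mat::from_slice calls are assumptions based on the crate's ml and core modules rather than anything documented on this page:

    use opencv::{core::Mat, ml::ANN_MLP, prelude::*, Result};

    fn build() -> Result<()> {
        // Step 1: create the network; all weights start at zero.
        let mut mlp = <dyn ANN_MLP>::create()?;
        // Topology: 3 inputs, one hidden layer of 5 neurons, 1 output.
        let layers = Mat::from_slice(&[3i32, 5, 1])?;
        mlp.set_layer_sizes(&layers)?;
        // Step 2: train via StatModel::train with a TrainData set; training
        // can be repeated later on new data to adjust the weights further.
        Ok(())
    }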

Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

See also

@ref ml_intro_ann

Required methods

fn as_raw_ANN_MLP(&self) -> *const c_void

Provided methods

fn get_train_method(&self) -> Result<i32>

Returns the current training method.

fn get_layer_sizes(&self) -> Result<Mat>

Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer; the last element specifies the number of elements in the output layer.

See also

setLayerSizes
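A read-back sketch, assuming the returned Mat is a one-row or one-column integer matrix and that the Mat accessors (rows, cols, at) behave as elsewhere in the crate:

    use opencv::{prelude::*, Result};

    fn layer_info(mlp: &impl ANN_MLPConst) -> Result<()> {
        let sizes = mlp.get_layer_sizes()?;     // one integer entry per layer
        let n = sizes.rows() * sizes.cols();    // layout may be 1xN or Nx1
        let inputs = *sizes.at::<i32>(0)?;      // input-layer size
        let outputs = *sizes.at::<i32>(n - 1)?; // output-layer size
        println!("{} layers: {} inputs, {} outputs", n, inputs, outputs);
        Ok(())
    }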

fn get_term_criteria(&self) -> Result<TermCriteria>

Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error can change between iterations for the algorithm to continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).

See also

setTermCriteria
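A hedged sketch of overriding this default; TermCriteria::new and the TermCriteria_Type constants are assumptions about the crate's core module, and set_term_criteria is the assumed setter counterpart on the mutable ANN_MLP trait:

    use opencv::{core::{TermCriteria, TermCriteria_Type}, ml::ANN_MLP, prelude::*, Result};

    fn set_criteria(mlp: &mut impl ANN_MLP) -> Result<()> {
        // Stop after 500 iterations or once the error change drops below 1e-4.
        let crit = TermCriteria::new(
            TermCriteria_Type::COUNT as i32 + TermCriteria_Type::EPS as i32, // MAX_ITER + EPS
            500,
            1e-4,
        )?;
        mlp.set_term_criteria(crit)
    }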

fn get_backprop_weight_scale(&self) -> Result<f64>

BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.

See also

setBackpropWeightScale

fn get_backprop_momentum_scale(&self) -> Result<f64>

BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond. A value of about 0.1 is good enough. Default value is 0.1.

See also

setBackpropMomentumScale
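A minimal configuration sketch; the set_train_method signature, the ANN_MLP_TrainingMethods constants, and the param1/param2 mapping (weight scale and momentum scale) are assumptions taken from the crate's ml module and the C++ documentation:

    use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

    fn use_backprop(mlp: &mut impl ANN_MLP) -> Result<()> {
        // For BACKPROP: param1 = weight scale, param2 = momentum scale.
        mlp.set_train_method(ANN_MLP_TrainingMethods::BACKPROP as i32, 0.1, 0.1)?;
        assert_eq!(mlp.get_backprop_weight_scale()?, 0.1);
        Ok(())
    }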

fn get_rprop_dw0(&self) -> Result<f64>

RPROP: Initial value \Delta_0 of update-values \Delta_{ij}. Default value is 0.1.

See also

setRpropDW0

fn get_rprop_dw_plus(&self) -> Result<f64>

RPROP: Increase factor \eta^+. It must be >1. Default value is 1.2.

See also

setRpropDWPlus

fn get_rprop_dw_minus(&self) -> Result<f64>

RPROP: Decrease factor \eta^-. It must be <1. Default value is 0.5.

See also

setRpropDWMinus

fn get_rprop_dw_min(&self) -> Result<f64>

RPROP: Update-values lower limit \Delta_{min}. It must be positive. Default value is FLT_EPSILON.

See also

setRpropDWMin

fn get_rprop_dw_max(&self) -> Result<f64>

RPROP: Update-values upper limit \Delta_{max}. It must be >1. Default value is 50.

See also

setRpropDWMax
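A tuning sketch; the set_rprop_* setter names mirror the getters above but are assumed rather than shown on this page, and the param1/param2 mapping (\Delta_0 and \Delta_{min}) follows the C++ setTrainMethod documentation:

    use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

    fn use_rprop(mlp: &mut impl ANN_MLP) -> Result<()> {
        // For RPROP: param1 = \Delta_0, param2 = \Delta_{min}.
        mlp.set_train_method(
            ANN_MLP_TrainingMethods::RPROP as i32,
            0.1,
            f32::EPSILON as f64, // FLT_EPSILON, the documented default
        )?;
        mlp.set_rprop_dw_plus(1.2)?;  // \eta^+ must be > 1
        mlp.set_rprop_dw_minus(0.5)?; // \eta^- must be < 1
        mlp.set_rprop_dw_max(50.0)?;  // \Delta_{max} must be > 1
        Ok(())
    }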

fn get_anneal_initial_t(&self) -> Result<f64>

ANNEAL: Update initial temperature. It must be >=0. Default value is 10.

See also

setAnnealInitialT

fn get_anneal_final_t(&self) -> Result<f64>

ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.

See also

setAnnealFinalT

fn get_anneal_cooling_ratio(&self) -> Result<f64>

ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.

See also

setAnnealCoolingRatio

fn get_anneal_ite_per_step(&self) -> Result<i32>

ANNEAL: Update iterations per step. It must be >0. Default value is 10.

See also

setAnnealItePerStep
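A read-back sketch that uses only the getters defined by this trait (reaching ANN_MLPConst through the prelude is assumed):

    use opencv::{prelude::*, Result};

    fn log_anneal(model: &impl ANN_MLPConst) -> Result<()> {
        // The annealing schedule cools from the initial to the final
        // temperature by the cooling ratio, running a fixed number of
        // iterations per step.
        println!(
            "T0={} Tf={} cooling={} iterations/step={}",
            model.get_anneal_initial_t()?,
            model.get_anneal_final_t()?,
            model.get_anneal_cooling_ratio()?,
            model.get_anneal_ite_per_step()?,
        );
        Ok(())
    }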

Implementors