Trait opencv::ml::ANN_MLP

pub trait ANN_MLP: StatModel {
    fn as_raw_ANN_MLP(&self) -> *mut c_void;

    fn set_train_method(
        &mut self,
        method: i32,
        param1: f64,
        param2: f64
    ) -> Result<()> { ... }
    fn get_train_method(&self) -> Result<i32> { ... }
    fn set_activation_function(
        &mut self,
        _type: i32,
        param1: f64,
        param2: f64
    ) -> Result<()> { ... }
    fn set_layer_sizes(&mut self, _layer_sizes: &dyn ToInputArray) -> Result<()> { ... }
    fn get_layer_sizes(&self) -> Result<Mat> { ... }
    fn get_term_criteria(&self) -> Result<TermCriteria> { ... }
    fn set_term_criteria(&mut self, val: &TermCriteria) -> Result<()> { ... }
    fn get_backprop_weight_scale(&self) -> Result<f64> { ... }
    fn set_backprop_weight_scale(&mut self, val: f64) -> Result<()> { ... }
    fn get_backprop_momentum_scale(&self) -> Result<f64> { ... }
    fn set_backprop_momentum_scale(&mut self, val: f64) -> Result<()> { ... }
    fn get_rprop_dw0(&self) -> Result<f64> { ... }
    fn set_rprop_dw0(&mut self, val: f64) -> Result<()> { ... }
    fn get_rprop_dw_plus(&self) -> Result<f64> { ... }
    fn set_rprop_dw_plus(&mut self, val: f64) -> Result<()> { ... }
    fn get_rprop_dw_minus(&self) -> Result<f64> { ... }
    fn set_rprop_dw_minus(&mut self, val: f64) -> Result<()> { ... }
    fn get_rprop_dw_min(&self) -> Result<f64> { ... }
    fn set_rprop_dw_min(&mut self, val: f64) -> Result<()> { ... }
    fn get_rprop_dw_max(&self) -> Result<f64> { ... }
    fn set_rprop_dw_max(&mut self, val: f64) -> Result<()> { ... }
    fn get_anneal_initial_t(&self) -> Result<f64> { ... }
    fn set_anneal_initial_t(&mut self, val: f64) -> Result<()> { ... }
    fn get_anneal_final_t(&self) -> Result<f64> { ... }
    fn set_anneal_final_t(&mut self, val: f64) -> Result<()> { ... }
    fn get_anneal_cooling_ratio(&self) -> Result<f64> { ... }
    fn set_anneal_cooling_ratio(&mut self, val: f64) -> Result<()> { ... }
    fn get_anneal_ite_per_step(&self) -> Result<i32> { ... }
    fn set_anneal_ite_per_step(&mut self, val: i32) -> Result<()> { ... }
    fn get_weights(&self, layer_idx: i32) -> Result<Mat> { ... }
}

Artificial Neural Networks - Multi-Layer Perceptrons.

Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data.

Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

See also

the Artificial Neural Networks introduction (ml_intro_ann) in the OpenCV documentation

Required methods

fn as_raw_ANN_MLP(&self) -> *mut c_void

Provided methods

fn set_train_method(
    &mut self,
    method: i32,
    param1: f64,
    param2: f64
) -> Result<()>

Sets training method and common parameters.

Parameters

  • method: Training method; the default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
  • param1: passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.
  • param2: passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.

C++ default parameters

  • param1: 0
  • param2: 0

fn get_train_method(&self) -> Result<i32>

Returns the current training method.

fn set_activation_function(
    &mut self,
    _type: i32,
    param1: f64,
    param2: f64
) -> Result<()>

Initializes the activation function for each neuron. Currently the default, and the only fully supported, activation function is ANN_MLP::SIGMOID_SYM.

Parameters

  • type: The type of activation function. See ANN_MLP::ActivationFunctions.
  • param1: The first parameter of the activation function, α. Default value is 0.
  • param2: The second parameter of the activation function, β. Default value is 0.

C++ default parameters

  • param1: 0
  • param2: 0

fn set_layer_sizes(&mut self, _layer_sizes: &dyn ToInputArray) -> Result<()>

Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer; the last element specifies the number of elements in the output layer. Default value is an empty Mat.

See also

getLayerSizes

fn get_layer_sizes(&self) -> Result<Mat>

Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer; the last element specifies the number of elements in the output layer.

See also

setLayerSizes

fn get_term_criteria(&self) -> Result<TermCriteria>

Termination criteria of the training algorithm. See also setTermCriteria.

fn set_term_criteria(&mut self, val: &TermCriteria) -> Result<()>

Sets the termination criteria of the training algorithm. See also getTermCriteria.

fn get_backprop_weight_scale(&self) -> Result<f64>

Strength of the weight gradient term in the BACKPROP method; the recommended value is about 0.1. See also setBackpropWeightScale.

fn set_backprop_weight_scale(&mut self, val: f64) -> Result<()>

Sets the strength of the weight gradient term for the BACKPROP method. See also getBackpropWeightScale.

fn get_backprop_momentum_scale(&self) -> Result<f64>

Strength of the momentum term in the BACKPROP method (the difference between weights on the two previous iterations). See also setBackpropMomentumScale.

fn set_backprop_momentum_scale(&mut self, val: f64) -> Result<()>

Sets the strength of the momentum term for the BACKPROP method. See also getBackpropMomentumScale.

fn get_rprop_dw0(&self) -> Result<f64>

Initial magnitude of the update-values in the RPROP method. See also setRpropDW0.

fn set_rprop_dw0(&mut self, val: f64) -> Result<()>

Sets the initial magnitude of the update-values for the RPROP method. See also getRpropDW0.

fn get_rprop_dw_plus(&self) -> Result<f64>

Increase factor for the update-values in the RPROP method; must be greater than 1. See also setRpropDWPlus.

fn set_rprop_dw_plus(&mut self, val: f64) -> Result<()>

Sets the update-value increase factor for the RPROP method. See also getRpropDWPlus.

fn get_rprop_dw_minus(&self) -> Result<f64>

Decrease factor for the update-values in the RPROP method; must be less than 1. See also setRpropDWMinus.

fn set_rprop_dw_minus(&mut self, val: f64) -> Result<()>

Sets the update-value decrease factor for the RPROP method. See also getRpropDWMinus.

fn get_rprop_dw_min(&self) -> Result<f64>

Lower limit of the update-values in the RPROP method; must be positive. See also setRpropDWMin.

fn set_rprop_dw_min(&mut self, val: f64) -> Result<()>

Sets the update-value lower limit for the RPROP method. See also getRpropDWMin.

fn get_rprop_dw_max(&self) -> Result<f64>

Upper limit of the update-values in the RPROP method; must be greater than 1. See also setRpropDWMax.

fn set_rprop_dw_max(&mut self, val: f64) -> Result<()>

Sets the update-value upper limit for the RPROP method. See also getRpropDWMax.

fn get_anneal_initial_t(&self) -> Result<f64>

Initial temperature of the ANNEAL method; must not be less than the final temperature. See also setAnnealInitialT.

fn set_anneal_initial_t(&mut self, val: f64) -> Result<()>

Sets the initial temperature for the ANNEAL method. See also getAnnealInitialT.

fn get_anneal_final_t(&self) -> Result<f64>

Final temperature of the ANNEAL method; must be positive and not greater than the initial temperature. See also setAnnealFinalT.

fn set_anneal_final_t(&mut self, val: f64) -> Result<()>

Sets the final temperature for the ANNEAL method. See also getAnnealFinalT.

fn get_anneal_cooling_ratio(&self) -> Result<f64>

Cooling ratio of the ANNEAL method; must be greater than 0 and less than 1. See also setAnnealCoolingRatio.

fn set_anneal_cooling_ratio(&mut self, val: f64) -> Result<()>

Sets the cooling ratio for the ANNEAL method. See also getAnnealCoolingRatio.

fn get_anneal_ite_per_step(&self) -> Result<i32>

Number of iterations per temperature-changing step in the ANNEAL method; must be greater than 0. See also setAnnealItePerStep.

fn set_anneal_ite_per_step(&mut self, val: i32) -> Result<()>

Sets the iteration count per temperature-changing step for the ANNEAL method. See also getAnnealItePerStep.

fn get_weights(&self, layer_idx: i32) -> Result<Mat>


Methods

impl<'_> dyn ANN_MLP + '_

pub fn create() -> Result<PtrOfANN_MLP>

Creates an empty model.

Use StatModel::train to train the model, or Algorithm::load::<ANN_MLP>(filename) to load a pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.

pub fn load(filepath: &str) -> Result<PtrOfANN_MLP>

Loads and creates a serialized ANN from a file

Use the save method to serialize and store an ANN to disk. To load the ANN from that file again, call this function with the path to the file.

Parameters

  • filepath: path to serialized ANN

Implementors
