pub struct ANN_MLP { /* private fields */ }
Artificial Neural Networks - Multi-Layer Perceptrons.
Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on the new training data.
Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.
See also
[ml_intro_ann]
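A minimal end-to-end sketch of that workflow with these bindings. Identifiers such as `ANN_MLP_ActivationFunctions`, `ANN_MLP_TrainingMethods`, `ml::ROW_SAMPLE` and the `TermCriteria_*` constants are assumed from the `opencv` crate's generated API, and helper constructors like `Mat::from_slice_2d` and `Vector::from_slice` may differ slightly between crate versions:

```rust
use opencv::{
    core::{Mat, TermCriteria, TermCriteria_COUNT, TermCriteria_EPS, Vector},
    ml::{self, ANN_MLP, ANN_MLP_ActivationFunctions, ANN_MLP_TrainingMethods},
    prelude::*,
    Result,
};

fn main() -> Result<()> {
    // Topology: 2 inputs, one hidden layer of 8 neurons, 1 output.
    let mut mlp = ANN_MLP::create()?;
    mlp.set_layer_sizes(&Vector::<i32>::from_slice(&[2, 8, 1]))?;
    // SIGMOID_SYM is the default and the only fully supported activation function.
    mlp.set_activation_function(ANN_MLP_ActivationFunctions::SIGMOID_SYM as i32, 1.0, 1.0)?;
    mlp.set_train_method(ANN_MLP_TrainingMethods::BACKPROP as i32, 0.1, 0.1)?;
    mlp.set_term_criteria(TermCriteria::new(TermCriteria_COUNT + TermCriteria_EPS, 1000, 0.01)?)?;

    // Toy XOR-style data: each row is one sample, responses are f32.
    let samples = Mat::from_slice_2d(&[[0f32, 0.], [0., 1.], [1., 0.], [1., 1.]])?;
    let responses = Mat::from_slice_2d(&[[0f32], [1.], [1.], [0.]])?;
    mlp.train(&samples, ml::ROW_SAMPLE, &responses)?;

    // Predict: the output Mat receives one row per input sample.
    let mut out = Mat::default();
    mlp.predict(&Mat::from_slice_2d(&[[1f32, 0.]])?, &mut out, 0)?;
    println!("prediction: {}", out.at_2d::<f32>(0, 0)?);
    Ok(())
}
```

Training can later be repeated on new data (see StatModel::train_with_data below together with ANN_MLP::TrainFlags), which adjusts the existing weights rather than rebuilding the network.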
Implementations
Trait Implementations
impl ANN_MLPTrait for ANN_MLP
fn as_raw_mut_ANN_MLP(&mut self) -> *mut c_void
fn set_train_method(&mut self, method: i32, param1: f64, param2: f64) -> Result<()>
Sets training method and common parameters; param1 and param2 are interpreted according to the selected method.
fn set_train_method_def(&mut self, method: i32) -> Result<()>
Sets training method and common parameters, using the default values for param1 and param2.
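A sketch of switching methods; the `ANN_MLP_TrainingMethods` enum and the per-method meaning of `param1`/`param2` are assumptions based on the underlying OpenCV documentation:

```rust
use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    // RPROP with an explicit initial update-value (param1) and lower limit (param2).
    mlp.set_train_method(ANN_MLP_TrainingMethods::RPROP as i32, 0.1, f32::EPSILON as f64)?;
    // Switch to plain backpropagation, keeping the default parameters.
    mlp.set_train_method_def(ANN_MLP_TrainingMethods::BACKPROP as i32)?;
    Ok(())
}
```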
fn set_activation_function(&mut self, typ: i32, param1: f64, param2: f64) -> Result<()>
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
fn set_activation_function_def(&mut self, typ: i32) -> Result<()>
Initialize the activation function for each neuron, using the default values for param1 and param2. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
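A sketch of selecting the symmetrical sigmoid explicitly; the `ANN_MLP_ActivationFunctions` enum name is assumed, and `param1`/`param2` are taken to be the α and β parameters of the activation function:

```rust
use opencv::{ml::{ANN_MLP, ANN_MLP_ActivationFunctions}, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    // Symmetrical sigmoid with alpha = beta = 1.
    mlp.set_activation_function(ANN_MLP_ActivationFunctions::SIGMOID_SYM as i32, 1.0, 1.0)?;
    // Same activation, but relying on the default parameter values.
    mlp.set_activation_function_def(ANN_MLP_ActivationFunctions::SIGMOID_SYM as i32)?;
    Ok(())
}
```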
fn set_layer_sizes(&mut self, _layer_sizes: &impl ToInputArray) -> Result<()>
Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element the number of elements in the output layer. Default value is an empty Mat.
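For example, a hypothetical four-layer topology; passing the sizes as a `core::Vector<i32>` is an assumption, and a 1xN integer `Mat` should work equally well:

```rust
use opencv::{core::Vector, ml::ANN_MLP, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    // 4 input features, hidden layers of 16 and 8 neurons, 3 output values.
    mlp.set_layer_sizes(&Vector::<i32>::from_slice(&[4, 16, 8, 3]))?;
    Ok(())
}
```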
fn set_term_criteria(&mut self, val: TermCriteria) -> Result<()>
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error may change between iterations for the algorithm to continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).
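A sketch of tightening the stopping rule; the `TermCriteria::new` constructor and the `TermCriteria_COUNT`/`TermCriteria_EPS` constants are assumed from `opencv::core`:

```rust
use opencv::{
    core::{TermCriteria, TermCriteria_COUNT, TermCriteria_EPS},
    ml::ANN_MLP,
    prelude::*,
    Result,
};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    // Stop after at most 300 iterations, or earlier once the error change falls below 1e-4.
    mlp.set_term_criteria(TermCriteria::new(TermCriteria_COUNT + TermCriteria_EPS, 300, 1e-4)?)?;
    Ok(())
}
```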
fn set_backprop_weight_scale(&mut self, val: f64) -> Result<()>
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.
fn set_backprop_momentum_scale(&mut self, val: f64) -> Result<()>
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond; a value around 0.1 is usually good enough. Default value is 0.1.
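A sketch of a backpropagation setup using the two BPROP knobs above (enum name assumed as before):

```rust
use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    mlp.set_train_method_def(ANN_MLP_TrainingMethods::BACKPROP as i32)?;
    // Gradient strength around the recommended 0.1 ...
    mlp.set_backprop_weight_scale(0.1)?;
    // ... and a light momentum term; 0 disables it, values near 0.1 are usually enough.
    mlp.set_backprop_momentum_scale(0.1)?;
    Ok(())
}
```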
fn set_rprop_dw0(&mut self, val: f64) -> Result<()>
RPROP: Initial value Δ_0 of the update-values Δ_ij. Default value is 0.1.
fn set_rprop_dw_plus(&mut self, val: f64) -> Result<()>
RPROP: Increase factor η^+. It must be >1. Default value is 1.2.
fn set_rprop_dw_minus(&mut self, val: f64) -> Result<()>
RPROP: Decrease factor η^-. It must be <1. Default value is 0.5.
fn set_rprop_dw_min(&mut self, val: f64) -> Result<()>
RPROP: Update-values lower limit Δ_min. It must be positive. Default value is FLT_EPSILON.
fn set_rprop_dw_max(&mut self, val: f64) -> Result<()>
RPROP: Update-values upper limit Δ_max. It must be >1. Default value is 50.
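Taken together, the RPROP parameters above can be set explicitly to their documented defaults, as in this sketch (enum name assumed as before):

```rust
use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    mlp.set_train_method_def(ANN_MLP_TrainingMethods::RPROP as i32)?;
    mlp.set_rprop_dw0(0.1)?;                    // initial update-value Δ_0
    mlp.set_rprop_dw_plus(1.2)?;                // increase factor η^+, must be > 1
    mlp.set_rprop_dw_minus(0.5)?;               // decrease factor η^-, must be < 1
    mlp.set_rprop_dw_min(f32::EPSILON as f64)?; // lower limit Δ_min, must be positive
    mlp.set_rprop_dw_max(50.0)?;                // upper limit Δ_max, must be > 1
    Ok(())
}
```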
fn set_anneal_initial_t(&mut self, val: f64) -> Result<()>
ANNEAL: Update initial temperature. It must be >=0. Default value is 10.
fn set_anneal_final_t(&mut self, val: f64) -> Result<()>
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.
fn set_anneal_cooling_ratio(&mut self, val: f64) -> Result<()>
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.
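A sketch of configuring simulated annealing with the three ANNEAL parameters above (enum name assumed as before):

```rust
use opencv::{ml::{ANN_MLP, ANN_MLP_TrainingMethods}, prelude::*, Result};

fn main() -> Result<()> {
    let mut mlp = ANN_MLP::create()?;
    mlp.set_train_method_def(ANN_MLP_TrainingMethods::ANNEAL as i32)?;
    mlp.set_anneal_initial_t(10.0)?;     // must be >= 0
    mlp.set_anneal_final_t(0.1)?;        // must be >= 0 and below the initial temperature
    mlp.set_anneal_cooling_ratio(0.95)?; // must lie strictly between 0 and 1
    Ok(())
}
```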
impl ANN_MLPTraitConst for ANN_MLP
fn as_raw_ANN_MLP(&self) -> *const c_void
fn get_train_method(&self) -> Result<i32>
Returns the current training method.
fn get_layer_sizes(&self) -> Result<Mat>
Integer vector specifying the number of neurons in each layer, including the input and output layers. The very first element specifies the number of elements in the input layer, and the last element the number of elements in the output layer.
fn get_term_criteria(&self) -> Result<TermCriteria>
Termination criteria of the training algorithm. You can specify the maximum number of iterations (maxCount) and/or how much the error may change between iterations for the algorithm to continue (epsilon). Default value is TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 1000, 0.01).
fn get_backprop_weight_scale(&self) -> Result<f64>
BPROP: Strength of the weight gradient term. The recommended value is about 0.1. Default value is 0.1.
fn get_backprop_momentum_scale(&self) -> Result<f64>
BPROP: Strength of the momentum term (the difference between weights on the 2 previous iterations). This parameter provides some inertia to smooth the random fluctuations of the weights. It can vary from 0 (the feature is disabled) to 1 and beyond; a value around 0.1 is usually good enough. Default value is 0.1.
fn get_rprop_dw0(&self) -> Result<f64>
RPROP: Initial value Δ_0 of the update-values Δ_ij. Default value is 0.1.
fn get_rprop_dw_plus(&self) -> Result<f64>
RPROP: Increase factor η^+. It must be >1. Default value is 1.2.
fn get_rprop_dw_minus(&self) -> Result<f64>
RPROP: Decrease factor η^-. It must be <1. Default value is 0.5.
fn get_rprop_dw_min(&self) -> Result<f64>
RPROP: Update-values lower limit Δ_min. It must be positive. Default value is FLT_EPSILON.
fn get_rprop_dw_max(&self) -> Result<f64>
RPROP: Update-values upper limit Δ_max. It must be >1. Default value is 50.
fn get_anneal_initial_t(&self) -> Result<f64>
ANNEAL: Update initial temperature. It must be >=0. Default value is 10.
fn get_anneal_final_t(&self) -> Result<f64>
ANNEAL: Update final temperature. It must be >=0 and less than initialT. Default value is 0.1.
fn get_anneal_cooling_ratio(&self) -> Result<f64>
ANNEAL: Update cooling ratio. It must be >0 and less than 1. Default value is 0.95.
fn get_anneal_ite_per_step(&self) -> Result<i32>
ANNEAL: Update iteration per step. It must be >0. Default value is 10.
fn get_weights(&self, layer_idx: i32) -> Result<Mat>
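A sketch of inspecting a trained network through these const accessors; the exact shape and meaning of each weight matrix follow the underlying OpenCV layout:

```rust
use opencv::{prelude::*, Result};

fn inspect(mlp: &impl ANN_MLPTraitConst) -> Result<()> {
    println!("training method: {}", mlp.get_train_method()?);
    let sizes = mlp.get_layer_sizes()?; // 1xN (or Nx1) Mat of i32
    let n_layers = sizes.rows().max(sizes.cols());
    for i in 0..n_layers {
        let w = mlp.get_weights(i)?; // weight matrix associated with layer i
        println!("layer {}: {} x {} weights", i, w.rows(), w.cols());
    }
    Ok(())
}
```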
impl AlgorithmTrait for ANN_MLP
impl AlgorithmTraitConst for ANN_MLP
fn as_raw_Algorithm(&self) -> *const c_void
fn write(&self, fs: &mut FileStorage) -> Result<()>
Stores algorithm parameters in a file storage.
fn write_1(&self, fs: &mut FileStorage, name: &str) -> Result<()>
Stores algorithm parameters in a file storage.
fn write_with_name(&self, fs: &Ptr<FileStorage>, name: &str) -> Result<()>
Deprecated.
fn write_with_name_def(&self, fs: &Ptr<FileStorage>) -> Result<()>
Deprecated. This alternative version of [write_with_name] uses default values for its optional arguments.
fn empty(&self) -> Result<bool>
Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read).
fn save(&self, filename: &str) -> Result<()>
Saves the algorithm to a file. In order to make this method work, the derived class must implement Algorithm::write(FileStorage& fs).
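A sketch of persisting a trained model via `save` and restoring it; the presence of an `ANN_MLP::load` counterpart mirroring the C++ API is an assumption:

```rust
use opencv::{ml::ANN_MLP, prelude::*, Result};

fn main() -> Result<()> {
    let mlp = ANN_MLP::create()?;
    // ... configure and train the network here ...
    mlp.save("mlp.yml")?;
    // Restore it later from the same file (hypothetical path).
    let restored = ANN_MLP::load("mlp.yml")?;
    println!("restored model trained: {}", restored.is_trained()?);
    Ok(())
}
```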
fn get_default_name(&self) -> Result<String>
Returns the algorithm string identifier. This string is used as the top-level xml/yml node tag when the object is saved to a file or string.
impl Boxed for ANN_MLP
impl StatModelTrait for ANN_MLP
fn as_raw_mut_StatModel(&mut self) -> *mut c_void
fn train_with_data(&mut self, train_data: &Ptr<TrainData>, flags: i32) -> Result<bool>
Trains the statistical model. Optional flags depend on the model; for ANN_MLP, see ANN_MLP::TrainFlags.
fn train_with_data_def(&mut self, train_data: &Ptr<TrainData>) -> Result<bool>
Trains the statistical model, using the default value for flags.
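A sketch of the incremental-update pattern described above, refining an already trained network on a new batch; `TrainData::create_def` and the `ANN_MLP_TrainFlags::UPDATE_WEIGHTS` enum value are assumed names:

```rust
use opencv::{
    core::Mat,
    ml::{self, ANN_MLP_TrainFlags, TrainData},
    prelude::*,
    Result,
};

fn update_on_batch(mlp: &mut impl StatModelTrait, samples: &Mat, responses: &Mat) -> Result<bool> {
    // Wrap the new batch in a TrainData container.
    let data = TrainData::create_def(samples, ml::ROW_SAMPLE, responses)?;
    // UPDATE_WEIGHTS keeps the current weights and refines them on the new data.
    mlp.train_with_data(&data, ANN_MLP_TrainFlags::UPDATE_WEIGHTS as i32)
}
```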
fn train(&mut self, samples: &impl ToInputArray, layout: i32, responses: &impl ToInputArray) -> Result<bool>
Trains the statistical model on the given samples and responses; layout specifies whether samples are stored as rows or columns of the samples matrix (ml::SampleTypes).
impl StatModelTraitConst for ANN_MLP
fn as_raw_StatModel(&self) -> *const c_void
fn get_var_count(&self) -> Result<i32>
Returns the number of variables in training samples.
fn empty(&self) -> Result<bool>
fn is_trained(&self) -> Result<bool>
Returns true if the model is trained.
fn is_classifier(&self) -> Result<bool>
Returns true if the model is a classifier.
fn calc_error(&self, data: &Ptr<TrainData>, test: bool, resp: &mut impl ToOutputArray) -> Result<f32>
Computes the error on the training or test dataset. If test is true the error is computed over the test subset of data, otherwise over the training subset; the computed responses are written to resp.
fn predict(&self, samples: &impl ToInputArray, results: &mut impl ToOutputArray, flags: i32) -> Result<f32>
Predicts response(s) for the provided sample(s).
fn predict_def(&self, samples: &impl ToInputArray) -> Result<f32>
Predicts response(s) for the provided sample(s), using the default values for results and flags.
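A sketch of collecting the raw output-layer values for a single sample; `Mat::data_typed` is assumed to expose the result buffer as `&[f32]`:

```rust
use opencv::{core::Mat, prelude::*, Result};

fn predict_one(mlp: &impl StatModelTraitConst, sample: &Mat) -> Result<Vec<f32>> {
    let mut out = Mat::default();
    mlp.predict(sample, &mut out, 0)?;
    // For an MLP, the single output row holds one value per output-layer neuron.
    Ok(out.data_typed::<f32>()?.to_vec())
}
```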
impl Send for ANN_MLP
Auto Trait Implementations
impl RefUnwindSafe for ANN_MLP
impl !Sync for ANN_MLP
impl Unpin for ANN_MLP
impl UnwindSafe for ANN_MLP
Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.