Enum fann::TrainAlgorithm

pub enum TrainAlgorithm {
    Incremental {
        learning_momentum: c_float,
        learning_rate: c_float,
    },
    Batch {
        learning_rate: c_float,
    },
    Rprop {
        decrease_factor: c_float,
        increase_factor: c_float,
        delta_min: c_float,
        delta_max: c_float,
        delta_zero: c_float,
    },
    Quickprop {
        decay: c_float,
        mu: c_float,
        learning_rate: c_float,
    },
}

The training algorithms used when training on fann_train_data with functions like fann_train_on_data or fann_train_on_file. Incremental training alters the weights after each input pattern has been presented, while batch training only alters the weights once after all the patterns have been presented.
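
As a rough usage sketch, a variant is chosen and handed to the network before training; the Fann::new and Fann::set_train_algorithm calls below are assumptions about the rest of the crate's API rather than something documented on this page, and the layer sizes are only illustrative.

use fann::{Fann, TrainAlgorithm};

// Assumed constructor: 2 inputs, 3 hidden neurons, 1 output.
let mut net = Fann::new(&[2, 3, 1]).unwrap();
// Batch training: the weights are updated once per epoch rather than once
// per input pattern.
net.set_train_algorithm(TrainAlgorithm::default_batch());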

Variants

Incremental

Standard backpropagation algorithm, where the weights are updated after each training pattern. This means that the weights are updated many times during a single epoch, so some problems will train very fast, while other, more advanced problems will not train very well.

Fields

learning_momentum: c_float

A higher momentum can be used to speed up incremental training. It should be between 0 and 1; the default is 0.

learning_rate: c_float

The learning rate determines how aggressive the training should be. The default is 0.7.
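
A small sketch of constructing this variant directly, keeping the default learning rate but adding some momentum (the chosen values are illustrative):

use fann::TrainAlgorithm;

// Incremental training with extra momentum to speed up convergence.
let incremental = TrainAlgorithm::Incremental {
    learning_momentum: 0.4, // between 0 and 1; the default is 0
    learning_rate: 0.7,     // the default learning rate
};
// The derived PartialEq distinguishes the custom value from the default.
assert_ne!(incremental, TrainAlgorithm::default_incremental());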

Batch

Standard backpropagation algorithm, where the weights are updated after calculating the mean square error for the whole training set. This means that the weights are only updated once during an epoch. For this reason, some problems will train more slowly with this algorithm, but since the mean square error is calculated more accurately than in incremental training, some problems will reach better solutions.

Fields

learning_rate: c_float

The learning rate determines how aggressive the training should be. The default is 0.7.
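
The same pattern works for batch training; here the learning rate is lowered from the default, purely as an illustration:

use fann::TrainAlgorithm;

// Batch training with a more conservative learning rate than the 0.7 default.
let batch = TrainAlgorithm::Batch { learning_rate: 0.3 };
println!("{:?}", batch); // prints: Batch { learning_rate: 0.3 }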

Rprop

A more advanced batch training algorithm which achieves good results for many problems. Rprop is adaptive and therefore does not use the learning_rate. Some other parameters can, however, be set to change the way Rprop works, but changing them is only recommended for users with a deep understanding of the algorithm. The original RPROP training algorithm is described by [Riedmiller and Braun, 1993], but the algorithm used here is a variant, iRPROP, described by [Igel and Husken, 2000].

Fields

decrease_factor: c_float

A value less than 1, used to decrease the step size during training. The default is 0.5.

increase_factor: c_float

A value greater than 1, used to increase the step size during training. The default is 1.2.

delta_min: c_float

The minimum step size. The default is 0.0.

delta_max: c_float

The maximum step size. The default is 50.0.

delta_zero: c_float

The initial step size. The default is 0.1.
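
A sketch of an Rprop configuration built field by field; every value except delta_max matches the defaults listed above, and the tighter cap is only an illustration:

use fann::TrainAlgorithm;

// iRPROP with the default factors but a smaller maximum step size.
let rprop = TrainAlgorithm::Rprop {
    decrease_factor: 0.5, // < 1: shrinks the step size
    increase_factor: 1.2, // > 1: grows the step size
    delta_min: 0.0,       // smallest allowed step
    delta_max: 5.0,       // largest allowed step (the default is 50.0)
    delta_zero: 0.1,      // initial step size
};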

Quickprop

A more advanced batch training algorithm which achieves good results for many problems. The quickprop training algorithm uses the learning_rate parameter along with other more advanced parameters, but changing these is only recommended for users with a deep understanding of the algorithm. Quickprop is described by [Fahlman, 1988].

Fields

decay: c_float

The factor by which weights should become smaller in each iteration, to ensure that the weights don't grow too large during training. Should be a negative number close to 0. The default is -0.0001.

mu: c_float

The mu factor is used to increase or decrease the step size; it should always be greater than 1. The default is 1.75.

learning_rate: c_float

The learning rate determines how aggressive the training should be. The default is 0.7.
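
A sketch of a Quickprop configuration in the same style; only mu deviates from the defaults listed above, and its value is illustrative:

use fann::TrainAlgorithm;

let quickprop = TrainAlgorithm::Quickprop {
    decay: -0.0001,     // small negative weight decay (the default)
    mu: 2.0,            // step-size factor; must be greater than 1
    learning_rate: 0.7, // the default learning rate
};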

Methods

impl TrainAlgorithm

fn default_incremental() -> TrainAlgorithm

The Incremental algorithm with default parameters.

fn default_batch() -> TrainAlgorithm

The Batch algorithm with default parameters.

fn default_rprop() -> TrainAlgorithm

The Rprop algorithm with default parameters.

fn default_quickprop() -> TrainAlgorithm

The Quickprop algorithm with default parameters.
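
A short sketch that exercises all four constructors and prints the resulting variants using the Debug implementation listed under Trait Implementations:

use fann::TrainAlgorithm;

// Each constructor returns its variant filled in with the default parameters.
let algorithms = [
    TrainAlgorithm::default_incremental(),
    TrainAlgorithm::default_batch(),
    TrainAlgorithm::default_rprop(),
    TrainAlgorithm::default_quickprop(),
];
for algorithm in &algorithms {
    println!("{:?}", algorithm);
}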

Trait Implementations

impl PartialEq for TrainAlgorithm

fn eq(&self, __arg_0: &TrainAlgorithm) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, __arg_0: &TrainAlgorithm) -> bool

This method tests for !=.

impl Debug for TrainAlgorithm

fn fmt(&self, __arg_0: &mut Formatter) -> Result

Formats the value using the given formatter.

impl Clone for TrainAlgorithm

fn clone(&self) -> TrainAlgorithm

Returns a copy of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Copy for TrainAlgorithm

impl Default for TrainAlgorithm

fn default() -> TrainAlgorithm

Returns the "default value" for a type. Read more
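
A short sketch of querying the default; which variant Default picks is not spelled out on this page, so the code only inspects the value instead of asserting a particular variant:

use fann::TrainAlgorithm;

let algorithm = TrainAlgorithm::default();
// Debug shows the chosen variant and its parameters.
println!("default training algorithm: {:?}", algorithm);
// PartialEq allows comparing against one of the named constructors.
if algorithm == TrainAlgorithm::default_incremental() {
    println!("the default is incremental training");
}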