Enum fann_sys::fann_activationfunc_enum

#[repr(C)]
pub enum fann_activationfunc_enum {
    FANN_NONE,
    FANN_LINEAR,
    FANN_THRESHOLD,
    FANN_THRESHOLD_SYMMETRIC,
    FANN_SIGMOID,
    FANN_SIGMOID_STEPWISE,
    FANN_SIGMOID_SYMMETRIC,
    FANN_SIGMOID_SYMMETRIC_STEPWISE,
    FANN_GAUSSIAN,
    FANN_GAUSSIAN_SYMMETRIC,
    FANN_GAUSSIAN_STEPWISE,
    FANN_ELLIOTT,
    FANN_ELLIOTT_SYMMETRIC,
    FANN_LINEAR_PIECE,
    FANN_LINEAR_PIECE_SYMMETRIC,
    FANN_SIN_SYMMETRIC,
    FANN_COS_SYMMETRIC,
    FANN_SIN,
    FANN_COS,
}

The activation functions used for the neurons during training. An activation function can either be set for a group of neurons by fann_set_activation_function_hidden and fann_set_activation_function_output, or for a single neuron by fann_set_activation_function.

The steepness of an activation function is defined in the same way by fann_set_activation_steepness_hidden, fann_set_activation_steepness_output and fann_set_activation_steepness.
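
To show these setters in context, here is a minimal, unsafe sketch. It assumes the sibling fann_sys bindings fann_create_standard_array, fann_set_activation_function_hidden, fann_set_activation_function_output, fann_set_activation_function, fann_set_activation_steepness_hidden and fann_destroy with their usual FANN C signatures, plus a linked libfann; it is an illustration, not canonical usage.

use std::os::raw::c_uint;

use fann_sys::{
    fann_activationfunc_enum, fann_create_standard_array, fann_destroy,
    fann_set_activation_function, fann_set_activation_function_hidden,
    fann_set_activation_function_output, fann_set_activation_steepness_hidden,
};

fn main() {
    // Layer sizes: 2 inputs, 3 hidden neurons, 1 output.
    let layers: [c_uint; 3] = [2, 3, 1];

    unsafe {
        let ann = fann_create_standard_array(layers.len() as c_uint, layers.as_ptr());
        assert!(!ann.is_null());

        // Pick activation functions for whole groups of neurons...
        fann_set_activation_function_hidden(
            ann,
            fann_activationfunc_enum::FANN_SIGMOID_SYMMETRIC,
        );
        fann_set_activation_function_output(ann, fann_activationfunc_enum::FANN_SIGMOID);

        // ...or for a single neuron (here: layer 1, neuron 0).
        fann_set_activation_function(ann, fann_activationfunc_enum::FANN_GAUSSIAN, 1, 0);

        // The steepness `s` that appears in the formulas below.
        fann_set_activation_steepness_hidden(ann, 0.5);

        fann_destroy(ann);
    }
}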

The functions below are described by their formulas, where:

  • x is the input to the activation function,

  • y is the output,

  • s is the steepness and

  • d is the derivative.

Variants

FANN_NONE

Neuron does not exist or does not have an activation function.

FANN_LINEAR

Linear activation function.

  • span: -inf < y < inf

  • y = x*s, d = 1*s

  • Can NOT be used in fixed point.

FANN_THRESHOLD

Threshold activation function.

  • x < 0 -> y = 0, x >= 0 -> y = 1

  • Can NOT be used during training.

FANN_THRESHOLD_SYMMETRIC

Symmetric threshold activation function.

  • x < 0 -> y = -1, x >= 0 -> y = 1

  • Can NOT be used during training.

FANN_SIGMOID

Sigmoid activation function (see the sketch after this list).

  • One of the most used activation functions.

  • span: 0 < y < 1

  • y = 1/(1 + exp(-2*s*x))

  • d = 2*s*y*(1 - y)
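
To make the x/y/s/d notation concrete, here is a small stand-alone sketch in plain Rust (no FFI; the helper name is made up for illustration) that evaluates the FANN_SIGMOID formulas above.

// Hypothetical helper mirroring the formulas above:
// y = 1/(1 + exp(-2*s*x)) and d = 2*s*y*(1 - y).
fn sigmoid(x: f32, s: f32) -> (f32, f32) {
    let y = 1.0 / (1.0 + (-2.0 * s * x).exp());
    let d = 2.0 * s * y * (1.0 - y);
    (y, d)
}

fn main() {
    // At x = 0 the output is exactly 0.5 and the slope is s/2.
    let (y, d) = sigmoid(0.0, 1.0);
    assert!((y - 0.5).abs() < 1e-6);
    assert!((d - 0.5).abs() < 1e-6);
}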

FANN_SIGMOID_STEPWISE

Stepwise linear approximation to sigmoid.

  • Faster than sigmoid but a bit less precise.

FANN_SIGMOID_SYMMETRIC

Symmetric sigmoid activation function, aka. tanh.

  • One of the most used activation functions.

  • span: -1 < y < 1

  • y = tanh(s*x) = 2/(1 + exp(-2*s*x)) - 1

  • d = s*(1-(y*y))

FANN_SIGMOID_SYMMETRIC_STEPWISE

Stepwise linear approximation to symmetric sigmoid.

  • Faster than symmetric sigmoid but a bit less precise.

FANN_GAUSSIAN

Gaussian activation function.

  • 0 when x = -inf, 1 when x = 0 and 0 when x = inf

  • span: 0 < y < 1

  • y = exp(-x*s*x*s)

  • d = -2*x*s*y*s

FANN_GAUSSIAN_SYMMETRIC

Symmetric gaussian activation function (see the sketch after this list).

  • -1 when x = -inf, 1 when x = 0 and -1 when x = inf

  • span: -1 < y < 1

  • y = exp(-x*s*x*s)*2-1

  • d = -2*x*s*(y+1)*s
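
The stand-alone sketch below (plain Rust, illustrative helper name) evaluates these formulas directly: the output peaks at 1 for x = 0 and falls towards -1 as |x| grows.

// Hypothetical helper mirroring the formulas above:
// y = exp(-x*s*x*s)*2 - 1 and d = -2*x*s*(y+1)*s.
fn gaussian_symmetric(x: f32, s: f32) -> (f32, f32) {
    let y = (-(x * s) * (x * s)).exp() * 2.0 - 1.0;
    let d = -2.0 * x * s * (y + 1.0) * s;
    (y, d)
}

fn main() {
    // Peak of 1 at the origin.
    assert!((gaussian_symmetric(0.0, 1.0).0 - 1.0).abs() < 1e-6);
    // Approaches -1 far from the origin.
    let (far, _) = gaussian_symmetric(10.0, 1.0);
    assert!((far + 1.0).abs() < 1e-6);
}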

FANN_GAUSSIAN_STEPWISE

Stepwise linear approximation to gaussian. Faster than gaussian but a bit less precise. NOT implemented yet.

FANN_ELLIOTT

Fast (sigmoid-like) activation function defined by David Elliott.

  • span: 0 < y < 1

  • y = ((x*s) / 2) / (1 + |x*s|) + 0.5

  • d = s*1/(2*(1+|x*s|)*(1+|x*s|))

FANN_ELLIOTT_SYMMETRIC

Fast (symmetric sigmoid-like) activation function defined by David Elliott (see the sketch after this list).

  • span: -1 < y < 1

  • y = (x*s) / (1 + |x*s|)

  • d = s*1/((1+|x*s|)*(1+|x*s|))
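
As a rough comparison, this stand-alone sketch (plain Rust, illustrative helper name) evaluates the symmetric Elliott formulas above next to tanh, which they approximate without calling exp().

// Hypothetical helper mirroring the formulas above:
// y = (x*s)/(1 + |x*s|) and d = s/((1 + |x*s|)*(1 + |x*s|)).
fn elliott_symmetric(x: f32, s: f32) -> (f32, f32) {
    let a = 1.0 + (x * s).abs();
    ((x * s) / a, s / (a * a))
}

fn main() {
    // Both curves stay inside (-1, 1) and pass through the origin.
    for &x in &[-4.0f32, -1.0, 0.0, 1.0, 4.0] {
        let (y, _d) = elliott_symmetric(x, 1.0);
        assert!(y > -1.0 && y < 1.0);
        println!("x = {:>4}, elliott = {:.3}, tanh = {:.3}", x, y, x.tanh());
    }
}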

FANN_LINEAR_PIECE

Bounded linear activation function.

  • span: 0 <= y <= 1

  • y = x*s, d = 1*s

FANN_LINEAR_PIECE_SYMMETRIC

Symmetric bounded linear activation function.

  • span: -1 <= y <= 1

  • y = x*s, d = 1*s

FANN_SIN_SYMMETRIC

Periodic sine activation function.

  • span: -1 <= y <= 1

  • y = sin(x*s)

  • d = s*cos(x*s)

FANN_COS_SYMMETRIC

Periodic cosine activation function.

  • span: -1 <= y <= 1

  • y = cos(x*s)

  • d = s*-sin(x*s)

FANN_SIN

Periodic sine activation function.

  • span: 0 <= y <= 1

  • y = sin(x*s)/2+0.5

  • d = s*cos(x*s)/2

FANN_COS

Periodic cosine activation function.

  • span: 0 <= y <= 1

  • y = cos(x*s)/2+0.5

  • d = s*-sin(x*s)/2

Trait Implementations

impl Copy for fann_activationfunc_enum

impl Clone for fann_activationfunc_enum

    fn clone(&self) -> fann_activationfunc_enum
    Returns a copy of the value.

    fn clone_from(&mut self, source: &Self)
    Performs copy-assignment from source.