Trait Activation

pub trait Activation {
    // Required methods
    fn activate(x: f64) -> f64;
    fn derivative(x: f64) -> f64;
}

An activation for a NeuralNet, consisting of the function itself and its ‘derivative’.

§Examples

The code below shows how to implement the ReLU activation:

use serde::{Serialize, Deserialize};

// The activation must be serializable and deserializable so that the network can be
// saved/loaded to/from files
#[derive(Serialize, Deserialize)]
struct Relu;

impl scholar::Activation for Relu {
    fn activate(x: f64) -> f64 {
        x.max(0.0)
    }

    fn derivative(x: f64) -> f64 {
        if x > 0.0 {
            1.0
        } else {
            0.0
        }
    }
}
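
Continuing the example above, the two associated functions can be called directly once the trait is in scope, which makes for a quick sanity check (a minimal sketch; the expected values follow from the ReLU definition):

use scholar::Activation;

assert_eq!(Relu::activate(3.5), 3.5);
assert_eq!(Relu::activate(-2.0), 0.0);
assert_eq!(Relu::derivative(3.5), 1.0);
assert_eq!(Relu::derivative(-2.0), 0.0);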

Required Methods§


fn activate(x: f64) -> f64

The activation function.


fn derivative(x: f64) -> f64

The ‘derivative’ of the activation function.

There is a small quirk with this function that arises when its mathematical definition ‘references’ the activate function of the same trait implementation. For example, the real derivative of the sigmoid (σ) function is:

σ(x) * (1 - σ(x))

When implementing this in code for a NeuralNet, however, you can simply drop these ‘references’. In a neural network the derivative is only ever evaluated on values that have already been passed through activate, so the x it receives is really σ(original input); substituting that into σ(x) * (1 - σ(x)) leaves just the outer expression. The derivative of the sigmoid thus becomes:

x * (1 - x)

which is what the real implementation looks like:

impl Activation for Sigmoid {
    ...

    fn derivative(x: f64) -> f64 {
        x * (1.0 - x)
    }
}
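
The same convention applies to any activation whose derivative can be written in terms of its output. As an illustration (not part of the crate, and assuming the same serde imports as the ReLU example above), a hyperbolic tangent activation would look like this, since the real derivative 1 - tanh²(input) simplifies once x is known to already be tanh of the input:

#[derive(Serialize, Deserialize)]
struct Tanh;

impl scholar::Activation for Tanh {
    fn activate(x: f64) -> f64 {
        x.tanh()
    }

    // `x` has already been passed through `activate`, so the real
    // derivative 1 - tanh^2(input) simplifies to 1 - x^2.
    fn derivative(x: f64) -> f64 {
        1.0 - x * x
    }
}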

Dyn Compatibility§

This trait is not dyn compatible, because its required methods are associated functions with no self receiver and therefore cannot be dispatched through a trait object.

In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.
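
Since the trait cannot be used behind dyn, code that needs to work with arbitrary activations uses generics instead. A minimal sketch (activate_all is a hypothetical helper, not part of the crate):

// Applies the chosen activation to every value in place.
fn activate_all<A: scholar::Activation>(values: &mut [f64]) {
    for v in values.iter_mut() {
        *v = A::activate(*v);
    }
}

// e.g. activate_all::<Relu>(&mut inputs);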

Implementors§