/// `ActivationFunction` is a type alias for a function that takes a slice of `f32` values
/// (the input to the activation function) and returns a `Vec<f32>` (the output of the activation function).
/// This type alias is used to define various activation functions in the neural network.
pub type ActivationFunction = fn(&[f32]) -> Vec<f32>;
/// `softmax` is an activation function that converts a vector of raw input values (often called logits)
/// into a probability distribution, where the sum of the output values equals 1.0. It is commonly used
/// in the output layer of neural networks for classification tasks.
///
/// # Arguments
/// * `input` - A slice of `f32` values representing the input to the softmax function (often logits).
///
/// # Returns
/// * A `Vec<f32>` where each element is a probability corresponding to each input value.
/// The probabilities are scaled such that their sum equals 1.0.
///
/// # Example
/// ```
/// let input = vec![1.0, 2.0, 3.0];
/// let output = softmax(&input);
/// assert_eq!(output.len(), input.len());
/// assert!((output.iter().sum::<f32>() - 1.0).abs() < 1e-6); // The sum of probabilities should be 1.0.
/// ```
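// A sketch of the missing body, consistent with the doc comment above. Subtracting
// the maximum input before exponentiating is a common numerical-stability trick;
// that choice is an assumption on my part, not taken from the source.

```rust
pub fn softmax(input: &[f32]) -> Vec<f32> {
    // Subtract the max so exp() never overflows for large logits.
    let max = input.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = input.iter().map(|&x| (x - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    // Normalize so the outputs sum to 1.0.
    exps.iter().map(|&e| e / sum).collect()
}
```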
/// `relu` is the Rectified Linear Unit (ReLU) activation function, which outputs the input value
/// if it is positive and zero otherwise. ReLU is one of the most commonly used activation functions
/// in neural networks, especially for hidden layers.
///
/// # Arguments
/// * `input` - A slice of `f32` values representing the input to the ReLU function.
///
/// # Returns
/// * A `Vec<f32>` where each element is the input value if it is greater than 0, and 0.0 otherwise.
///
/// # Example
/// ```
/// let input = vec![-1.0, 0.0, 1.0];
/// let output = relu(&input);
/// assert_eq!(output, vec![0.0, 0.0, 1.0]);
/// ```
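// A sketch of the missing body, matching the doc comment above: element-wise
// max(x, 0.0) over the input slice.

```rust
pub fn relu(input: &[f32]) -> Vec<f32> {
    // f32::max(0.0) clamps negative values to zero and passes positives through.
    input.iter().map(|&x| x.max(0.0)).collect()
}
```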
/// `linear` is a linear activation function that simply returns the input values as they are.
/// This function is commonly used in the output layer of a neural network, especially in regression tasks,
/// where no transformation of the output values is needed.
///
/// # Arguments
/// * `input` - A slice of `f32` values representing the input to the linear function.
///
/// # Returns
/// * A `Vec<f32>` that is identical to the input vector, as no transformation is applied.
///
/// # Example
/// ```
/// let input = vec![1.0, 2.0, 3.0];
/// let output = linear(&input);
/// assert_eq!(output, input);
/// ```
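// A sketch of the missing body, matching the doc comment above: the identity
// function, returning the input values unchanged as a new `Vec<f32>`.

```rust
pub fn linear(input: &[f32]) -> Vec<f32> {
    // Copy the slice into an owned vector; no transformation is applied.
    input.to_vec()
}
```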