Module leaf::layers::activation::sigmoid

Applies the nonlinear Log-Sigmoid function.

Non-linearity activation function: y = (1 + e^(-x))^(-1)
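Since Leaf is a Rust crate, here is a minimal standalone sketch of that formula; the function below is illustrative only and is not the crate's actual layer implementation.

```rust
// Illustrative sketch of the Sigmoid formula above (not Leaf's implementation).
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

fn main() {
    // Sigmoid squashes any real input into the open interval (0, 1).
    for x in [-4.0_f32, 0.0, 4.0].iter() {
        println!("sigmoid({}) = {}", x, sigmoid(*x));
    }
}
```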

A classic choice in neural networks, though you might consider using ReLU as an alternative.

ReLU, compared to Sigmoid:

  • reduces the likelihood of vanishing gradients
  • increases the likelihood of a more beneficial sparse representation
  • can be computed faster (see the sketch after this list)
  • is therefore the most popular activation function in DNNs as of this writing (2015).
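A rough sketch, independent of Leaf's own kernels, of why the first and third points hold: the Sigmoid gradient never exceeds 0.25, while ReLU is a single comparison whose gradient is 1 for positive inputs.

```rust
// Illustrative comparison (not Leaf code).
fn sigmoid(x: f32) -> f32 {
    1.0 / (1.0 + (-x).exp())
}

// Sigmoid derivative: s * (1 - s), with a maximum of 0.25 at x = 0,
// which is what shrinks gradients as layers stack up.
fn sigmoid_grad(x: f32) -> f32 {
    let s = sigmoid(x);
    s * (1.0 - s)
}

// ReLU: max(0, x); just a comparison, and its gradient is 1 for x > 0.
fn relu(x: f32) -> f32 {
    x.max(0.0)
}

fn main() {
    println!("sigmoid_grad(0) = {}", sigmoid_grad(0.0)); // 0.25
    println!("relu(-2) = {}, relu(3) = {}", relu(-2.0), relu(3.0)); // 0, 3
}
```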

Structs

Sigmoid

Sigmoid Activation Layer