LRN applies Local Response Normalization to an input blob. This operation performs a kind of “lateral inhibition” by normalizing over local input regions, where the normalization is applied across channels. This operator is typically used to normalize an unbounded activation (such as ReLU). The output shape is the same as the input shape. The `brew` module has a wrapper for this operator for use in a `ModelHelper` object.
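For example, the operator can be created and run directly with the Caffe2 Python API. The sketch below uses illustrative parameter values, and the optional second output (an intermediate scale blob) is an assumption here:

```python
import numpy as np
from caffe2.python import core, workspace

# Create an LRN operator. The second output is assumed to hold
# intermediate scale values; "size" is the channel window n.
op = core.CreateOperator(
    "LRN",
    ["X"],
    ["Y", "Y_scale"],
    size=5,        # window size n (channels summed over)
    alpha=1e-4,    # scaling parameter
    beta=0.75,     # exponent
    bias=1.0,      # additive offset
    order="NCHW",  # channel-first layout
)

# Feed a random NCHW input and run the operator once.
workspace.FeedBlob("X", np.random.randn(1, 8, 4, 4).astype(np.float32))
workspace.RunOperatorOnce(op)
print(workspace.FetchBlob("Y").shape)  # same as the input shape: (1, 8, 4, 4)
```

Inside a model-building function, the `brew` wrapper can be used instead, e.g. `brew.lrn(model, 'conv1', 'norm1', size=5, alpha=1e-4, beta=0.75)`.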

The formula for LRN is as follows:

$$b_{c} = a_{c}\left(bias + \frac{\alpha}{n}\sum_{c'=\max(0,\,c-n/2)}^{\min(N-1,\,c+n/2)} a_{c'}^{2}\right)^{-\beta}$$
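To make the formula concrete, here is a minimal NumPy reference sketch of the same computation over the channel axis (assuming integer division for n/2; the function name and defaults are illustrative, not Caffe2's implementation):

```python
import numpy as np

def lrn_reference(a, n=5, alpha=1e-4, beta=0.75, bias=1.0):
    """Apply the LRN formula across the channel axis of an NCHW array."""
    N = a.shape[1]  # number of channels
    b = np.empty_like(a)
    for c in range(N):
        # Channel window [max(0, c - n/2), min(N - 1, c + n/2)], inclusive.
        lo, hi = max(0, c - n // 2), min(N - 1, c + n // 2)
        sq_sum = (a[:, lo:hi + 1] ** 2).sum(axis=1)  # sum of squared activations
        b[:, c] = a[:, c] * (bias + (alpha / n) * sq_sum) ** (-beta)
    return b
```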

GitHub Links:

- https://github.com/pytorch/pytorch/blob/master/caffe2/operators/local_response_normalization_op.h
- https://github.com/pytorch/pytorch/blob/master/caffe2/operators/local_response_normalization_op.cc