ReLU layer
ReLU is the abbreviation of rectified linear unit. This is a layer of neurons that applies the non-saturating activation function $f(x) = \max(0, x)$. It increases the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer.
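As an illustrative sketch (assuming NumPy; the function name relu and the example feature map are hypothetical, not part of the cited sources), the activation is simply applied elementwise to the output of the convolution layer:

```python
import numpy as np

def relu(x):
    # Elementwise max(0, x): negative activations are zeroed,
    # positive activations pass through unchanged.
    return np.maximum(0, x)

# Example: a 2x2 feature map as produced by a convolution layer.
feature_map = np.array([[-1.5, 2.0],
                        [0.3, -0.7]])
print(relu(feature_map))
# [[0.  2. ]
#  [0.3 0. ]]
```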
Other functions are also used to increase nonlinearity, for example the saturating hyperbolic tangent $f(x) = \tanh(x)$, $f(x) = |\tanh(x)|$, and the sigmoid function $\sigma(x) = (1 + e^{-x})^{-1}$. Compared with tanh units, the advantage of ReLU is that it trains the neural network several times faster.[22]
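One way to see why the saturating functions slow training is to compare their gradients with ReLU's. The following minimal sketch (assuming NumPy; the helper names are hypothetical) shows that the tanh and sigmoid derivatives shrink toward zero for large-magnitude inputs, whereas ReLU's gradient stays at 1 for all positive inputs:

```python
import numpy as np

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2; approaches 0 as |x| grows (saturation).
    return 1.0 - np.tanh(x) ** 2

def sigmoid_grad(x):
    # d/dx sigma(x) = sigma(x) * (1 - sigma(x)); also saturates for large |x|.
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)

def relu_grad(x):
    # d/dx max(0, x) = 1 for x > 0, 0 otherwise: no saturation for positive inputs.
    return (x > 0).astype(float)

x = np.array([-5.0, 0.5, 5.0])
print(tanh_grad(x))     # approx. [0.0002 0.7864 0.0002]
print(sigmoid_grad(x))  # approx. [0.0066 0.2350 0.0066]
print(relu_grad(x))     # [0. 1. 1.]
```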