Through the use of an abstract and simplified model
of human neurons, it is possible to develop a neural simulator
capable of classifying, generalizing, and learning
to approximate functions [10]. One of the most
widely used neural learning models is the so-called Multilayer
Perceptron (MLP) trained with the Back-propagation learning
algorithm [25]. Several improved versions of the original
Back-propagation algorithm have been developed over the
past few years, and the RPROP algorithm [24] has become
an interesting choice among them.
The RPROP algorithm performs a direct adaptation
of the weight step (learning rate) based on local gradient
information. To achieve this, each weight has its own individual
update-value ∆ij, which solely determines the
size of the weight update. This adaptive update-value
evolves during the learning process according to its local
view of the error function E, following the
learning-rule [24]
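As an illustrative sketch of this adaptive scheme, the per-weight step adaptation can be written as follows. This is a simplified version (the iRprop- variant, which zeroes the gradient on a sign change instead of backtracking the weight); the function name, the vectorized form, and the constants' defaults are assumptions, with η+, η−, and the step bounds taken from the values commonly suggested for RPROP:

```python
import numpy as np

def rprop_step(w, grad, prev_grad, delta,
               eta_plus=1.2, eta_minus=0.5,
               delta_min=1e-6, delta_max=50.0):
    """One RPROP iteration for a weight vector w.

    Each weight keeps its own step size delta (the update-value
    Delta_ij); only the *sign* of the local gradient is used, not
    its magnitude.
    """
    sign_change = grad * prev_grad
    # gradient kept its sign: the step can safely grow
    delta = np.where(sign_change > 0,
                     np.minimum(delta * eta_plus, delta_max), delta)
    # gradient changed sign: the last step jumped over a local
    # minimum, so shrink the step
    delta = np.where(sign_change < 0,
                     np.maximum(delta * eta_minus, delta_min), delta)
    # iRprop- simplification: where the sign changed, zero the
    # stored gradient so no update is applied this iteration and
    # the step is not re-adapted next time
    grad = np.where(sign_change < 0, 0.0, grad)
    # move each weight against the sign of its own gradient
    w = w - np.sign(grad) * delta
    return w, grad, delta

# usage: minimize E(w) = w^2 (gradient 2w) from w = 5
w = np.array([5.0])
prev_grad = np.zeros_like(w)
delta = np.full_like(w, 0.1)
for _ in range(100):
    grad = 2.0 * w
    w, prev_grad, delta = rprop_step(w, grad, prev_grad, delta)
```

After a few iterations the individual step grows geometrically while the gradient sign is stable, then shrinks whenever the minimum is overshot, which is exactly the behaviour the learning-rule below formalizes for each ∆ij.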