It is clear that the learning speed of feedforward neural networks is in general far slower than required, and this has been a major bottleneck in their applications over the past decades. Two key reasons behind this may be: (1) slow gradient-based learning algorithms are extensively used to train the networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden-layer feedforward neural networks (SLFNs), which randomly chooses the hidden nodes and analytically determines the output weights of the SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. Experimental results on a few artificial and real benchmark function-approximation and classification problems, including very large and complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than popular conventional learning algorithms for feedforward neural networks.
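As a concrete illustration of the procedure the abstract describes (randomly assign the hidden-node parameters, compute the hidden-layer output matrix, then determine the output weights analytically), the following is a minimal NumPy sketch. The function names, the sigmoid activation, and the number of hidden nodes are illustrative assumptions, not details taken from the paper; the analytic step here uses the Moore-Penrose pseudoinverse, consistent with the standard ELM formulation.

```python
import numpy as np

def elm_fit(X, y, n_hidden=30, seed=None):
    """Sketch of ELM training: random hidden nodes, analytic output weights.
    X: (n_samples, n_features) inputs; y: (n_samples,) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Step 1: randomly assign input weights W and hidden biases b; these
    # are never tuned, which is what removes iterative gradient training.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Step 2: compute the hidden-layer output matrix H (sigmoid assumed here).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: solve for the output weights analytically,
    # beta = pinv(H) @ y (Moore-Penrose pseudoinverse).
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Tiny usage example on a synthetic 1-D function-approximation task.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X).ravel()
W, b, beta = elm_fit(X, y, n_hidden=30, seed=0)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```

Because the only trained parameters are found in a single linear least-squares solve, training cost is dominated by one pseudoinverse computation, which is the source of the speed advantage the abstract claims over iterative gradient-based methods.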