Much research has been directed at finding better ways of helping machines learn from examples. When
domain knowledge in a particular area is weak, solutions can be expensive, time-consuming, and even impossible
to derive using traditional programming techniques.
In such cases, neural networks can be used as tools to make reasonable solutions possible or good solutions
more economical. Such an automated solution is often more accurate than a hard-coded program because it
learns from actual data instead of relying on assumptions about the problem. It can often adapt as the domain
changes and typically takes less time to find a good solution than a programmer would. In addition, inductive
learning solutions may generalize well to unforeseen circumstances.
Radial Basis Function (RBF) networks [1][13][15] have received much attention recently because they
provide accurate generalization on a wide range of applications, yet can often be trained orders of magnitude
faster [7] than other models such as backpropagation neural networks [8] or genetic algorithms [9].
Radial basis function networks rely on a distance function to measure how different two input
vectors are (one presented to the network and the other stored in a hidden node). This distance function
is typically designed for numeric attributes only and is inappropriate for nominal (unordered symbolic)
attributes.
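As an illustration (not taken from the paper), the sketch below contrasts a standard numeric distance with a simple overlap metric on nominal values. The integer encoding of the color values is an arbitrary assumption made here purely to show why treating unordered symbols as numbers produces a spurious ordering.

```python
import math

def euclidean(x, y):
    """Numeric distance of the kind typically used by RBF hidden nodes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def overlap(x, y):
    """A simple nominal distance: 0 if the values match, 1 otherwise."""
    return sum(0 if a == b else 1 for a, b in zip(x, y))

# Nominal values arbitrarily encoded as integers: red=0, green=1, blue=2.
# Euclidean distance then implies blue is "farther" from red than green is,
# an ordering that has no meaning for unordered symbolic attributes.
print(euclidean([0], [2]))  # 2.0 -- spurious ordering
print(euclidean([0], [1]))  # 1.0
print(overlap([0], [2]))    # 1 -- every mismatch is equally distant
print(overlap([0], [1]))    # 1
```

Under the overlap metric all distinct symbol pairs are equidistant, which matches the semantics of nominal attributes but discards any similarity structure; this is one motivation for the more refined heterogeneous distance functions discussed later.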