One possible drawback of neural networks is that all attribute values must be encoded
in a standardized manner, taking values between 0 and 1, even for categorical
variables. Later, when we examine the details of the back-propagation algorithm, we
shall understand why this is necessary. For now, however, how does one go about
standardizing all the attribute values?
For continuous variables, this is not a problem, as we discussed in Chapter 2.
We may simply apply the min–max normalization:
$$
X^{*} = \frac{X - \min(X)}{\operatorname{range}(X)} = \frac{X - \min(X)}{\max(X) - \min(X)}
$$
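To make the formula concrete, here is a minimal Python sketch of min–max normalization; the function name min_max_normalize and the sample ages data are illustrative, not from the text:

```python
import numpy as np

def min_max_normalize(x, x_min=None, x_max=None):
    """Scale an array of continuous values to the interval [0, 1].

    x_min and x_max default to the observed minimum and maximum of x;
    in practice they would be fixed from the training data so that new
    records are scaled on the same basis.
    """
    x = np.asarray(x, dtype=float)
    if x_min is None:
        x_min = x.min()
    if x_max is None:
        x_max = x.max()
    # X* = (X - min(X)) / (max(X) - min(X))
    return (x - x_min) / (x_max - x_min)

# Example: normalizing a small set of ages
ages = [23, 35, 47, 59, 71]
print(min_max_normalize(ages))  # [0.   0.25 0.5  0.75 1.  ]
```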
Min–max normalization works well as long as the minimum and maximum values are known
and all potential new data values are bounded between them. Neural networks are somewhat robust