Neural network researchers have built forecasting and trading systems with training data spanning from one year [22,34] to sixteen years [39], with various training set sizes in between [16,26,29]. However, once researchers have obtained their training data, they typically use all of it to build the neural network forecasting model, without comparing the effect of data quantity on the quality of the resulting forecasting models. One of the few existing attempts to evaluate training set size effects was performed by Zhang and Hu [39], who made a single comparison of a 16-year training set against a 6-year training set. Their results support Box et al. [5] and others [11], who claim that larger training sets produce better forecasting models: the 16-year model outperformed the 6-year model.
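The kind of comparison Zhang and Hu performed can be sketched as a simple holdout experiment: train models on histories of different lengths and score them on the same held-out segment. The sketch below is purely illustrative, assuming a synthetic monthly series and a linear autoregressive model as a stand-in for a neural network; all names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly series: 20 "years" of 12 observations each.
n_years, per_year = 20, 12
t = np.arange(n_years * per_year)
series = np.sin(2 * np.pi * t / per_year) + 0.1 * rng.standard_normal(t.size)

def make_lagged(x, n_lags=3):
    """Build (X, y) pairs where each target is predicted from its n_lags predecessors."""
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

def holdout_mse(train, test, n_lags=3):
    """Fit a linear AR model on `train`; return one-step-ahead MSE on `test`."""
    X, y = make_lagged(train, n_lags)
    coef, *_ = np.linalg.lstsq(
        np.column_stack([X, np.ones(len(X))]), y, rcond=None
    )
    # Score on the held-out segment, feeding in actual lagged values.
    full = np.concatenate([train[-n_lags:], test])
    Xt, yt = make_lagged(full, n_lags)
    pred = np.column_stack([Xt, np.ones(len(Xt))]) @ coef
    return float(np.mean((pred - yt) ** 2))

holdout = series[-2 * per_year:]   # final two years reserved for testing
history = series[:-2 * per_year]

# Mimic the 6-year vs 16-year comparison: same model family, same holdout,
# only the amount of training history varies.
results = {
    years: holdout_mse(history[-years * per_year:], holdout)
    for years in (6, 16)
}
for years, mse in results.items():
    print(f"{years}-year training set -> holdout MSE: {mse:.4f}")
```

Holding the model family and test period fixed isolates the training set size as the only varying factor, which is the comparison the paragraph describes.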