In many data mining applications we are given
a set of training examples and asked to
construct a regression machine or a classifier
that has low prediction error or low error rate
on new examples, respectively. An important
issue is speed, especially when there are large
amounts of data. We show how both the
classification error rate and the prediction error
can be reduced by using boosting techniques to
implement committee machines. In our
implementation of committees built from either
classification trees or regression trees, we
demonstrate how speed can be traded off against
either error rate or prediction error.
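As a rough illustration of the boosted-committee idea described above (a sketch of generic AdaBoost, not the paper's specific implementation), the following pure-Python example builds a weighted committee of one-dimensional decision stumps; the dataset, function names, and round count are all illustrative assumptions.

```python
import math

def stump_predict(threshold, polarity, x):
    # A decision stump: the weakest tree, predicting +1 or -1 from one threshold.
    return polarity if x >= threshold else -polarity

def train_stump(xs, ys, weights):
    # Exhaustively pick the (threshold, polarity) pair with lowest weighted error.
    best = None
    for t in xs:
        for pol in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(t, pol, x) != y)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best  # (weighted error, threshold, polarity)

def adaboost(xs, ys, rounds=10):
    # Boosting: each round reweights the training set so the next committee
    # member concentrates on examples the committee so far gets wrong.
    n = len(xs)
    weights = [1.0 / n] * n
    committee = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        err, t, pol = train_stump(xs, ys, weights)
        if err >= 0.5:
            break  # no better than chance; stop adding members
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)  # member's committee vote weight
        committee.append((alpha, t, pol))
        # Upweight misclassified examples, downweight correct ones, renormalize.
        weights = [w * math.exp(-alpha * y * stump_predict(t, pol, x))
                   for x, y, w in zip(xs, ys, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return committee

def committee_predict(committee, x):
    # The committee machine: a weighted majority vote of the members.
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in committee)
    return 1 if score >= 0 else -1
```

On a toy set such as `xs = [1, 2, 3, 4, 5, 6]`, `ys = [1, 1, -1, -1, 1, 1]`, no single stump is correct everywhere, but a three-member committee classifies every training example correctly; stopping boosting after fewer rounds is one way to trade accuracy for speed.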