For classification, we examine both standard and confidence-rated boosting. Standard boosting algorithms
use weak hypotheses that are classifiers, that is, whose outputs are in the set {−1, +1}. Schapire and
Singer [21] considered boosting weak hypotheses whose outputs reflect not only a classification but also an associated confidence, encoded by a value in the interval [−1, +1]. They demonstrate that this so-called confidence-rated boosting can speed convergence of the composite classifier, though they did not find long-term accuracy to be significantly affected. In Section 5, we discuss the minor modifications needed for
LPBoost to perform confidence-rated boosting.
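To make the distinction concrete, the following is a minimal sketch (not part of the algorithm developed here) contrasting a standard weak hypothesis, whose output lies in {−1, +1}, with a confidence-rated one, whose output lies in [−1, +1]; the `tanh` squashing and the function names are illustrative assumptions, not taken from [21].

```python
import numpy as np

def stump_hard(x, threshold):
    # Standard weak hypothesis: a classifier with output in {-1, +1}.
    return np.where(x > threshold, 1.0, -1.0)

def stump_confidence(x, threshold, scale=4.0):
    # Confidence-rated weak hypothesis: output in [-1, +1], where the
    # sign gives the predicted label and the magnitude encodes confidence.
    # (The tanh of the distance to the threshold is an illustrative choice.)
    return np.tanh(scale * (x - threshold))

def edge(h_out, y, d):
    # Weighted edge sum_i d_i * y_i * h(x_i): the quantity boosting uses
    # to assess a weak hypothesis under the current example weights d.
    return float(np.dot(d, y * h_out))
```

In both cases the boosting machinery consumes the weak hypothesis through the same weighted edge, so accommodating confidence-rated outputs requires only relaxing the output set from {−1, +1} to [−1, +1].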