We have shown that LP formulations of boosting are attractive both theoretically, in terms of generalization error bounds, and computationally, via column generation. The LPBoost algorithm can be applied to any boosting problem formulated as an LP. We examined algorithms based on the 1-norm soft margin cost functions used for support vector machines. A generalization error bound was found for the classification case.
The LP optimality conditions allowed us to provide explanations for how the methods work. In classification,
the dual variables act as misclassification costs. The optimal ensemble consists of a linear combination of
weak hypotheses that work best under the worst possible choice of misclassification costs. This explanation
is closely related to that of [8]. For regression, as discussed in the Barrier Boosting approach to the same formulation [17], the dual multipliers act like error residuals to be used in a regularized least squares problem.
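To illustrate this interplay between the dual variables and column generation in the classification case, the sketch below implements a minimal version of the loop in Python. It is an illustration under assumptions, not the paper's implementation: the weak learner is a hypothetical callable `weak_learner(X, y, u)` that returns a hypothesis `h` (itself callable) approximately maximizing the weighted edge, the restricted dual LP is solved with `scipy.optimize.linprog`, and the symbols `u`, `beta`, and `D` follow the 1-norm soft margin notation above.

```python
# Hedged sketch of LPBoost-style column generation for classification.
# Assumed interface: weak_learner(X, y, u) -> callable h with h(X) in {-1, +1}^m.
import numpy as np
from scipy.optimize import linprog

def lpboost(X, y, weak_learner, D=0.1, max_iters=100, eps=1e-6):
    m = len(y)
    u = np.full(m, 1.0 / m)   # dual variables: misclassification costs
    beta = 0.0
    H = []                    # rows of y_i * h_j(x_i), one per accepted hypothesis
    res = None
    for _ in range(max_iters):
        h = weak_learner(X, y, u)            # pricing: best hypothesis under costs u
        edge = y * h(X)
        if np.dot(u, edge) <= beta + eps:    # no column prices out: restricted LP optimal
            break
        H.append(edge)
        # Restricted dual LP over variables (u_1, ..., u_m, beta):
        #   min beta  s.t.  sum_i u_i y_i h_j(x_i) <= beta for each stored j,
        #                   sum_i u_i = 1,  0 <= u_i <= D.
        c = np.r_[np.zeros(m), 1.0]
        A_ub = np.column_stack([np.array(H), -np.ones(len(H))])
        A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(H)),
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0.0, D)] * m + [(None, None)])
        u, beta = res.x[:m], res.x[-1]
    # Ensemble weights a_j are the multipliers of the "<= beta" constraints;
    # the sign convention of the reported marginals depends on the solver.
    a = np.abs(res.ineqlin.marginals) if res is not None else np.array([])
    return H, a, u
```

In this sketch the costs `u` are re-optimized after every added column, so the weak learner is always asked for the hypothesis that performs best under the current worst-case choice of misclassification costs, which is exactly the game-theoretic reading given above.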
We demonstrated the ease of adaptation to other boosting problems by examining the confidence-rated
and regression cases.