In this paper, we propose to study the minimizer of the empirical
error in the compressed hypothesis space rather than in the original
hypothesis space. In recent years, dimension reduction and random
projections have received considerable interest in various learning
areas. Zhou, Lafferty, and Wasserman (2007) proposed
to use compressed linear regression, in which the data set Y
is compressed by multiplication with a matrix A that satisfies
the ‘‘Restricted Isometry Property’’ in a linear regression model
Y = Xβ + ϵ, where β is the coefficient vector and ϵ is noise. For the purpose
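The idea of compressed regression can be sketched as follows. This is a minimal illustration, not the estimator analyzed by Zhou, Lafferty, and Wasserman (2007): we assume a Gaussian random projection A (which satisfies the Restricted Isometry Property with high probability for suitable dimensions) and simply solve ordinary least squares on the compressed pair (AX, AY); all dimensions and noise levels here are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, m = 200, 5, 40          # samples, features, compressed dimension (m < n)
X = rng.standard_normal((n, p))
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
Y = X @ beta + 0.1 * rng.standard_normal(n)

# Random Gaussian projection, scaled so columns have unit expected norm.
# Such matrices satisfy the Restricted Isometry Property with high
# probability when m is large enough relative to the sparsity level.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Compress both sides of Y = X beta + eps and solve least squares
# in the compressed space: A Y ~ (A X) beta.
beta_hat, *_ = np.linalg.lstsq(A @ X, A @ Y, rcond=None)
print(np.round(beta_hat, 2))
```

Because the projection approximately preserves inner products, the compressed least-squares solution stays close to the coefficients that generated the data, while the regression is solved on m rows instead of n.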
of classification, Calderbank, Jafarpour, and Schapire (2010)
studied an SVM algorithm in a compressed space and showed that
their algorithm has good generalization properties. They also gave