where we write the products of inner products in the objective as kernel entries, $\langle \phi(x_i), \phi(x_j) \rangle \langle y_i, y_j \rangle = K^{\phi}_{ij} K^{y}_{ij}$, where $K^{\phi}_{ij}$ and $K^{y}_{ij}$ stand for the elements of the kernel matrices for the feature vectors and for the label vectors, respectively. Hence, the vector labels are kernelized as well. The synthesized kernel is the element-wise product of the input and the output kernels, an operation that preserves positive semi-definiteness.
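As a concrete illustration (a minimal sketch, not the authors' code; the particular kernel choices and the toy data are assumptions), the synthesized kernel can be built as the Hadamard product of an input Gram matrix and a label Gram matrix, and its positive semi-definiteness checked numerically:

import numpy as np

def rbf_kernel(X, gamma=0.5):
    # Gram matrix K_phi for the feature vectors under an (assumed) RBF kernel.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def linear_kernel(Y):
    # Gram matrix K_y for the label vectors (assumed linear kernel on labels).
    return Y @ Y.T

# Toy data: 4 examples, 3 features, 2-dimensional vector labels.
X = np.array([[0.1, 0.2, 0.3],
              [0.9, 0.8, 0.7],
              [0.4, 0.5, 0.6],
              [0.2, 0.1, 0.0]])
Y = np.array([[ 1., -1.],
              [-1.,  1.],
              [ 1.,  1.],
              [-1., -1.]])

K_phi = rbf_kernel(X)
K_y = linear_kernel(Y)

# Synthesized kernel: element-wise (Hadamard) product of input and output kernels.
K = K_phi * K_y

# By the Schur product theorem, the Hadamard product of two PSD matrices is PSD,
# so the smallest eigenvalue of K should be (numerically) nonnegative.
print(np.linalg.eigvalsh(K).min() >= -1e-10)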
The main point to be noted in the above formulation (1) is the set of constraint equations
$\langle y_i, W\phi(x_i) \rangle \ge 1 - \xi_i,\ i = 1, \ldots, m; \qquad \xi_i \ge 0,\ i = 1, \ldots, m.$
Here, when we project $W\phi(x_i)$ onto $y_i$, the constraint forces the resulting value to be at least $1 - \xi_i$; with $\xi_i \ge 0$, only values falling below one are penalized, while values above one are left unrestricted. There seems to be no compelling reason for such a one-sided restriction. So if we allow $\langle y_i, W\phi(x_i) \rangle$ to take values around 1 (on both sides), the nonnegativity restriction on $\xi_i$ goes out.
Further, the inequality constraint $\langle y_i, W\phi(x_i) \rangle \ge 1 - \xi_i$, $i = 1, \ldots, m$, becomes the equality constraint $\langle y_i, W\phi(x_i) \rangle = 1 - \xi_i$, $i = 1, \ldots, m$.
The above change in turn necessitates the 1-norm minimization term $C e^{T}\xi$ in the objective function to take the two-norm form $\tfrac{1}{2} C \xi^{T}\xi$.
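Putting the two modifications together, and assuming formulation (1) carries the usual $\tfrac{1}{2}\lVert W \rVert^{2}$ regularizer, the modified problem takes a least-squares-SVM-like shape; the following is only a sketch assembled from the pieces above, not the exact statement of (4):

\[
\min_{W,\,\xi}\ \ \tfrac{1}{2}\lVert W \rVert^{2} + \tfrac{C}{2}\,\xi^{T}\xi
\qquad \text{subject to} \qquad
\langle y_i,\, W\phi(x_i) \rangle = 1 - \xi_i, \quad i = 1, \ldots, m,
\]

with no sign restriction on the $\xi_i$.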
This finally leads to the formulation given below (4), which can basically be thought of as an extension of the formulation given
by Mallat (1998) for a two-class SVM.