Here $k(x_i, x_j) = \phi(x_i)^{\top} \phi(x_j)$ is a kernel function that computes the dot product of instances mapped into a higher-dimensional feature space $F$ via the mapping $\phi$. The $\alpha_i$ are the weights associated with the training instances. All instances with nonzero weights are “support vectors”. They determine the position of the optimal SVM hyperplane, $h(C, \langle I^{+}, I^{-} \rangle)$, which is given by: