Support Vector Machines have recently gained popularity for their performance
and efficiency in many settings. SVMs have also shown promising results
in RS. Kang and Yoo [46], for instance, report on an experimental study that aims
at selecting the best preprocessing technique for predicting missing values for an
SVM-based RS. In particular, they use SVD and Support Vector Regression. The
Support Vector Machine RS is built by first binarizing the 80 levels of available user
preference data. They experiment with several settings and report best results for a
threshold of 32 – i.e. a value of 32 and less is classified as prefer and a higher value
as do not prefer. The user id is used as the class label and the positive and negative
values are expressed as preference values 1 and 2.
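
To make the setup concrete, the following is a minimal sketch of such threshold-based binarization followed by SVM classification, assuming a scikit-learn interface and synthetic data; the feature representation, threshold handling, and variable names are illustrative assumptions and not taken from Kang and Yoo's implementation.

```python
# Minimal sketch (not Kang and Yoo's exact pipeline): binarize an 80-level
# preference scale at a threshold of 32 and train an SVM classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: 200 items with 10-dimensional feature vectors and a raw
# preference level on the 80-level scale (1..80).
item_features = rng.normal(size=(200, 10))
raw_preferences = rng.integers(1, 81, size=200)

# Binarize: a value of 32 or less is "prefer" (class 1), anything higher is
# "do not prefer" (class 2), mirroring the 1/2 encoding described above.
labels = np.where(raw_preferences <= 32, 1, 2)

# Fit a standard SVM classifier on the binarized preference data.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(item_features, labels)

# Predict the preference class for a few (here: reused) items.
print(clf.predict(item_features[:5]))
```

In the actual study, the classifier's inputs would come from the preprocessed user preference matrix (e.g., after the SVD- or Support Vector Regression-based imputation of missing values mentioned above) rather than synthetic features.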