In such systems, the similarities between users are obtained using multi-criteria
ratings, and the rest of the recommendation process can be the same as in single-criterion rating systems. The next step is, for a given user, to find a set of neighbors
with the highest similarity values and to predict the user's unknown overall ratings
based on the neighbors' ratings. Therefore, these similarity-based approaches are applicable only to neighborhood-based collaborative filtering recommendation techniques that need to compute the similarity between users (or items).
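The neighborhood step described above can be sketched in a few lines. This is a minimal sketch under assumptions: the data layout and function names are hypothetical, and the similarity values are assumed to have already been computed from multi-criteria ratings by one of the methods discussed below.

```python
# Hypothetical neighborhood-based prediction: estimate a user's unknown
# overall rating as a similarity-weighted average over the k most similar
# users who rated the item. `similarity` is assumed precomputed.

def predict_overall(user, item, overall, similarity, k=2):
    """Predict `user`'s overall rating for `item` from the k nearest
    neighbors (by similarity) among users who rated the item."""
    neighbors = [(similarity[user][v], ratings[item])
                 for v, ratings in overall.items()
                 if v != user and item in ratings]
    neighbors.sort(reverse=True)            # highest similarity first
    top = neighbors[:k]
    denom = sum(abs(s) for s, _ in top)
    if denom == 0:
        return None                         # no usable neighbors
    return sum(s * r for s, r in top) / denom

# Illustrative toy data (not from the cited studies)
overall = {"u1": {"i1": 4, "i2": 5}, "u2": {"i1": 3}, "u3": {"i1": 5}}
similarity = {"u2": {"u1": 0.9, "u3": 0.4}}
print(predict_overall("u2", "i2", overall, similarity))  # → 5.0
```

Only u1 has rated i2 here, so the prediction reduces to u1's rating; with more neighbors it becomes a weighted blend.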
In summary, multi-criteria ratings can be used to compute the similarity between
two users in the following two ways [2]: (i) by aggregating similarity values that
are calculated separately on each criterion into a single similarity, and (ii) by calculating the distance between multi-criteria ratings directly in the multi-dimensional
space. Empirical results on a small-scale Yahoo! Movies dataset show that both
heuristic approaches outperform the corresponding traditional single-rating collaborative filtering technique (i.e., one that uses only single overall ratings) by up to 3.8%
in terms of the precision-in-top-N metric, which represents the percentage of truly high
overall ratings among the N items that the system predicted to be most relevant
for each user [2]. The improvements in precision depend on many parameters of the collaborative filtering technique, such as the neighborhood size and the number N of top-N recommendations. Furthermore, as suggested by Manouselis and Costopoulou [49], these approaches can be extended by computing similarities using
not only the known rating information, but also importance weights for each criterion.
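The two similarity strategies, optionally extended with per-criterion importance weights, might be sketched as follows. The choice of Pearson correlation for (i) and Euclidean distance for (ii), along with all names and toy data, are illustrative assumptions rather than the exact formulations of [2] or [49].

```python
import math

# Each rating is a vector of criterion scores; users are dicts of item -> tuple.

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den if den else 0.0

def sim_aggregate(u, v, weights=None):
    """(i) Compute a similarity separately per criterion, then aggregate
    (here: a weighted average) into a single similarity value."""
    n_criteria = len(next(iter(u.values())))
    weights = weights or [1.0] * n_criteria
    items = sorted(set(u) & set(v))          # co-rated items only
    sims = [pearson([u[i][c] for i in items], [v[i][c] for i in items])
            for c in range(n_criteria)]
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

def sim_distance(u, v, weights=None):
    """(ii) Measure the distance between multi-criteria rating vectors
    directly in the multi-dimensional space, then map it to (0, 1]."""
    n_criteria = len(next(iter(u.values())))
    weights = weights or [1.0] * n_criteria
    items = sorted(set(u) & set(v))
    d = sum(math.sqrt(sum(w * (u[i][c] - v[i][c]) ** 2
                          for c, w in enumerate(weights)))
            for i in items) / len(items)     # mean weighted Euclidean distance
    return 1.0 / (1.0 + d)                   # smaller distance -> higher similarity

# Toy ratings: item -> (story, acting, visuals) on a 1-5 scale
u = {"i1": (4, 5, 3), "i2": (2, 3, 1)}
v = {"i1": (5, 5, 4), "i2": (1, 2, 2)}
print(sim_aggregate(u, v), sim_distance(u, v))
```

Passing a `weights` list to either function realizes the extension of [49] in this sketch: criteria a user (or the system) considers more important contribute more to the similarity.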
The latter approaches were evaluated in an online application that recommends e-markets to users based on their multi-criteria evaluations of several e-markets; in these e-markets, multiple buyers and sellers can access and exchange information about prices and product offerings. Among the proposed approaches, the similarity-per-priority algorithm using Euclidean distance
performed best in terms of mean absolute
error (MAE) (0.235 on a scale of 1 to 7) while retaining fairly high coverage (93% of
items can be recommended to users), compared to non-personalized algorithms,
such as the arithmetic mean and random, which produce higher MAE (0.718 and 2.063,
respectively) with 100% coverage [49].
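For reference, the two reported metrics are simple to state in code. The predictions below are invented for illustration and are not taken from [49].

```python
# MAE measures average prediction error; coverage measures the fraction of
# requested predictions the system could actually produce.

def mae(pairs):
    """Mean absolute error over (predicted, actual) rating pairs."""
    return sum(abs(p - a) for p, a in pairs) / len(pairs)

def coverage(predictions):
    """Fraction of prediction requests that succeeded (None marks an
    item for which no prediction was possible)."""
    return sum(p is not None for p in predictions) / len(predictions)

pairs = [(5.2, 5), (3.9, 4), (6.5, 7)]       # invented ratings on a 1-7 scale
print(mae(pairs))                            # ≈ 0.267
print(coverage([5.2, 3.9, None, 6.5]))       # → 0.75
```

A lower MAE with only slightly reduced coverage, as reported for the similarity-per-priority algorithm, is the usual trade-off: the personalized method abstains on items it cannot score, while non-personalized baselines always answer.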