Theoretical results are also emerging to support the robustness of particular
classes of model-based algorithms. In [35], a manipulation-resistant class of collaborative filtering algorithms is proposed and proved robust, in the sense that the effect of any attack on the ratings provided to an end-user diminishes as the number of products rated by that user grows. Here, the attack's effect is quantified by the average distortion it introduces into the ratings provided to the user. The class of algorithms for which the proof holds is referred to as linear probabilistic collaborative filtering. In essence, the system is modelled as outputting a probability mass function (PMF) over the possible ratings; in a linear algorithm, the PMF of the attacked system can be written as a weighted sum of the PMF obtained by considering only genuine profiles and the PMF obtained by considering only attack profiles. Robustness follows because, as the user supplies more ratings, the contribution of the genuine PMF to the overall PMF comes to dominate.
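In schematic form (using an assumed mixing weight $\alpha$ for illustration, not the notation of [35]), linearity means the PMF delivered to the user can be written as

$P_{\text{attacked}}(r) = \alpha \, P_{\text{genuine}}(r) + (1 - \alpha) \, P_{\text{attack}}(r)$,

and robustness corresponds to $\alpha \to 1$ as the number of ratings supplied by the user grows.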
authors show that, while nearest neighbour algorithms are not linear in this sense,
some well-known model-based algorithms, such as the naive Bayes algorithm, are
asymptotically linear.
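A minimal numerical sketch of this dominance effect follows. The mixture weights used here, proportional to the number of ratings the user has supplied and to the attack size, are simplifying assumptions chosen for illustration; they are not the construction of [35].

```python
# Sketch: in a linear probabilistic recommender, the predicted rating
# distribution is a weighted mixture of a "genuine" PMF and an "attack"
# PMF. As the user's rating count grows, the genuine PMF dominates and
# the attack's distortion shrinks. Weights are hypothetical.

def mix_pmfs(genuine_pmf, attack_pmf, n_user_ratings, attack_size):
    """Weighted sum of the two PMFs (weights are illustrative)."""
    w_genuine = n_user_ratings
    w_attack = attack_size
    total = w_genuine + w_attack
    return [(w_genuine * g + w_attack * a) / total
            for g, a in zip(genuine_pmf, attack_pmf)]

def distortion(p, q):
    """Total variation distance, used as a proxy for average distortion."""
    return 0.5 * sum(abs(x - y) for x, y in zip(p, q))

genuine = [0.1, 0.2, 0.4, 0.2, 0.1]   # PMF over ratings 1..5
attack  = [0.0, 0.0, 0.0, 0.0, 1.0]   # attack pushes everything to rating 5

# Distortion of the attacked output falls as the user rates more products.
for n in (1, 10, 100, 1000):
    attacked = mix_pmfs(genuine, attack, n_user_ratings=n, attack_size=5)
    print(n, round(distortion(attacked, genuine), 4))
```

With a fixed attack size, the genuine weight grows linearly in the number of user ratings, so the distortion decays toward zero, mirroring the robustness argument above.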