in interacting with recommender systems, and in some cases, those purposes may
be counter to those of the system owner or those of the majority of its user population. To cite a well-known example, the Google search engine finds itself engaging
in more or less continual combat against those who seek to promote their sites by
“gaming” its retrieval algorithm.
In search engine spam, the goal for an attacker is to make the promoted page
“look like” a good answer to a query in all respects that Google cares about. In the
case of collaborative recommendation, the goal for an adversary is to make a par-
ticular product or item look like a good recommendation for a particular user (or
maybe all users) when really it is not. Alternatively, the attacker might seek to pre-
vent a particular product from being recommended when really it is a good choice.
If we assume that a collaborative system makes its recommendations purely on the
basis of user profiles, then it is clear what an attacker must do – add user profiles that
push the recommendation algorithm to produce the desired effect. A single profile
would rarely have this effect, and in any case, fielded systems tend to avoid making
predictions based on only a single neighbor. What an attacker really needs to do is
to create a large number of pseudonymous profiles designed to bias the system’s
predictions. Site owners try to make this relatively costly, but there is an inherent
tension between policing the input of a collaborative system and making sure that
users are not discouraged from entering the data that the algorithm needs to do its
work. The possibility of designing user rating profiles to deliberately manipulate the
recommendation output of a collaborative filtering system was first raised in [24].
Since then, research has focused on attack strategies, on detection strategies to combat such attacks, and on recommendation algorithms that are inherently robust against attack.
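To make the mechanics of profile injection concrete, the following sketch shows how a handful of injected profiles can shift the prediction of a simple user-based k-nearest-neighbor recommender toward a pushed item. It is a toy illustration only: the rating data, the predict helper, and the attack profiles are assumed for the example and are not drawn from any fielded system or from the studies cited above.

# Toy illustration of profile injection ("shilling") against a simple
# user-based k-nearest-neighbor predictor. All data and helper names
# below are illustrative assumptions, not taken from any real system.

from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity over the items two profiles rate in common."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    num = sum(a[i] * b[i] for i in common)
    den = sqrt(sum(a[i] ** 2 for i in common)) * sqrt(sum(b[i] ** 2 for i in common))
    return num / den if den else 0.0

def predict(profiles, target_user, target_item, k=3):
    """Predict target_user's rating for target_item from the k most similar
    neighbors who have rated that item (similarity-weighted average)."""
    neighbors = [
        (cosine_sim(profiles[target_user], p), p[target_item])
        for u, p in profiles.items()
        if u != target_user and target_item in p
    ]
    neighbors.sort(reverse=True)
    top = neighbors[:k]
    weight = sum(s for s, _ in top)
    return sum(s * r for s, r in top) / weight if weight else None

# Genuine users rate the target item "i4" poorly.
profiles = {
    "alice": {"i1": 5, "i2": 3, "i3": 4},               # target user, has not rated i4
    "u1": {"i1": 4, "i2": 3, "i3": 4, "i4": 2},
    "u2": {"i1": 5, "i2": 2, "i3": 5, "i4": 1},
    "u3": {"i1": 3, "i2": 4, "i3": 3, "i4": 2},
}

print("before attack:", predict(profiles, "alice", "i4"))   # low prediction (~1.7)

# The attacker injects pseudonymous profiles that mimic typical ratings on
# "filler" items and give the pushed item the maximum rating, so they become
# the target user's nearest neighbors.
for n in range(5):
    profiles[f"attack_{n}"] = {"i1": 5, "i2": 3, "i3": 4, "i4": 5}

print("after attack:", predict(profiles, "alice", "i4"))    # prediction pushed to 5

A single injected profile would be outvoted by the genuine neighbors, which is why, as noted above, an effective attack requires many such profiles rather than one.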