25.7 Conclusion
Collaborative recommender systems are meant to be adaptive – users add their preferences to these systems and their output changes accordingly. Robustness in this
context must therefore mean something different from the classical computer science sense of
continuing to function in the face of abnormalities or errors. Our goal is
to have systems that adapt, but that do not present an attractive target to an attacker.
An attacker wishing to bias the output of a robust recommender system would have
to make his attack sufficiently subtle that it does not trigger the suspicion of an attack detector, sufficiently small that it does not stand out from the normal pattern of
new user enrollment, and sufficiently close to real user distribution patterns that it
is not susceptible to being separated out by dimensionality reduction. If this proves
a difficult target to hit and if the payoff for attacks can be sufficiently limited, the
attacker may not find the impact of his attack sufficiently large relative to the effort
required to produce it. This is the best one can hope for in an adversarial arena.
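To make the first of these constraints concrete, the sketch below (in Python) shows the kind of per-profile statistic an attack detector might compute, in the spirit of measures such as Rating Deviation from Mean Agreement (RDMA) from the attack-detection literature. The dictionary-based data layout and the threshold value are illustrative assumptions only, not part of any particular deployed system.

    def rdma(profile, item_means, item_counts):
        # profile: {item_id: rating}; item_means / item_counts: per-item statistics
        score = 0.0
        for item, rating in profile.items():
            # Deviation from the item mean, discounted by how often the item is rated,
            # so that unusual ratings on rarely rated items stand out most.
            score += abs(rating - item_means[item]) / item_counts[item]
        return score / len(profile)

    def looks_suspicious(profile, item_means, item_counts, threshold=0.5):
        # The threshold is a placeholder; in practice it would be tuned on labelled data.
        return rdma(profile, item_means, item_counts) > threshold

A profile with a high score rates many items far from their means, which is characteristic of the standard attack models; an attacker forced to keep such a score low is already constrained in how much bias the attack can inject.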
It is difficult to say how close we have come to this ideal. If an attacker is aware
that such detection strategies are being applied, then the attack can be modified to
avoid detection. For example, [23] shows that if the attacker is aware of the criteria used to decide whether attack profiles exist in a user's neighbourhood, then
the attacker can construct profiles which, although somewhat less effective than the
standard attacks, can circumvent detection. In [34] the effectiveness of various types
of attack profile obfuscation is evaluated. The general finding is that obfuscated attacks are not much less effective than optimal ones and are much harder to detect. More
research is needed in this area.
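As a rough illustration of what obfuscation can mean in practice, the sketch below starts from a standard average-attack profile and adds noise to the filler ratings so that the profile deviates from the textbook attack signature. The noise scale, the 1–5 rating range, and the function signatures are assumptions made for this example, not the specific obfuscation schemes evaluated in [34].

    import random

    def average_attack_profile(target, filler_items, item_means, r_max=5.0):
        # Standard average attack: filler items rated at their means, target pushed to the maximum.
        profile = {item: item_means[item] for item in filler_items}
        profile[target] = r_max
        return profile

    def obfuscate(profile, target, sigma=0.5, r_min=1.0, r_max=5.0):
        # Noise-injection obfuscation: jitter the filler ratings, keep the push on the target.
        noisy = {}
        for item, rating in profile.items():
            if item == target:
                noisy[item] = rating
            else:
                jittered = rating + random.gauss(0.0, sigma)
                noisy[item] = min(r_max, max(r_min, jittered))
        return noisy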
Similar issues apply in the context of attack-resistant recommendation algorithms. While model-based algorithms show robustness to attacks that are effective
against memory-based algorithms, it is possible to conceive of new attacks that target
the model-based algorithms themselves. [31], for example, shows that association-rule-based recommendation is vulnerable to segment attacks.
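A segment attack of the kind referred to above is easy to express; the sketch below builds a single attack profile, with the rating scale and the exact signature treated as illustrative assumptions rather than a prescription from [31].

    def segment_attack_profile(target, segment_items, filler_items, r_max=5.0, r_min=1.0):
        # Segment items (chosen to appeal to the intended audience) and the target
        # item get the maximum rating; filler items get the minimum, so the target
        # is pushed specifically towards users who like the segment.
        profile = {item: r_max for item in segment_items}
        profile.update({item: r_min for item in filler_items})
        profile[target] = r_max
        return profile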
Another way to view the problem is as a game between system designer and
attacker. For each system that the designer creates, an optimal attack against it can be
formulated by the attacker, which in turn requires another response from the designer,
and so on. What we would like to see is that there are diminishing returns for the attacker,
so that each iteration of defence makes attacking more expensive and less effective.
One benefit of a detection strategy is that a system with detection cannot be more
vulnerable to attack than the original system, since, in the worst case, the attacks go
undetected and the system behaves as it would have without detection. We do not yet know whether the robust algorithms that have been proposed,
such as RMF, have some as-yet-undiscovered flaw that could make them vulnerable
to a sophisticated attack, perhaps even more vulnerable than the algorithms that they
replace.
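For reference, the idea behind robust matrix factorization can be conveyed in a few lines: replace the squared error of ordinary matrix factorization with a bounded-influence loss, such as the Huber loss, so that a handful of extreme attack ratings cannot dominate the learned factors. The stochastic-gradient sketch below is a simplified illustration under assumed hyperparameters, not the published RMF algorithm itself.

    import numpy as np

    def huber_grad(err, delta=1.0):
        # Quadratic near zero, clipped (linear) for large residuals, limiting the
        # influence any single rating can exert on the factors.
        return err if abs(err) <= delta else delta * np.sign(err)

    def robust_mf(ratings, n_users, n_items, k=10, lr=0.01, reg=0.05, epochs=20):
        # ratings: iterable of (user_index, item_index, rating) triples
        rng = np.random.default_rng(0)
        P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
        Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]
                g = huber_grad(err)          # clipped residual in place of raw error
                pu = P[u].copy()
                P[u] += lr * (g * Q[i] - reg * P[u])
                Q[i] += lr * (g * pu - reg * Q[i])
        return P, Q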