Our Slope One algorithms work on the intuitive principle
of a “popularity differential” between items for users.
In a pairwise fashion, we determine how much better one
item is liked than another. One way to measure this differential
is to subtract the average rating of one item from that of the other.
In turn, this difference can be used to predict another user’s
rating of one of those items, given their rating of the other.
Consider two users A and B, two items I and J and Fig. 1.
User A gave item I a rating of 1, whereas user B gave it a
rating of 2, while user A gave item J a rating of 1.5. We observe
that item J is rated higher than item I by 1.5 − 1 = 0.5
points; thus we predict that user B will give item J a
rating of 2 + 0.5 = 2.5. We call user B the predictee user and
item J the predictee item. Many such differentials exist in a
training set for each unknown rating, and we take an average
of these differentials. The family of slope one schemes presented
here arises from the three ways we select the relevant
differentials to arrive at a single prediction.
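The basic prediction rule described above can be sketched as follows; this is a minimal illustration of the averaging of pairwise differentials, with function and variable names chosen here for exposition rather than taken from the paper:

```python
def slope_one_predict(ratings, user, target_item):
    """Predict `user`'s rating of `target_item` from pairwise
    average differentials. `ratings` maps user -> {item: rating}."""
    predictions = []
    for other_item, user_rating in ratings[user].items():
        if other_item == target_item:
            continue
        # Average differential (target_item - other_item) over all
        # users in the training set who rated both items.
        deltas = [r[target_item] - r[other_item]
                  for r in ratings.values()
                  if target_item in r and other_item in r]
        if deltas:
            avg_diff = sum(deltas) / len(deltas)
            predictions.append(user_rating + avg_diff)
    # Average the per-item predictions into a single value.
    return sum(predictions) / len(predictions) if predictions else None

# The worked example from the text: A rates I = 1 and J = 1.5;
# B rates I = 2, so B's predicted rating of J is 2 + 0.5 = 2.5.
ratings = {"A": {"I": 1.0, "J": 1.5}, "B": {"I": 2.0}}
print(slope_one_predict(ratings, "B", "J"))  # 2.5
```

With only one other rated item, the average collapses to the single differential from the text; with more items, each contributes one candidate prediction before averaging.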