1. A Data Mining problem: This is the case where we have an item set and some sort of (user) manipulation data about it. Using clustering and other methods, we want to find correlations and, in turn, related items in the set. This should almost always be done offline, and processing large data sets in a reasonable amount of time requires techniques like MapReduce. When such frameworks are used, neither relational nor graph databases offer functionality of any use (as mentioned by Sean Owen). Sequential access to the input set during computation, and basic techniques (like a key-value store) for retrieving the computed recommendation data, are sufficient.
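To make the offline case concrete, here is a minimal sketch of the map/reduce shape of such a job: counting item co-occurrences across user "baskets" and flushing the result into a per-item lookup table, as a key-value store would hold it. The data and names (`baskets`, `map_phase`) are illustrative, not from any particular framework.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical input: one basket of item IDs per user session, e.g. from click logs.
baskets = [
    ["A", "B", "C"],
    ["A", "B"],
    ["B", "C", "D"],
]

# Map phase: emit (item_pair, 1) for every pair that co-occurs in a basket.
def map_phase(basket):
    for pair in combinations(sorted(basket), 2):
        yield pair, 1

# Reduce phase: sum the counts per pair -- the co-occurrence matrix entries.
counts = defaultdict(int)
for basket in baskets:
    for pair, n in map_phase(basket):
        counts[pair] += n

# The reduced output can be written to a key-value store keyed by item,
# so that serving a recommendation later is a single sequential lookup.
recommendations = defaultdict(list)
for (a, b), n in sorted(counts.items(), key=lambda kv: -kv[1]):
    recommendations[a].append((b, n))
    recommendations[b].append((a, n))
```

In a real MapReduce framework the two phases run distributed over shards of the input, but the access pattern is the same: sequential reads in, key-value lookups out, with no need for a database in between.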
2. An Information Retrieval problem: A recommendation problem need not always be concerned with finding connections between items within the data set; it can also be about traversing existing, known connections. Typically, for smaller data sets (still numbering in the millions), where user manipulation data can be structured and recorded, MapReduce is not always the best solution. Other examples are social-network-type data, where the connections between data points are as important as the data itself. Further, the recommendation system need not be completely offline; it can be quasi-online with proper pre-computation/lazy-computation strategies. I think it is in this context that the comparison between the performance of relational and graph databases must be made.
The graph data model stores all of a node's connections along with the node itself, so computing connected data requires no additional step beyond reading the node into memory. The property graph model also allows relationships to carry properties of their own, and property-rich relationships are critical when exploring connected data. The same constructs are achieved in relational databases using foreign keys, joins and join tables (not going in depth into this since it is pretty standard info). These relational constructs end up as part of the domain model, but not of the physical storage model (unless heavily optimized).
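A minimal in-memory sketch of that storage model, assuming nothing from any particular product (the `Node`/`Edge` names are illustrative): each node carries its relationship list, and each relationship carries its own properties, so reading a node brings its connections along with no join step.

```python
from dataclasses import dataclass, field

@dataclass
class Edge:
    target: "Node"
    props: dict                              # the relationship itself carries properties

@dataclass
class Node:
    key: str
    props: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # connections stored with the node

alice = Node("alice", {"name": "Alice"})
book = Node("book-42", {"title": "Graphs 101"})

# The relationship lives on the node, with its own properties:
alice.edges.append(Edge(book, {"type": "RATED", "stars": 5}))

# Traversing connected data is just reading the node's edge list --
# filtering on relationship properties needs no join table:
highly_rated = [e.target.props["title"]
                for e in alice.edges
                if e.props["type"] == "RATED" and e.props["stars"] >= 4]
```

The relational equivalent would keep the `RATED` relationship (and its `stars` property) as rows in a join table, reachable only via foreign-key joins at query time.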
It is therefore quite obvious that the graph data model lets you find interconnected data much faster, and in a much more scalable manner, than the relational data model (think cascading reads of node (row) addresses vs. cascading joins on foreign keys). Most optimization techniques, such as indexing, apply equally well to both kinds of databases. This is why the graph data model should perform much better than the relational data model in information-retrieval-type recommendation problems.
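The "cascading reads" point can be sketched as a two-hop traversal: people who liked what you liked, and what else they liked. With graph-style storage each hop is a direct read of a stored neighbour list; the relational equivalent is two self-joins of a (user, item) table. The data and the `recommend` helper below are hypothetical.

```python
# Graph-style adjacency: each node's neighbour list is stored with the node.
likes = {                      # user node -> item edges
    "alice": ["i1", "i2"],
    "bob": ["i2", "i3"],
}
liked_by = {                   # item node -> reverse (user) edges
    "i1": ["alice"],
    "i2": ["alice", "bob"],
    "i3": ["bob"],
}

def recommend(user):
    """Two-hop traversal: user -> liked items -> co-likers -> their other items."""
    seen = set(likes[user])
    recs = set()
    for item in likes[user]:             # hop 1: follow stored edges
        for other in liked_by[item]:     # hop 2: follow stored reverse edges
            if other == user:
                continue
            recs.update(i for i in likes[other] if i not in seen)
    return recs
```

In a relational store, the same query is a self-join of the likes table on item and then on user, with each join resolved through index lookups rather than direct address hops; at traversal depth k that cost compounds over k joins.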
Of course, when we talk about graph databases from different vendors, the requirements (CPU, RAM, storage), capabilities, reliability and product maturity vary greatly. Given the current maturity of the graph databases on the market, I do not think it is a good idea to put production-critical data in them, even if that data is relevant for making recommendations (a relational data store is still the best for that). I would use a graph database as an efficient quasi-online data warehouse for highly interconnected data, a role in which it should surely outperform a relational database.