One approach to this problem is to create a privacy-preserving system architecture that can compute recommendations without explicitly knowing the users’ input data [Canny 2002a; 2002b; Polat and Du 2005a; 2005b]. However, this disregards the
fact that users’ perception of the potential privacy threats may differ from the actual
threats [John et al. 2011]. Another remedy is to give users explicit control over what
information they disclose [Wenning and Schunter 2006; Kolter and Pernul 2009].
Information disclosure then becomes an explicit decision, in which users have to trade off the potential benefits of disclosure against the privacy risks that may ensue [Mabley 2000; Chellappa and Sin 2005; Taylor et al. 2009].
Decision-making is, however, an inherently complex problem, especially when the
outcomes are uncertain or unknown [Kahneman and Tversky 1979; Kahneman et al.
1982; Gigerenzer and Goldstein 1996]. In the field of privacy, this complex decision
process has been aptly dubbed “privacy calculus” [Culnan 1993; Laufer and Wolfe
1977]. When users have to decide whether or not to disclose personal information to a
recommender system, they typically know little about the positive and negative
consequences of disclosure [Acquisti and Grossklags 2005; Acquisti and Grossklags
2008].
Another problem is that users’ information disclosure decisions are highly
dependent on the context [Lederer et al. 2003; Li et al. 2010; Nissenbaum 2010; John
et al. 2011]. Researchers have explored various techniques to assist or influence users in these decisions, such as reordering the disclosure requests to increase
disclosure [Acquisti et al. 2011], providing justifications for disclosing (or not
disclosing) certain information [Kobsa and Teltzrow 2005; Besmer et al. 2010; Patil
et al. 2011; Acquisti et al. 2011], or displaying privacy seals or statements [Rifon et
al. 2005; Hui et al. 2007; Egelman et al. 2009; Xu et al. 2009]. While these studies
yielded interesting and occasionally even counterintuitive results, these results remain largely isolated from one another. For instance, some studies focus on increasing disclosure behavior but disregard users’ perception of the system and their satisfaction with the experience of using it (see section 2.1). Others examine users’ general privacy concerns but disregard the impact of these concerns on disclosure behavior (see section 2.2).
Research relevant to privacy-related decision-making is thus scattered across several disparate strands: work on increasing information disclosure, work on user perception and satisfaction (also called ‘user experience’), and work on privacy concerns as personal traits.
To make relevant and robust contributions, research on users’ reluctance to disclose personal data to context-based recommender systems should forge these divergent strands into a unified approach. By incorporating system-related perceptions and experiences as mediators of information disclosure behavior, such an
approach can provide insights into the cognitive processes involved in users’ privacy
calculus, and explain how suggested system improvements as well as personal
privacy concerns impact information disclosure decisions. This paper develops such
an encompassing approach (section 2) and applies it to the analysis of an online user
experiment with a mockup of a mobile app recommender system (section 3). Section 4
reflects on the results of this experiment and integrates them with qualitative
findings from an interview study. Finally, section 5 provides conclusions and suggestions for future research.