2.2 Critiques of the Chicago School Arguments
14. Not all economists, however, have taken the stance that privacy protection inherently causes market inefficiencies, or that consumers who value privacy can simply secure it in the marketplace (Murphy, 1996). Hirshleifer (1980), for instance, criticizing Posner and Stigler's positions on privacy, notes that the assumptions of rational behaviour underlying the Chicago School's privacy models fail to capture the complexity of consumers' privacy decision making.
15. In fact, while the early Chicago School studies of privacy originated in what may be defined as a pre-ICT (modern Information and Communication Technologies) era, the development of new information technologies, and of the Internet in particular, led researchers to formulate more nuanced and granular views of the trade-offs associated with privacy protection and data sharing.
16. Varian (1996), for instance, notes that the secondary usage of personal data raises particular economic concerns: a consumer may rationally decide to share personal information with a firm because she expects to receive a net benefit from that transaction; however, she has little knowledge of, or control over, how the firm will later use that data. The firm may sell the consumer's data to third parties at a profit, but the consumer may not share in any of that profit, and may even bear a cost when the third party abuses her data (for instance, through spam, adverse price discrimination, and so forth; see Odlyzko (2003)). Such negative externalities on the consumer are not internalized by the firm (Swire and Litan, 1998). Noam (1997) also acknowledges that transaction costs, poverty, and other hurdles may prevent consumers from acquiring privacy protection under standard market conditions.
17. Hermalin and Katz (2006) criticize the Chicago School's argument that privacy protection is inherently welfare-diminishing. They note that data protection may have ex ante positive effects on economic welfare. For instance, the protection of privacy can make it possible to support insurance schemes that otherwise would not exist. If all potential policy holders had to be tested for a potentially fatal health condition, life insurance companies would adjust insurance prices according to the results of those tests. While the outcome would be ex post economically efficient (consumers would purchase insurance at actuarially fair rates), from an ex ante perspective individuals would bear the risks associated with the outcomes of their test results. However, if testing were banned, “then the competitive equilibrium would entail all risk-averse individuals' buying full insurance at a common rate.” Therefore, “[w]elfare would be greater than under the testing equilibrium both because the (socially wasteful) costs of testing would be avoided and because risk-averse individuals would bear less risk” (Hermalin and Katz, 2006). Furthermore, Hermalin and Katz note that markets may fail to adjust efficiently to additional information, lowering the efficiency of the resulting equilibrium. In their model, two rational agents engage in a transaction in which both are interested in collecting information about the other; privacy protection may actually lead to efficient allocation equilibria, and explicit prohibition of information transmission may be necessary for economic efficiency (as the mere allocation of informational property rights may not suffice).
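The ex ante logic of the insurance example can be made concrete with a stylized numeric sketch. All parameter values and the square-root utility below are hypothetical illustrations, not drawn from Hermalin and Katz's actual model:

```python
import math

def u(w):
    # Concave utility implies risk aversion; square root is an illustrative choice
    return math.sqrt(w)

# Hypothetical parameters (for illustration only)
wealth = 100.0     # initial wealth
loss = 64.0        # monetary loss if the health condition materializes
p = 0.25           # population share that carries the condition
test_cost = 2.0    # socially wasteful cost of testing each individual

# Testing equilibrium: everyone is tested, then insures at type-specific
# fair rates; positives face a fair premium equal to their (certain) loss.
eu_testing = p * u(wealth - test_cost - loss) + (1 - p) * u(wealth - test_cost)

# Testing banned: all risk-averse individuals buy full insurance
# at the common actuarially fair pooled premium p * loss.
eu_pooling = u(wealth - p * loss)

print(f"ex ante expected utility, testing: {eu_testing:.3f}")   # 8.882
print(f"ex ante expected utility, pooling: {eu_pooling:.3f}")   # 9.165
```

Pooling dominates ex ante for exactly the two reasons quoted above: the testing cost is avoided, and risk-averse individuals bear less risk.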
18. Similarly, models by Hirshleifer (1971) and Taylor (2003) show that rational economic agents may end up inefficiently over-investing in collecting personal information about other parties (for instance, in order to increase private revenues from sales based on knowledge of the buyer's willingness to pay). Taylor (2004) also finds that, in the presence of tracking technologies that allow merchants to infer consumers' preferences (so as to later engage in price discrimination), whether privacy regulation will enhance consumer and aggregate welfare depends on consumers' level of sophistication. Naive consumers do not anticipate the seller's ability to use past consumer information for price discrimination; therefore, in equilibrium all their surplus is taken away by the firms, unless privacy protection is enforced through regulation. Regulation, however, would not be necessary if consumers were aware of how merchants will exploit their data, and strategic enough to adapt their behaviour accordingly.
19. Similar conclusions are reached by Acquisti and Varian (2005), who study a two-period model in which merchants have access to “tracking” technologies and consumers have access to “hiding” technologies. Internet commerce offers an example: merchants can use cookies to track consumer behaviour (in particular, past purchases), and consumers have access to “anonymizing” technologies (deleting cookies, using anonymous browsing or payment tools) that hide that behaviour. Consumer tracking will enhance the merchant's profits only if the tracking is also used to provide consumers with enhanced, personalized services.
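The contrast between naive and sophisticated consumers in these two-period models can be sketched with a toy computation. The valuations, population shares, and pricing rule below are illustrative assumptions, not the actual models of Taylor (2004) or Acquisti and Varian (2005):

```python
# Toy two-period illustration of tracking vs. hiding (hypothetical numbers)
v_high, v_low = 10.0, 4.0   # consumer valuations for the good
share_high = 0.3             # fraction of high-valuation consumers

# Anonymous (untracked) uniform price: the merchant compares selling
# only to high types at v_high with selling to everyone at v_low.
profit_high_only = share_high * v_high                            # 3.0
profit_everyone = v_low                                           # 4.0
p_anon = v_high if profit_high_only > profit_everyone else v_low  # -> 4.0

# Naive high-valuation consumer: buys openly in period 1, is identified
# via tracking, and is charged exactly v_high in period 2 (zero surplus then).
surplus_naive = (v_high - p_anon) + 0.0          # 6.0

# Sophisticated consumer: hides (deletes cookies, pays anonymously),
# stays untracked, and keeps the anonymous-price surplus in both periods.
surplus_sophisticated = 2 * (v_high - p_anon)    # 12.0
```

Privacy regulation (or hiding technologies) replicates the sophisticated outcome for naive consumers, which is why consumer sophistication determines whether regulation raises welfare in these analyses.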
20. Beyond the privacy costs associated with price discrimination and the social welfare implications of the sharing of consumer data with third parties, other models find that the exploitation of personal information for unsolicited marketing can constitute a negative consumer externality (Hui and Png, 2003). Furthermore, while the majority of the theoretical economic work on privacy takes a micro-economic perspective (see also Hui and Png (2006)), significant macro-economic costs and benefits also arise from the protection or trade of individual information (see Section 3).
2.2.1 Behavioural Economics and Hurdles in Consumers' Decision Making
21. There are reasons to believe that consumers act myopically when trading off the short-term benefits and long-term costs of information revelation and privacy invasions. The evidence also suggests that consumers may not be able to act “rationally” (in the neoclassical economic sense) when facing privacy trade-offs. In recent years, a stream of research investigating the so-called privacy paradox has focused on the hurdles that hamper individuals' privacy-sensitive decision making. The evidence points to three sets of decision-making hurdles: privacy decision making is afflicted by a) incomplete information, b) bounded cognitive ability to process the available information, and c) a host of systematic deviations from theoretically rational decision making, which can be explained through the cognitive and behavioural biases investigated by behavioural economics and behavioural decision research (for an overview of this area, see Acquisti (2004) and Acquisti and Grossklags (2007)).1 This line of enquiry has significant policy implications: as noted above, the modern microeconomic theory of privacy suggests that, when consumers are not fully rational or are in fact myopic, the market equilibrium will tend not to afford privacy protection to individuals, and therefore privacy regulation may be needed to improve consumer and aggregate welfare.
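The myopia just described can be illustrated with a quasi-hyperbolic (beta-delta) discounting sketch, a standard behavioural-economics formalization of present bias. All parameter values below are hypothetical:

```python
# Quasi-hyperbolic (beta-delta) discounting: a present-biased consumer
# accepts a small immediate benefit despite a larger delayed privacy cost.
# All numbers below are hypothetical illustrations.
beta, delta = 0.5, 0.95   # present-bias factor and per-period discount factor
benefit_now = 1.0          # e.g., an immediate discount for sharing data
cost_later = 2.5           # e.g., future spam or adverse price discrimination
t = 12                     # periods until the privacy cost is borne

# Present-biased evaluation: the future cost is discounted by beta * delta**t
value_present_biased = benefit_now - beta * delta**t * cost_later

# Time-consistent (exponential) evaluation of the same trade-off
value_exponential = benefit_now - delta**t * cost_later

# The present-biased consumer shares the data (positive value) even though
# a time-consistent consumer would decline (negative value).
```

The same trade-off thus flips sign depending only on the present-bias factor, which is one mechanism behind the privacy paradox.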
2.2.2 Privacy Enhancing Technologies
22. While information technologies can be used to track, analyze, and link vast amounts of data related to the same individual, Privacy Enhancing Technologies (or PETs) can be used to protect, anonymize, or aggregate those data in ways that are both effective (in the sense that re-identifying individual information becomes either impossible or just costly enough to be unprofitable) and efficient (in the sense that the desired transaction can be regularly completed with no additional costs for the parties involved).
23. A vast body of research in privacy enhancing technologies suggests, in fact, that cryptographic protocols can be leveraged to satisfy both needs for data sharing and needs for data privacy. Not only is it already possible to complete verifiable and yet anonymous or privacy-enhanced “transactions” in areas as diverse as electronic payments (Chaum, 1983), online communications (Chaum, 1985), Internet browsing (Dingledine et al., 2004), or electronic voting (Benaloh, 1987); but it is also possible to have credential systems that provide authentication without identification (Camenisch and Lysyanskaya, 2001), share personal preferences while protecting privacy (Adar and Huberman, 2001), leverage the power of recommender systems and collaborative filtering without exposing individual identities (Canny, 2002), or even execute calculations in encrypted spaces (Gentry, 2009), opening the door to novel scenarios of privacy preserving data gathering and analysis.
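As a much simpler cousin of the credential systems cited above, a hash commitment illustrates the general idea of making a verifiable claim without revealing it up front. This is a toy sketch, not one of the cited protocols:

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple:
    """Return (commitment, nonce): the commitment binds the committer to
    `value` without revealing it until the nonce is later disclosed."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    # Anyone holding the commitment can check the opened value
    return hashlib.sha256(nonce + value).digest() == commitment

# Hypothetical usage: a claim is committed to now, verified only when opened
claim = b"eligible voter, precinct 7"
c, n = commit(claim)
assert verify(c, n, claim)
assert not verify(c, n, b"a different statement")
```

The production systems cited in the text (anonymous credentials, private collaborative filtering, homomorphic encryption) achieve far stronger properties, but they rest on the same principle of separating verifiability from disclosure.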
24. In other words, privacy enhancing technologies may make it possible to reach equilibria in which data holders can still analyze aggregate and anonymized data, while subjects' individual information stays protected. Arguably, the transition to these new equilibria could be welfare-enhancing for consumers and society as a whole. However, the possibility that Privacy Enhancing Technologies may lead to non-zero-sum market outcomes has only recently started to be explicitly discussed in economic research (Acquisti, 2008).

1 Furthermore, a short overview of empirical studies investigating the conflicting and sometimes paradoxical consumers' valuations of personal data can be found in Acquisti et al. (2009).
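One concrete modern instance of such an equilibrium, not among the techniques cited above, is differentially private aggregation: the data holder releases a noisy statistic from which aggregate patterns, but not individual records, can be learned. A minimal sketch, with hypothetical data:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon, rng):
    """Release the mean of values clipped to [lo, hi], with Laplace noise
    calibrated to the mean's sensitivity (hi - lo) / n; this makes the
    released statistic epsilon-differentially private."""
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / n
    return true_mean + laplace_noise((hi - lo) / (n * epsilon), rng)

# Hypothetical dataset: individual incomes a data holder wishes to analyze
rng = random.Random(0)
incomes = [rng.uniform(20_000, 80_000) for _ in range(10_000)]
released = private_mean(incomes, 0, 100_000, epsilon=1.0, rng=rng)
# The released mean is close to the true mean, yet any single individual's
# record has only a bounded influence on the output.
```

With many records, the noise needed to protect each individual is small relative to the aggregate signal, which is exactly the non-zero-sum property the text describes: the analyst's and the subjects' interests are served simultaneously.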
3 Benefits and Costs of Disclosed and Protected Data
25. In this section, we consider the economic value of personal data and of personal privacy by analyzing the individual and social costs and benefits associated with disclosed and protected data.
26. Our focus is on information privacy. In the context of our analysis, data subjects are consumers, and data holders are firms. We will frame the analysis by presenting the market for personal data and the market for privacy as two sides of the same coin, wherein protected data may carry benefits and costs that are dual, or symmetric, to the