Rethinking Informational Privacy in the Datafication Age (Part 1)

The notions of personal information and privacy have often been addressed as intertwined, tightly connected concepts in which one constrains the other. That is, current theories and discussions of informational privacy are closely tied to a specific notion of personal information, with privacy conceptualized as one’s ability to control access to one’s personal information. In this paper, I seek to understand what constitutes informational privacy in the age of datafication and personalization technologies. The paper is organized into three sections: the first briefly summarizes the prevailing concepts of informational privacy that form the main theoretical framework for this paper; the second is dedicated to metadata as an important dimension of personal information; the third explains why there is no such thing as “raw personal information” in algorithmic systems and why a normative privacy concept should take this into account.

Before I delve into the literature review, I want to emphasize that when I talk about privacy, I am referring to individual privacy, and that this paper aims to develop a normative concept of informational privacy within the context of contemporary logics of data collection and analysis. Moreover, since, as Solove (2008) argued, theories are meant to “be tested, doubted, criticized, amended, supported, and reinterpreted” (p. ix), this paper is not an attempt to introduce a new theory of informational privacy but to build on existing theories in light of an evolving social and technological environment.

Theoretical Framework

Although various aspects of privacy can be found in the literature, for the time being I shall limit myself to exploring the dimension of informational privacy. According to Rössler (2005), informational privacy has been considered the central dimension of privacy by numerous theorists (p. 111). While there are different approaches to defining privacy, in this essay I take up the approaches adopted by Charles Fried, Alan Westin, and Beate Rössler.

American jurist and lawyer Charles Fried (1968) defined the right to privacy as one’s control over knowledge about oneself, not only in terms of the quantity of information but also of modulations in the quality of that knowledge (p. 475). In similar terms, American law professor Alan Westin (1967) developed what is now considered a philosophical groundwork for current debates about privacy law: “privacy as the claim of individuals, groups or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” (p. 7). In arguing why individuals need privacy, he wrote that “privacy is neither a self-sufficient state nor an end in itself… It is basically an instrument for achieving individual goals of self-realization” (1966, p. 1029). This view is further extended by ethics professor Beate Rössler (2005), who proposed that “something counts as private if one can oneself control the access to this ‘something’” and that “the protection of privacy means protection against unwanted access by other people” (p. 8). Informational privacy is obtained if a person is able to control others’ access to information about her and to form well-founded expectations and assumptions concerning what others know about her and how they acquire that knowledge. Accordingly, informational privacy is violated when her control over knowledge about herself is infringed upon, and when her expectations and assumptions “prove to be false, are disappointed or come to nothing” (pp. 111-113). The fundamental point here is that:

violations of informational privacy always… result in violations of the conditions of autonomy…. Only on the basis of the (fragile) stability of her fabric of expectations, knowledge, assumptions and selective self-disclosure is it possible for a person to exercise control over her self-presentation and thus, in a broader sense, to enjoy the possibility of a self-determined life (p. 112, 140).

To sum up this branch of thought, the dimension of informational privacy serves to secure the individual’s control over access to knowledge about herself, and her expectations regarding what other people or institutions know about her, for this control is essential to her self-realization and autonomous life. In the age of datafication and personalization technologies, however, this normative concept seems insufficient to guarantee one’s autonomy and self-realization, because it does not take into account the possibility of interference in one’s actions even when access to one’s information is deliberately granted. Furthermore, given the contemporary logics of data aggregation, there are dimensions of personal information that Internet users are not fully aware of, such as metadata. Therefore, although this definition of privacy in terms of control of access is highly valuable, it leaves crucial elements out of account. In line with this argument, technology professor Helen Nissenbaum (2009) proposed that privacy concerns should not be limited solely to control over personal information. To address this gap, she suggested regulating how institutions use data rather than only how data are collected, as there is considerable evidence that the “transparency-and-choice” framework, built on the notion of “privacy as control of access”, cannot solve the core problems of flagrant data collection, dissemination, aggregation, analysis, and profiling (p. 34). It is important to note that, as legal scholar Daniel Solove (2006) argued, while technology is involved in various privacy problems, it is not the main or only cause of privacy problems, which “primarily occurred through activities of people, businesses, and the government” (p. 560). This observation is an important reminder for privacy theorists to go beyond a techno- or socio-deterministic perspective. Therefore, I propose to ask what personal information (data) is collected, how neutral that data is, and what happens to the data after it is collected. By exploring these questions, we shall see how to re-conceptualize informational privacy in a way that can address the challenges posed by algorithmic profiling and data-driven logic.

The dimensions of personal information 

Since there is great ambiguity in the way the term “personal information” is used (Nissenbaum 2009, p. 4), it is important to define the dimensions of personal information used in this paper before we explore what personal information is collected. Personal information is information “relating to an identified or identifiable natural person (‘data subject’)” (definition by the European Union Directive, cited in Nissenbaum 2009, p. 4). This includes sensitive or intimate information, information about a person, and metadata generated through a user’s activities on the Net. Metadata constitute a dimension of personal information that is generally given less consideration in privacy discourses. Since metadata do not contain the actual content of communication, the threat they pose to privacy is less acknowledged and sometimes said to be negligible. Trading metadata for communication and entertainment services has become the norm; most users click consent to give away “something as private” without fully understanding what it is (Van Dijck 2014, p. 200).

While users may not be accustomed to thinking about metadata because they are not immediately visible, metadata nonetheless have a direct impact on users’ self-representation and identity, which in turn regulate their social relations and autonomy. According to security expert Bruce Schneier (2015), “Data is content, and metadata is context. Metadata can be much more revealing than data, especially when collected in the aggregate” (p. 75). David Cole’s (2014) analysis of the NSA’s metadata program showed that it is precisely metadata that articulate our relationship to the state as surveilled subjects. Examples of metadata include data about whom you talk to, when you talk to them, how long you talk, which sites you visit, how long you stay, and which buttons you click on those sites. As Mayer-Schönberger and Cukier (2013) observed, in the age of datafication, “everything under the sun – including ones we never used to think of as information at all, such as a person’s location” is transformed into a data format so that it can be quantified (p. 15). Although metadata do not include the actual content of your communication, “metadata alone can provide an extremely detailed picture of a person’s most intimate associations and interests” (Cole 2014, para 2). In short, metadata are crucial elements in categorizing and profiling data subjects; they consequently define who deserves “a higher level of scrutiny” (Cheney-Lippold 2017, p. 92).
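To make the aggregation point concrete, the following minimal sketch in Python is purely illustrative: the call log, the contact labels such as “clinic” or “helpline”, and all numbers are invented, not drawn from any real system. The point is only that a handful of content-free records of who, when, and for how long already yields a telling profile of a person’s associations and routines.

```python
# Purely illustrative: a hypothetical log of call-record metadata.
# No message content is present, only who, when, and for how long.
from collections import Counter, defaultdict
from datetime import datetime

call_log = [
    # (contact, ISO timestamp, duration in seconds) -- invented sample data
    ("clinic",   "2018-03-02T09:15:00", 540),
    ("clinic",   "2018-03-09T09:20:00", 480),
    ("lawyer",   "2018-03-05T18:40:00", 1200),
    ("helpline", "2018-03-06T23:55:00", 1800),
    ("helpline", "2018-03-13T23:47:00", 1500),
]

calls_per_contact = Counter(contact for contact, _, _ in call_log)
minutes_per_contact = defaultdict(float)
calls_per_hour = Counter()

for contact, timestamp, duration in call_log:
    minutes_per_contact[contact] += duration / 60
    calls_per_hour[datetime.fromisoformat(timestamp).hour] += 1

print(calls_per_contact.most_common())   # repeated contacts stand out
print(dict(minutes_per_contact))         # long calls suggest significance
print(calls_per_hour.most_common())      # late-night patterns emerge
```

Even this toy aggregation hints at a weekly clinic appointment and repeated late-night calls to a helpline; nothing in the “content” of those calls was needed to infer it.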

Media theorist Matteo Pasquinelli argued that since metadata now serve as the measure of the value of social relations and of what our algorithmic associations mean, “the current mode of Deleuze’s ‘societies of control’ might be termed the ‘societies of metadata’” (cited in Cheney-Lippold 2017, p. 92). As former NSA general counsel Stewart Baker put it, “Metadata absolutely tells you everything about somebody’s life. If you have enough metadata you don’t really need content” (cited in Cole 2014, para 2).

Furthermore, the contemporary use of metadata is increasingly directed towards predicting human behavior. In the case of the NSA, for example, the aim is to determine the probability of someone committing a terrorist act (Cheney-Lippold 2017, p. 85). A frequently cited example of commercial platforms’ predictive analytics is Amazon’s recommendation engine, which the company says exists “to create a personalized shopping experience” (Amazon.Jobs 2018). Google Now materializes the vision of its creator: “It should know what you want and tell it to you before you ask the question” (Page, cited in Varian 2014, p. 28). In other words, from the viewpoint of surveillance and marketing, predictive analytics that correlates (meta)data patterns with individuals’ behavior yields powerful information about who we are and what we do, which is in turn repurposed for institutional objectives such as the manipulation of desire and demand or the policing of groups and individuals.

The examples above show that the privacy risk of metadata lies in the practices of data filtering and algorithmic manipulation for commercial and other purposes by platforms that nonetheless claim to be neutral facilitators of users’ web experience. In fact, metadata have become a kind of invisible asset, processed mostly separately from their original context and outside people’s awareness. Moreover, there has been evidence time and again that companies and other institutions that own communication platforms monetize metadata by repackaging and selling them to advertisers or data companies (Van Dijck 2014, p. 200). Therefore, a normative concept of informational privacy must address all dimensions of personal information, including metadata, in order to protect the most private realm of individuals: the context of their communication. The next question to be addressed is how the processes of data collection and analysis influence one’s self-realization and self-determined life. To answer this question, it is first necessary to understand, at a basic level, how algorithmic systems of data collection and analysis work.

References: 

Agre, Philip E. “Surveillance and Capture: Two Models of Privacy”. The Information Society 10.2 (1994): 101-127.

Amazon.Jobs. “Personalization.” 2018, https://www.amazon.jobs/en/teams/personalization-and-recommendations. Accessed 1 Nov. 2018.

Cheney-Lippold, John. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control.” Theory, Culture & Society 28.6 (Nov. 2011): 164-81.

Cheney-Lippold, John. We Are Data: Algorithms and the Making of Our Digital Selves. NYU Press, 2017. Print.

Cole, David. “‘We Kill People Based on Metadata.’” NYR Daily (blog), New York Review of Books, 10 May 2014, www.nybooks.com.

Fried, Charles. “Privacy [A Moral Analysis].” Yale Law Journal 77.3 (1968): 475-493.

Mayer-Schönberger, Viktor, and Kenneth Cukier. Big Data: A Revolution That Will Transform How We Live, Work, and Think. New York: Houghton Mifflin Harcourt, 2013. Print.

Nissenbaum, Helen. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press, 2009. Print.

Rössler, Beate. The Value of Privacy. Cambridge: Polity Press, 2005. Print.

Schneier, Bruce. Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. W. W. Norton & Company, 2015. Print.

Solove, Daniel J. “A Taxonomy of Privacy.” University of Pennsylvania Law Review 154 (2006): 477-560.

Solove, Daniel J. Understanding Privacy. Harvard University Press, 2008. Print.

Van Dijck, José. “Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology.” Surveillance & Society 12.2 (2014): 197-208.

Varian, Hal R. “Beyond Big Data.” Business Economics 49.1 (2014): 27-31.

Westin, Alan F. Privacy and Freedom. New York: Atheneum, 1967. Print.

Westin, Alan F. “Science, Privacy, and Freedom: Issues and Proposals for the 1970’s. Part I – The Current Impact of Surveillance on Privacy.” Columbia Law Review 66.6 (1966): 1003-1050.
