The right to privacy in the Age of Facebook
In an era of voluntary revelation and involuntary regulation, we must find new ways to defend our private lives.
One of the most confusing things about the question of privacy, and what makes it so elusive today, is that it is far from evident how people regard their right to privacy, or how these attitudes translate into day-to-day behaviours.
The concept of privacy, once merely thought of as the right to be left alone, has been transformed as we have become more information-oriented and as digital technologies have come to ensure that almost everyone now has a ‘digital fingerprint’. The question is further complicated by the fact that, in recent decades, the boundary between private and public has become blurred. A new age of disclosure has emerged where reality TV and social networking sites now represent the ‘private’ face of public scrutiny.
Can one seriously argue that privacy is generally regarded as important today?
It is clear that contradictory attitudes and practices co-exist – often in the same individuals. For example, it is common to encounter people who are concerned about data collection and the potential abuse of power by the state, but who are at the same time willing to reveal deeply personal thoughts on social networking sites. Some say they value the right to privacy but then seem willing to trade away highly personal information in exchange for relatively small (often financial) rewards. Others who are keen to protect their sexual or medical histories will still gladly disclose vital details of their financial circumstances on commercial websites (1). And it is not uncommon for those who reveal personal information on social networking sites to be simultaneously anxious about online transactions, fraud and identity theft.
When it comes to security, anti-terrorism and anti-crime measures, even those who regard privacy as intrinsic to personal liberty are willing to accept encroachments on their freedoms and rights by the state with little protest or outcry (just consider the deployment of CCTV cameras in the UK).
A convenient trade-off?
It appears that privacy is increasingly being regarded as an area of trade-offs rather than a political principle to be defended or upheld in all circumstances, particularly against the state. There are several signs that this is happening:
- privacy is increasingly traded for free content online;
- personal information is widely shared with institutions in return for some benefit or reward, often financial;
- state surveillance and data capture are often accepted as necessary for ensuring personal or national security, even though giving up privacy does not necessarily enhance security (2).
Numerous studies show that people's willingness to share information differs markedly according to circumstance. One example is Assessing Privacy Impact, a recent study published by the Economic and Social Research Council, in which Dr Ian Brown of the Oxford Internet Institute explains that his institute's latest annual survey of internet users found that while more people are now happy to provide their names and email addresses to e-commerce websites, public concern over data collection by state institutions (beyond concerns about competence) remains very high. Brown notes that 'the public is unhappy about extensive sharing, even for purposes such as counter-terrorism and medical research' (3).
So how does one begin to understand what is really happening here, let alone what this might mean for the future? It appears that people's willingness to share information about themselves depends to a large degree on who they are sharing that information with. It is precisely because people have different levels of trust (or confidence) in different institutions that these differentials in attitude and behaviour can coexist. This is what makes anticipating privacy behaviour so elusive: how much information people will disclose, and how they will regard a breach of data protection, depends not only on their prior attitude towards privacy, but also on how much they trust the recipient of their self-disclosure or the holder of the breached data.
Risk trade-off behaviour is mediated through a trust relationship between the discloser and the recipient of information. The distinction between sharing information with people in a social network versus institutions sheds considerable light on the apparently contradictory behaviours noted above.
Trust and confidence
In The Problem of Trust (1997), Adam B Seligman makes the important and helpful distinction between trust and confidence (4). Seligman argues that there is a fundamental difference between trust in people (interpersonal relationships) and confidence in institutions. (The same would apply to technological systems, though this is not Seligman's focus.)
This goes to the heart of what trust actually is: a relationship that is not based upon reciprocal calculation, but is open-ended. Seligman argues convincingly that if a trusting act was based on calculation of expected outcomes or on the rational expectation of a quantified outcome, this would not be an act of trust at all, but an act based on confidence. It would be based on the idea of confidence in the existence of a system that delivered what it promised. The suspension of reciprocal calculation is precisely what defines trusting relationships.
Trust not only entails negotiating risk, it implies risk, as it is about negotiating that which is unknown. But the risk is specific. It is based upon the implicit recognition of others’ capacity to act freely and in unexpected ways. Unconditionality and engagement are at the heart of trust relations. As Seligman notes, if all actions were constrained or regulated there would be no risk, only confidence or a lack thereof. In relations of confidence, roles are prescribed while passivity defines behaviour; we give data to the state, they act upon it – more often than not – outside of our control. Data-protection legislation protects data and prescribes what can or cannot be done with that data. We are merely passive onlookers who give up that data either willingly or inadvertently.
So, in our interpersonal relationships – in the realm of trust – we act as free individuals and recognise others' free agency as well. But when we act in predefined ways (that is, when we are constrained), trust is neither called for nor established. Confidence that everyone will act in accordance with the law or existing moral standards suffices. It is only when aspects of behaviour transcend this that trust emerges systematically as an aspect of social organisation.
Thus, the origins of trust are rooted in our recognition that others are free to act. Trust is based on the ability to act autonomously and outside of predefined or ascribed roles, and on the recognition that others have this ability, too.
Trust is therefore a very rare thing indeed. And because it is based on free will, trust cannot be demanded, only offered and accepted. Trust and mistrust thus develop in relationship to free will and the ability to exercise that will, as different responses to aspects of behaviour that can no longer be adequately contained within existing norms and social roles.
High trust, low privacy
This provides some important insights into the complexities surrounding the contradictory privacy behaviours mentioned above. Sharing of personal information on Facebook is thus a fundamentally different act from allowing one’s personal data to be used by the National Health Service or any other government institution.
In the first instance, social networking sites are voluntary. Joining and participating is based upon free will and the ability to act autonomously. By adding friends to our network, we implicitly recognise this capacity in our friends and acknowledge their ability to act outside of predefined roles. Reciprocity is an outcome rather than a condition of participation: seeking acknowledgement from your peers does not presuppose what form that acknowledgement should take. Outcomes cannot be predetermined. It is a trust relationship because it is open-ended; individuals are free to control their identities, to choose how they present themselves and what they share, and they recognise in their friends the same capacities. For younger people in particular, this is perhaps their only truly private space. Not even their homes or bedrooms are as private as this.
This is thus a high-trust space where privacy has a negligible impact on behaviour.
Low trust, high privacy
The opposite pertains to environments where requests for information come from the institutions and organisations we interact with. From what has been said above, it should be clear that our relationships with state institutions are based upon confidence rather than trust: roles are ascribed, while outcomes are intended and expected. Transgressions are resolved through the legal system. There is neither unconditionality nor active engagement, but a passive relationship based on prescribed roles that are not subject to change or control. Passivity, and the expectation that trust will simply be delivered, are thus anathema to the establishment of real trust relations.
In these circumstances, privacy concerns will clearly influence behaviour far more than they do on social networks. The blurring of the boundary between public and private today, and the general disengagement of atomised citizens from the political process, mean that confidence in the institutional frameworks of society is extremely low. It is precisely this lack of confidence – indeed, of trust – that pushes privacy concerns to the fore.
Attitudes to privacy, and the behaviours that arise from those attitudes, will change according to the level of trust or mistrust people feel towards the people or institutions they are dealing with. How much people trust the potential beneficiary of their self-disclosure is now the overriding motivator of behaviour.
This has a number of important consequences, which require further research and debate.
First, this insight suggests that any discussion that does not take questions of trust as its starting point will inevitably get it wrong. Regulation and legislation (around data protection, for example) or technologically based solutions (identity management systems, privacy policies and so on) can exacerbate rather than allay fears, because they fail to take into account the trust relations underpinning them. Indeed, even raising the question of safeguards increases rather than allays privacy fears. This is a problem of perception and social attitudes, not something that is susceptible to legal or technical fixes. For regulation or technological solutions to be effective, they have to be grounded in the prior question of how much trust the given institution or system enjoys among the public.
Second, because privacy is increasingly understood as a trade-off, its link to freedom and free will is increasingly brought into question. In these circumstances the right to be left alone is no longer a universal principle. Rather, subjectivity and the randomness of individual choice become the realm within which a degraded notion of privacy is upheld or encroached upon. A universal principle is replaced by the tyranny of subjective judgement, which can only result in the loss of social and political solidarity.
Third, the defence of privacy as a political right needs to be re-established on the basis of its centrality to the development of individual identities. Today’s identity formation through social networking represents a degraded sense of identity. ‘Facebook identities’, which are constantly exposed, force social conformity. Individuated conformity is not the basis upon which a robust defence of privacy can be mounted. This represents the loss of privacy and is a regressive force.
The rethinking of privacy as a trade-off mediated by trust is thus a critical starting point for mounting a defence of privacy today. The right to be left alone is not something whimsical or self-indulgent, but a crucial personal and social freedom. The loss of privacy represents a real threat to the spirit of human progress through social experimentation.
(1) See Esther Dyson, 'How loss of privacy may mean loss of security', Scientific American, September 2008
(2) See Ross Anderson et al, Database State, Joseph Rowntree Reform Trust, York, 2009
(3) See Assessing Privacy Impact, ESRC, October 2009, p11
(4) Adam B Seligman, The Problem of Trust, Princeton University Press, 1997