Regulating IT to make it more ‘trusted’ confuses computers with people.
When Bill Gates, chairman of the world’s largest software company Microsoft, announced his ‘Trustworthy Computing’ initiative in January 2002, he bound the question of trust intimately to computing, IT and the internet.
For Gates, ‘Trustworthy Computing’ meant ‘computing that is as available, reliable and secure as electricity, water services and telephony’ (1). And to prove how serious he and Microsoft were, Microsoft engineers were instructed to stop all their development work and spend the following two months reviewing Windows source code for security holes and fixing them. ‘Trustworthy Computing’ reverberated across the industry and the IT media’s pages.
The Trusted Computing Platform Alliance (TCPA) was established by Compaq, HP, IBM, Intel and Microsoft. Although all five companies had individually been working on PC reliability for years, they realised that ‘the level, or “amount”, of trust they were able to deliver to their customers, and upon which a great deal of the information revolution depended, needed to be increased and security solutions for PC’s needed to be easy to deploy, use and manage’.
The result was an open alliance to work on creating a new computing platform for the next century ‘that will provide for improved trust in the PC platform’ (2).
At first glance, the idea of improving ‘trust in the PC platform’ appears obvious and incontrovertible. What could possibly be problematic about the desire to make computers more reliable, secure and resilient? In the Microsoft White Paper ‘Trustworthy Computing’, Craig Mundie, Microsoft’s senior vice president and chief technology officer for advanced strategies and policy, spells this desire out clearly:
‘Many people are reluctant to entrust today’s computer systems with their personal information, such as financial and medical records, because they are increasingly concerned about the security and reliability of these systems, which they view as posing significant societal risk. If computing is to become truly ubiquitous – and fulfil the immense promise of technology – we will have to make the computing ecosystem sufficiently trustworthy that people don’t worry about its fallibility or unreliability the way they do today.’
Mundie continues: ‘All systems fail from time to time; the legal and commercial practices within which they’re embedded can compensate for the fact that no technology will ever be perfect…. Hence this is not only a struggle to make software trustworthy; since computers have to some extent already lost people’s trust, we will have to overcome a legacy of machines that fail, software that fails, and systems that fail. We will have to persuade people that the systems, the software, the services, the people and the companies have all, collectively, achieved a new level of availability, dependability and confidentiality. We will have to overcome the distrust that people now feel for computers.’ (3)
In referring to legacies of failing computer systems, it is difficult not to wonder if Mundie is simply conflating people’s experience of Microsoft with their anxieties about computers and the internet in general. The problem lies not in Mundie’s desired outcomes, but the assumption underlying the ‘Trustworthy Computing’ initiative: namely, technological determinism. This initiative assumes that the question of trust and computers requires a solely technological solution.
The demand for trust has certainly been raised in the past few years. As more and more people shop online, for example, people have had to accept on faith that their credit-card numbers are safe from hackers, that their personal information will not be sold without their permission, and even that the goods they order actually fit the online descriptions when (and if) they are delivered. Businesses increasingly rely upon mission-critical information, financial and other data services being delivered through their computer and communications networks.
However, the strength of the demand for trust in the internet or computing platforms has not arisen from endless bad experiences with these networks. Although problems certainly still occur, computers and the networks that connect them perform at impressive levels. The steady growth of e-commerce attests to the fact that growing numbers of us are more at ease with the online experience than ever before.
In fact, while Mundie is correct to point out that ‘no technology will ever be perfect’, computing and software have on the whole been remarkably resilient and reliable over the years. Donald MacKenzie’s book Mechanising Proof: Computing, Risk and Trust reports on a study he conducted to determine how many people had died in computer-related accidents worldwide up to the end of 1992. At around 1100, the number seems surprisingly small (4).
When one considers the ubiquity of computers and computer information systems in everyday life, the computer industry has performed remarkably well over the years. However, the demand for trust highlights that there remains a gap between perception and reality. This suggests there is something more complex at work which the ‘Trustworthy Computing’ initiative fails to address: that the demand for trust has arisen from broader social developments that have little to do with technology.
The technological determinism underlying ‘Trustworthy Computing’ sees the problem of trust in purely technical terms. The public’s technological scepticism, however, is a symptom of a society disenchanted with progress.
The response to the terrorist attacks of 11 September 2001, for example, highlighted how much scepticism, fear and ambivalence there is today towards IT and technology in general. Almost all the commentators shared an underlying assumption that technological development and advance have made society more vulnerable to destructive acts. The reaction has been to question increased complexity which, it is felt, has led to dependency and increased vulnerability.
This does not reflect the actual impact of technological development. Rather, it is an expression of contemporary society’s difficulty in assimilating change. Change today tends to be experienced as a negative, purposeless force that is beyond human control; and when people react against change, they react to a tangible outcome – for example, a particular innovation. Almost every form of innovation today invites scepticism and uncertainty, and it is this that leads to the strange proposition that technological advance makes society more vulnerable to destructive acts.
The difficulty facing the computing industry is that technological innovation is necessarily about engaging with uncertainty. That is the nature of this industry. This is precisely why the ‘Trustworthy Computing’ initiative, as it is presently constituted, can only fuel the demand for trust it is trying to assuage. The underlying cause of the demand for ‘trust’ is a socially derived scepticism to change, particularly technological innovation. And this has no technological solution.
The advocates of ‘Trustworthy Computing’ argue that they are addressing social concerns. Donald MacKenzie, for example, argues that ‘concern for safety or security does not diminish concern for rigor or for intellectual consistency; it increases it’ (5). This is undoubtedly correct and indeed welcome. However, this view assumes that the IT industry is proactive – that it is taking the initiative and engaging in ways that will shape or change social perceptions. In reality, the IT industry is being shaped by social perceptions, in ways that few realise threaten its future.
A more fundamental problem is the supposition that these heightened activities will lead to increased trust in technology itself, rather than simply raising people’s confidence about using these technologies. To understand this, it is worth exploring the distinction between trust and confidence.
The TCPA defines ‘trust’ as ‘the ability to feel confident that the software environment in a platform is operating as expected’ (6). However, trust is not about expected outcomes. In one of the most illuminating studies of this question, Adam B Seligman, associate professor of religion at Boston University, draws out the critical distinction between trust and confidence. He argues that, if a trusting act was based upon calculation of expected outcomes or on the rational expectation of a quantified outcome, this would not be an act of trust at all, but an act based on confidence; that is, confidence in the existence of a system that delivered what it promised (7).
The suspension of reciprocal calculation is precisely what characterises trusting relationships. In a computing environment, the opposite holds. What we want (and what the ‘Trustworthy Computing’ initiative wants to deliver) is precisely the confidence that the computers we use do what they are supposed to do.
Seligman clarifies a number of important distinctions that are helpful in this discussion. First, he insists that this is the basis of the fundamental difference between trust in people (interpersonal relationships) and confidence in institutions or technological systems.
With regard to our interpersonal relationships – in the realm of trust – we act as free individuals and recognise in others their free agency as well. But when we act in predefined ways (that is, ways in which we are constrained), trust is not called for, nor established. Confidence that everyone will act in accordance with the law or existing moral standards suffices. It is only when aspects of behaviour transcend this that trust emerges systematically as an aspect of social organisation.
Thus, the origins of trust are rooted in our recognition of the freedom of others to act freely. This is a fundamentally social act, which links trust to the ability to act autonomously, to recognise that in others, and to act outside of predefined or ascribed roles. In short, trust is a fundamental part of risk taking.
Seligman’s approach underlines the importance of understanding that trust is not only a means of negotiating risk; it implies risk. Trust is a means of negotiating that which is unknown. The implied risk is central to recognising others’ capacity to act freely and in unexpected ways; if all actions were constrained or regulated there would be no risk, only confidence or a lack of confidence.
Trust is therefore quite a rare commodity; and because it is based on free will, trust cannot be demanded, only offered and accepted. Trust and mistrust develop in relationship to free will and the ability to exercise that will, as different responses to aspects of behaviour that can no longer be adequately contained within existing norms and social roles.
Trust as an aspect of social solidarity is very different to confidence, which is based on market exchange. In the market, roles are ascribed while outcomes are intended and expected. Transgressions are resolved through the legal system. Trust, by contrast, emerges only as an element of a very particular type of unconditionality, one based upon the autonomous acts of individuals.
Trust is the result of active engagement. Consequently, the passive expectation that trust should be delivered is anathema to the establishment of real trust relations. What does this mean for notions of ‘Trustworthy Computing’?
In today’s society, trust is a very rare commodity, and things are increasingly organised along the lines of mistrust. What we see is an overriding impulse to regulate society, where society can be confident that human aspirations, risk taking and experimentation are constrained and limited. This culture of limits forms the backdrop to the IT industry’s attempt to win friends through the promise of delivering trustworthy computing.
The absence of trust in social relations has been transposed on to computers and communications networks. The demand for ‘trust’ from computers or the internet cannot be fulfilled other than through the forcible imposition of expected outcomes and roles. This goes beyond the question of standardising these technologies. Whether the IT industry likes it or not, the floodgates are being opened to regulating the use of these technologies in ways that will insist on the same limitation of human aspirations.
The problem with ‘Trustworthy Computing’ is not simply that it will never deliver its targeted outcome. The technological determinism underlying this initiative overestimates what can be achieved by technology, while underestimating what is really at play. Instead of the IT industry convincing the public of its fitness to deliver this elusive ‘trust’, the IT industry will be forced to adapt to society’s lowered expectations.
The long-term danger is that instead of better, more robust and secure computing platforms and networks – the proper goal of the initiative – we will have less innovation and experimentation, as the IT industry scores own goals in its failure to deliver trust.
Dr Norman Lewis is director of technology research at Freeserve.com plc. He writes here in a personal capacity.
He is speaking at the spiked-seminar ‘Trusting our technology: Is trusted computing deliverable – or even desirable?’, at Hill and Knowlton in London, on Thursday 27 March 2003. For further details, email Sandy.Starr@spiked-online.com
(1) See the email memo sent by Bill Gates to all Microsoft employees on 15 January 2002.
(2) Homepage of the Trusted Computing Platform Alliance website
(3) Trustworthy Computing, Craig Mundie, Pierre de Vries, Peter Haynes and Matt Corwine, Microsoft, October 2002
(4) See Mechanising Proof: Computing, Risk and Trust, Donald MacKenzie, MIT Press, 2001, p300-301
(5) Mechanising Proof: Computing, Risk and Trust, Donald MacKenzie, MIT Press, 2001, p6
(6) Trusted Computing Platform Alliance Frequently Asked Questions, rev 5.0 (.pdf 24.4 KB), 3 July 2002
(7) See The Problem of Trust, Adam B Seligman, Princeton University Press, 2000