Last week, we at Privacy International brought a complaint to the Metropolitan Police about Google Street View vehicles intercepting the content of wireless communications, because there is an argument that criminal law has been breached. Part of the reason is that Google has not changed its simplistic interpretation of events for several weeks, even as facts have been emerging at the edges. So we felt we needed an evidence-based approach to establish the truth, and the only body that can discover those facts is the police.
We don’t accept Google’s explanation that it was simply a mistake. Nobody would accept such an explanation in any other circumstance. If there had been an instance of fraud or physical assault, then an explanation such as ‘I’m sorry, it was an innocent mistake’ would not wash.
Many other questions have to be resolved. Why was Google developing the code that captured the data? And to what extent did the rest of the company know about this activity? Much of the innocent-mistake explanation revolves around ‘plausible deniability’ – an important legal question which we have raised directly with Google. We know, for example, that there was probably a product team, and probably an engineering team, aware of what was happening. Does that still constitute an innocent mistake?
A police investigation could resolve these questions and provide a detailed explanation of what actually occurred.
Google has long had a problem with privacy. While there are people within Google who take privacy seriously, the problem seems to sit at the structural level. Google has a prime directive, which is the exploitation of information reserves. So all the organisation’s priorities are pointed in that direction.
The challenges the company faces are twofold. The first is that at senior executive level, privacy doesn’t appear to have permeated anyone’s consciousness other than as a risk factor. Yet privacy has to be something you believe in so that you can actually build it into product design and deployment. The second is that there seems to be a free-market philosophy within Google, which gives some credence to the organisation’s explanation that there was no conspiracy behind the Street View collection of wi-fi data.
Google has an imperialistic nature. It sees data as universal, as global, and it therefore sees its presence and its activities as necessarily borderless. That is, Google believes itself to be free of jurisdictional constraint, which is why it can’t get its head around this concept of Europe. The notion that there will be a jurisdiction with different laws and different approaches to information restriction is something the company has never quite understood. That goes to the heart of the ongoing problem Google faces.
Why is Google so blind to the issue of privacy? It’s partly arrogance and it’s partly a product of being insular and imperialist. The arrogance stems from the company coming to believe its own hype. That is a fatal mistake for any company. If you’re in an organisation that is universally hailed as totally cool, as cutting edge, as the saviour of the world, then that belief will permeate the culture of the company, and anybody working for it becomes resistant to competing ideas. That attitude is symptomatic of all companies that believe they are on the cutting edge, that believe they are doing something for the common good.
I do not want to single out Google as a spectacular example of that sort of attitude. Facebook, for example, is also having to pull itself back from the brink constantly because it, too, began to believe the hype. The difference is that Facebook has a much more interwoven relationship with its members. Facebook sees itself as a membership body, whereas Google sees its users as product.
Why does Google keep getting in hot water? To an extent I think the problem lies with senior management. Google’s CEO Eric Schmidt has a problem with privacy – he just doesn’t get it. He is extraordinarily protective of the company and he has never understood privacy. If someone were to sit Schmidt down for a day and give him a motivational talk about these issues, then the company would change focus almost overnight. But Schmidt has a belief in himself that eclipses even his company’s belief in itself. He has a belief in the supremacy of his modus operandi and believes that way of working should never change.
Google does not seem to have grasped that the privacy issues that have got it into so much trouble over the past few years could be mitigated without compromising its identity or its business model. Becoming privacy-friendly really wouldn’t have to affect the company that much. From a purely strategic perspective, ensuring that your business model incorporates strong accountability and oversight is a matter of due diligence that protects the company; seen that way, protecting privacy is just an added bonus. If a company takes a fast-and-loose approach to designing and developing its products, then it is going to keep running afoul of its user population.
So perhaps even more central than an understanding of privacy is Google’s lack of understanding of the importance of trust. Google never saw the importance of trust because it saw people as product. If you are Facebook, you see people as users and therefore you have to build trust. But if you are Google, and the people are product, then why would you want to develop trust in the first place? You have a monopoly, you’re only interested in the data. So trust becomes a tertiary issue. That then determines the environment in which privacy controversies unfold.
The way I would like to see Google develop is for it to overhaul its product design and deployment process. What we need from Google is a very public formula that is adopted whenever a concept is initiated within the organisation. That means that every time there’s a product idea, somebody is responsible for risk-assessing it.
I want to see every Google product risk-assessed. I want to see every Google product subject to a privacy checklist and every Google product subject to a rigorous and published oversight regime. If something like this were in place, then we would know that whenever an idea came out, a team would be responsible for ensuring that there was no associated risk to individuals.
That would be a start. Microsoft went down this route about four years ago. The company created internal processes with so much oversight that it is difficult to imagine risk being as prominent in a Microsoft product as it currently is in Google’s. I am not saying that Microsoft will not take risks, but the company clearly makes a business decision and a strategic decision about whether to proceed with foreknowledge of those risks. I do not think Google understands the risks to begin with, because it doesn’t understand the issue. If you don’t know the nature of the calamity, then you can’t risk-assess that calamity.
Simon Davies is director of Privacy International. This article is based on an interview by Tim Black.