Google portrays itself as the sort of responsible internet giant that pushes back against intrusive federal requests for user data and is not a collaborator in the US government’s program to eavesdrop on the internet traffic of pretty much the entire world. But merely by providing ever more ways to record our actions, the company is assuming the risk that the US or other governments will simply compel it to give up that data, or else acquire it by other means.
Take, for example, the latest report that the FBI can activate the microphones in smartphones running Google’s Android mobile operating system in order to conduct surveillance. Nor is this ability unique to Google’s products: it can apparently also be done with the microphones and cameras of laptop computers (the Wall Street Journal’s report doesn’t specify, but they were probably running Microsoft Windows). And in 2002, the FBI tapped the microphone of a vehicle’s emergency call system.
But Google, more than perhaps any other company, is aggressively putting sensors, and the software to activate them, into our environment. The just-unveiled Moto X phone from Google subsidiary Motorola has a custom microchip that allows it to listen for voice commands literally all the time, even when the phone is “asleep”. Google’s Chrome web browser now supports voice commands; that means the feature is also rolled into every Chrome OS notebook, which runs Google’s answer to Windows. Google’s face-mounted computer, Google Glass, responds to voice commands. Even though it was Apple that took voice control mainstream with the Siri system on its iPhone, voice is a dominant theme in the future of Google, and is clearly slated to make its way into every product the company makes.
If anyone is smart enough to know how to protect user privacy, it’s the engineers at Google, and that’s probably one reason why so many of us trust Google with what amounts to our entire online lives. But as I mentioned earlier, there are two problems with this trust:
Thanks to the recent revelations by Edward Snowden, we now know that the US National Security Agency, along with its overseas equivalents, has the ability to tap just about all internet traffic within its country’s borders. Most of the processing for Google’s voice control systems is done in the cloud, because it’s faster to ship a compressed version of your voice to remote but enormously powerful servers than it is to try to parse the same data on, say, your phone. This data, like most of our transactions with companies online, is probably encrypted, so even if the NSA were slurping it up, it could be extremely difficult to unscramble. But that encryption often has loopholes, and some companies are better at avoiding them than others.
One reason Google and other internet companies periodically erase old data they have gathered about us is that they know that simply having that data around is a liability. If data exists, governments can compel internet companies to give it up. As Google’s then-CEO Eric Schmidt put it, when it comes to complying with national laws, however much Google doesn’t like it, “they have guns and we don’t.”
Given these two problems, and even leaving aside the times when users accidentally expose their own data by using weak passwords or getting caught in phishing and other attacks, there is a moral question that Google’s leaders do not seem to be discussing, at least in public: is the simple act of recording ever more data morally defensible when there is always the possibility it can be misused for ever more intrusive surveillance?
This is a debate worth having. But the direction of all of Google’s recent products, from the self-surveillance required to enable Google Now to the lack of a simple affordance on Google Glass (like a recording light) to let others know whether or not you’re recording at any given moment, suggests that Google isn’t sufficiently worried about how its actions appear. Of course, that could simply be because the public demonstrably doesn’t care much about privacy.