The robots are coming.
Indeed, it may just be time for Americans to welcome more scrutiny into their lives. The United States has had surveillance cameras for decades, and facial recognition software has been tied to some of the thousands of cameras in use in public places for most of the past 10 years. However, the April 15 bombing at the Boston Marathon may, like the UK’s summer riots of 2011, be the moment when the public becomes aware of the presence and advanced capabilities of what are now being called “analytic cameras”: cameras designed not only to capture but to analyze what they see.
In this particular horrific incident, hundreds of cameras were trained on various parts of the crime scene, since it took place at the focal point of a globally known and watched event: the finish line. While not a nationally televised live sporting event, the Boston Marathon drew thousands of onlookers hoping to watch the race, or to snap pictures or grab video of friends and family as they surged toward the line. Because it is a timed event, cameras were also trained on the finish to determine finishing places. And lastly, because it took place in a dense urban environment and a major commercial zone, probably hundreds of private and city closed-circuit television (CCTV) cameras caught fragments of the scene.
It is this dense visual coverage that has called machine intelligence into play: the images weren’t centrally controlled but were captured by businesses and by hundreds of individuals, creating a mountain of visual evidence of varying quality, spread over a large area, and too vast to be parsed quickly by humans. Since the bombing, the FBI has fed mounds of this evidence into face recognition applications to get a fix on those responsible.
And the demand has not come solely from law enforcement officials. The clamor online and among media pundits to quickly call in the robots to grab images of possible suspects and identify them has been building over the last 48 hours. More than once I’ve heard a sentence start with “You’d think they could…,” stoking the public expectation that these analytic cameras are a silver bullet. Some didn’t even wait, but made the story up themselves.
Others haven’t waited either, instead playing “human flesh search engine,” as the Chinese have termed the phenomenon, with members of the internet sites Reddit and 4chan doing collaborative markup jobs on the crowd photos. Fortunately, some loud voices have called for this to cease, but it demonstrates the expectation, even among tech-savvy early adopters, that machines should do the work. Ironically, Reddit and 4chan users are often among the most anti-surveillance of groups.
If you/someone you know is circulating CCTV pix of random people from Boston Marathon circled b/c they have a backpack, STOP. THAT IS DUMB.
— Cory Doctorow (@doctorow) April 17, 2013
The UK has led in the deployment of cameras and smart software to spot and track individuals, read body geometry, and call out offenders. In the US, an already primed domestic security complex will undoubtedly answer public pressure by doubling down on the latest security technology darling. Forecasters have long anticipated such growth, with the market for networked facial recognition (cameras connected to a network) growing at a healthy 30%-plus per year over the past three years, driven in part by use in surveillance societies such as China. A similar boom in behavioral analysis applications (starting with simple smile recognition on digital cameras, but manifesting as detection of unusual body movements at the high end) will give the watchers a way to spot someone limping in a crowd, or carrying something heavy.
This technology development was happening anyway, to be sure. But history shows that the demand for instant, painless action to find and apprehend those responsible for the Boston incident, from the tech-smart and the man on the street alike, will spur more, not less, watching and analyzing of our daily activities. Drones, for example, offer a guide to the trajectory of sophisticated robot vision in public safety. Doubtless the two will come together domestically at some point soon, with advanced visual tech such as Gorgon Stare on tap.
Then again, this kind of recognition and tracking is being legitimized in safer surroundings: in the retail shops we visit, in bars, and through the smartphones we carry in our pockets. Consumer visual analysis is held back at the moment only by privacy concerns, not by technical capability. While we agitate for the latest technology to be brought to bear on a heinous crime, we should also know that we’ll eventually have that technology focused on ourselves.
We welcome your comments at firstname.lastname@example.org.