A recent wave of cyberattacks—from WannaCry and Equifax to the alleged Russian influence on the US election—has demonstrated how hackers can wreak havoc on our largest institutions. But by focusing only on hackers’ efforts to extort money or mess with our political process, we may have been missing what is potentially an even scarier possibility: data manipulation.
Imagine that a major Big Food company gets hacked. But this time, instead of leaking the company’s proprietary information to the public or freezing its systems with ransomware, the hackers subtly manipulate the data on which the company relies. Expiration dates on milk cartons get scrambled, so that some are thrown away early while others make drinkers sick despite appearing to be within their use-by date. Figures on pending invoices to vendors are tweaked slightly, altering the company’s balance sheets by hundreds of thousands of dollars. Small changes are made to food-safety test results, so that a dangerous product that was failing suddenly appears to pass.
Would the company even notice such changes happening? Could it still have the confidence that its backups were uncompromised? How could its investors accurately assess the company’s value when all of its financials might suddenly be based on faulty information? And how might its customers and suppliers respond?
Now apply this thought experiment to banks, medical institutions, and government organizations. It’s pretty scary.
Unlike “information-gathering” hacks (where data is stolen because it is valuable) or “hold hostage” attacks (where data is imprisoned until someone pays to release it), “manipulation hacks” are hard to detect: they occur when individuals (or bots) illegally alter vital information below the threshold of attention.
Take the recent Equifax breach as an example. A software engineer set up a fake website claiming to help affected customers, and Equifax itself linked to the fake site. The site’s contents made clear it was not real (its purpose was to expose how easy phishing is), but Equifax fell for cybersecurity’s equivalent of fake news, and the link drew some 200,000 clicks before the company noticed.
In the most extreme cases, such hacks could have deadly consequences. A paper in the New England Journal of Medicine recently reported on the risks of data breaches in health systems, noting that a hacker could in theory change a single data point, such as the level of potassium in a patient’s blood, leading caregivers to provide incorrect and potentially lethal treatments. Given that medical errors are now the third leading cause of death in the US, there is plenty of reason to worry.
More broadly, data manipulation breeds uncertainty. When a hacker’s goal is to leak stolen information or hold data for ransom, their success depends on their ability to prove the information they hold is real. But with data manipulation, the goal is to call the underlying information into question. And uncertainty is its own weapon. Ten years ago, an announcement by the banking group BNP Paribas helped set the 2007 financial crisis in motion: the bank said it could not determine what its securities linked to subprime mortgages were worth. In today’s data-driven markets, the consequences of such uncertainty for the financial industry could be far greater.
Admittedly, data-manipulation hacks are not as easily monetizable as ransomware, nor do they produce as much buzz as the public release of sensitive data. But that doesn’t mean they can’t have serious financial repercussions. When hackers took over the Associated Press’s Twitter account in 2013, a few fake tweets about a terrorist attack caused the stock market to take a nosedive. A future attack could call the quarterly earnings of a publicly traded company into question, and benefit from the postponement of that company’s earnings call.
There are also deep-seated political implications to data manipulation. The data on the servers of state election boards are an obvious target; in at least one US county in 2016, a hacker manipulated the voter data, though the issue was corrected before the election actually occurred.
Because of the opportunity that data manipulation provides, we need to take simple steps now, before this kind of hack becomes more common. First, we need to design systems that actively watch for manipulation: hard or offline backups are essential, and data holders should develop systems to regularly compare live versions of their data against those backups. (According to Osterman Research, most companies don’t do this continuously, and some don’t do it at all.)
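As a rough sketch of what such continuous comparison might look like, assuming the simplest case of a file-based data store with an offline backup copy (the directory names and function names here are illustrative, not any particular vendor’s tool):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_against_backup(live_dir: str, backup_dir: str) -> list[str]:
    """List relative paths whose live contents no longer match the backup.

    Flags files that were silently altered or deleted on the live side;
    a real system would run this on a schedule and alert on any output.
    """
    live, backup = Path(live_dir), Path(backup_dir)
    changed = []
    for backup_file in backup.rglob("*"):
        if not backup_file.is_file():
            continue
        rel = backup_file.relative_to(backup)
        live_file = live / rel
        if not live_file.is_file() or sha256_file(live_file) != sha256_file(backup_file):
            changed.append(str(rel))
    return sorted(changed)
```

The point is not the specific code but the discipline: because the backup is offline, an attacker who quietly edits the live data cannot also edit the reference copy, so even a one-byte change surfaces in the comparison.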
We also need better database oversight, such as systems designed to identify precisely when data has been manipulated. For example, in the art industry, collectors use provenance documents to verify who has owned an artwork and how it has changed over time. Perhaps a similar system to verify the provenance of data could help us tell when a data manipulation hack has occurred.
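One way to make such data provenance concrete is a tamper-evident log, where each entry records a cryptographic hash of the entry before it, so that changing any past record breaks the chain. This is a minimal sketch of that idea (the record fields are hypothetical), not a production audit system:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 of a record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def append_entry(chain: list[dict], data: dict) -> None:
    """Append data to the provenance chain, linking it to the previous entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"data": data, "prev": prev}
    entry["hash"] = record_hash({"data": data, "prev": prev})
    chain.append(entry)

def verify_chain(chain: list[dict]) -> bool:
    """Return True only if no entry has been altered since it was written."""
    prev = "0" * 64
    for entry in chain:
        if entry["prev"] != prev:
            return False
        if entry["hash"] != record_hash({"data": entry["data"], "prev": entry["prev"]}):
            return False
        prev = entry["hash"]
    return True
```

Like an artwork’s provenance papers, the chain does not prevent forgery outright, but it makes any after-the-fact alteration detectable: rewrite one old food-safety result and every subsequent link fails verification.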
But there is a small silver lining: one of the easiest ways for organizations to defend against hackers is to beat them at their own game. When infiltrators can’t tell which data is real, they won’t know what is actually of value. Emmanuel Macron’s French presidential campaign, for instance, purportedly distracted hackers with fake data, which limited the effectiveness of the campaign hacks.
Perhaps this really is an instance where fire(wall) can be fought with fire(wall).