Google’s artificial intelligence unit DeepMind engaged in “highly questionable” practices when it struck a 2015 deal to access years’ worth of UK hospital patient records held by the National Health Service, says a paper published March 16 in the journal “Health and Technology.”
The paper, written by Cambridge University law academic Julia Powles and Economist journalist Hal Hodson, is the first piece of scholarship to analyze the terms under which 1.6 million patient records from three London hospitals in the NHS Royal Free London trust were shared with DeepMind. That agreement was replaced by a 2016 deal, which the authors plan to analyze in future work. The earlier agreement is currently being investigated by two UK regulatory bodies. One of those investigations, by the Information Commissioner’s Office (ICO), is “close to conclusion,” the ICO says.
The paper argues that both DeepMind and the hospital administrations, in their eagerness to take advantage of national data-sets, were too lax in the way the data was shared. Access to data is a crucial advantage for AI firms, which seek reams of it to train ever more complex machine intelligences. A favorable deal, the paper argues, could give DeepMind a “monopolistic” position over health analytics in future.
DeepMind and the hospitals say the authors have made “significant factual and analytical errors.” The company says it is commissioning an analysis of the paper that it will publish. The paper was peer-reviewed before publication.
The authors and DeepMind are now embroiled in rounds of technical arguments and counter-arguments, an illustration of how tricky privacy law becomes in the context of artificial intelligence. For instance, DeepMind’s statement responding to the authors says:
This paper completely misrepresents the reality of how the NHS uses technology to process data.
The authors, in a statement, reply:
The accusations of factual inaccuracy and analytical error were unsubstantiated … [the] article sets out precisely what it is about this kind of data agreement that is unusual and of public interest.
On another point, whether DeepMind was transferring more data than it needed for its Streams app for NHS clinicians, DeepMind pointed to a Q&A on its website, which has been updated with new answers addressing the questions posed by Powles and Hodson:
The initial agreement between DeepMind and the Royal Free, signed in 2015, made clear that DeepMind was processing data strictly under the instructions of the Royal Free (which remains the case today, under our revised contracts). These instructions were to process data needed for the detection and treatment of acute kidney injury.
Powles and Hodson fired back:
[DeepMind’s claim] is simply not true. The initial agreement (pdf, p. 2 under “Why is the information being shared?”) provides the instructions for DeepMind’s data processing.
When DeepMind was asked why it didn’t consult any of the UK’s many regulatory bodies about its agreement with the Royal Free, one of the authors’ criticisms, it said:
Hospitals wouldn’t routinely approach the ICO before signing an agreement with a data processor, and there’s certainly no obligation to do so.
To which Powles and Hodson replied:
DeepMind has never developed a piece of commercial software, let alone health software, in its existence. Does [DeepMind] have any reasoning beyond sticking to the letter of the law for not approaching these bodies?
And so on, and so forth.
These thorny exchanges underline the complexity of negotiating the right to exploit national, government-owned databases. DeepMind says it simply wants to make healthcare tech systems more efficient, and besides, it’s not applying any machine learning or AI techniques to this data. The authors warn that the scope of the data-sharing means DeepMind’s purpose for the data is ambiguous.
Medical data experts see missteps in the way DeepMind and the Royal Free handled their 2015 agreement. Nicola Perrin, who leads the Understanding Patient Data initiative at the non-profit Wellcome Trust, says the deal was “not transparent” enough. “They needed to do a much better job of explaining why so much data was transferred, and to distinguish clearly between uses of data for care and for research,” she says.
Observers of the booming artificial intelligence scene note that much is at stake for AI firms like DeepMind—and society at large. “With health data, and government-acquired health data, we need to be sure we aren’t, in effect, giving oxygen away for free to a private company that will start to sell it back to us,” says Azeem Azhar, who writes the popular Exponential View newsletter, which covers AI.
DeepMind’s deal with the Royal Free casts a spotlight on attempts by AI companies to access troves of healthcare data. Granting that access means dragging the antiquated computing systems running our hospitals into a new, technologically sophisticated age. But it also means that the only thing shielding citizens’ private health data from some of the most powerful companies on earth is a skein of privacy laws and regulatory agencies.
If data is the new oil, then public health data is the new oil rush. As Powles and Hodson write: “Without people, there is no data. Without data, there is no artificial intelligence. It is a great stroke of luck that business has found a way to monetize a commodity that we all produce just by living our lives.”