Healthcare inequalities are systemic and closely intertwined with social inequalities. In the US, black men and women can expect to live a decade less than their white counterparts, and are far more likely to die from heart disease, various types of cancer, and stroke. Rates of diabetes among Hispanic Americans are around 30% higher than among whites. Gay, lesbian, and bisexual adults are twice as likely to suffer from mental-health problems. Access to and quality of healthcare are similarly uneven, splitting starkly along racial, social, and economic lines.
If developed and used sensitively, artificial intelligence systems could go a long way toward mitigating these inequalities by removing human bias. A careless approach, however, could make the situation worse.
AI has the potential to revolutionize healthcare, ushering in an age of personalized, accessible, and lower-cost medicine for all. But there’s also a very real risk that those same technologies will perpetuate existing healthcare inequalities. A large part of this risk comes from existing biases in healthcare data.
AI’s transformative potential comes from its ability to interrogate, parse, and analyze vast amounts of data. From this information, AI systems can find patterns and links that would previously have demanded enormous expertise or time from human doctors. For this reason, AI is particularly useful in diagnostics, in creating personalized treatment plans, and even in helping doctors keep up to date with the latest medical research.
But this use of data risks exacerbating existing inequalities. Data from randomized controlled trials are often riddled with bias. The highly selective nature of trials systemically disfavors women, the elderly, and those with medical conditions beyond the one being studied; pregnant women are often excluded entirely. AIs trained to make decisions on this skewed data will reproduce the biases it contains. This is especially concerning for medical data, which skews heavily toward white men.
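The mechanics of this skew can be seen in a toy simulation. Suppose a model is fitted mostly on one group while a second, underrepresented group presents the same disease differently: the single "best" decision rule ends up tuned to the majority. Everything below is invented for illustration; the groups, biomarker, and numbers are hypothetical, not real medical data.

```python
import random

random.seed(0)

def sample(group, n):
    # Hypothetical toy data: disease raises a biomarker by a
    # group-dependent amount (group "B" presents less strongly).
    shift = 2.0 if group == "A" else 0.8
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        x = random.gauss(shift if sick else 0.0, 1.0)
        data.append((x, sick))
    return data

# Training set mirrors a skewed trial population: 95% group A.
train = sample("A", 950) + sample("B", 50)

def fit(data):
    # "Model": the single threshold with the lowest training error.
    best_t, best_err = 0.0, float("inf")
    for i in range(-30, 31):
        t = i / 10
        err = sum((x > t) != sick for x, sick in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(data, t):
    return sum((x > t) == sick for x, sick in data) / len(data)

t = fit(train)
acc_a = accuracy(sample("A", 2000), t)
acc_b = accuracy(sample("B", 2000), t)
print(f"threshold={t:.1f}  accuracy A={acc_a:.2f}  B={acc_b:.2f}")
```

The fitted threshold sits where it best separates the majority group, so the model is noticeably less accurate for group B, even though B was "in" the training data. No amount of extra training on the same skewed sample fixes this; only representative data does.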
The consequences of this oversight are pernicious. Women are far more likely than men to suffer harmful side effects from medication. Pregnant women get sick, yet the consequences of taking many medications during pregnancy are chronically understudied or, worse, entirely unknown. Women are far less likely to receive the correct treatment for heart attacks because their symptoms do not match the “typical” (read: male) presentation.
If evidence-based medicine is already far less evidence-based for anybody who is not a white male, how can the use of this unmodified data do anything other than unwittingly perpetuate this inequality? If we want to use AI to facilitate a more personalized medicine for all, it would help if we could first provide medicine that works for half the population.
The effects of this data can be even more insidious. AI systems often function as black boxes: even their developers cannot see how a system reached its conclusion. That makes it particularly hard to identify any inequality, bias, or discrimination feeding into a given decision. The problem is compounded when the medical data a system was trained on cannot be inspected, whether to protect patients’ privacy or because the data is not in the public domain. And even with access to that data, the often proprietary nature of AI systems means interrogation would likely be impossible. By masking these sources of bias, an AI system could consolidate and deepen the already systemic inequalities in healthcare, all while making them harder to notice and challenge. The result will invariably be a system of medicine unfairly stacked against certain members of society.
This is especially true of less-connected communities. There is already an unhealthy digital divide: poorer and older members of society lack access to the digital technologies that can improve healthcare. That also means they are not producing the data that comes with using those technologies, and as this chasm grows, the system will be stacked even further against older and poorer patients. Even if they gained access to these technologies in the next decade, it could be too late: the systems would already be calibrated for younger, more urban bodies.
If we don’t closely monitor AI’s use in healthcare, it risks perpetuating existing biases and inequalities by building systems on data that systemically fails to account for anyone who is not white and male. At its core, this is not a problem with AI but a broader problem with medical research and healthcare inequalities as a whole. If these biases are not accounted for in future models, however, we will go on building a healthcare system even more uneven than the one we have today.