No one wants your data more than your health insurer.
Insurance companies in the US can’t discriminate based on prior conditions. But they can use your health data to incentivize healthier behavior that, in theory, would bring their costs down. Historically, that data only included your medical records, plus what you told your insurer when purchasing a policy. Now, the increasing availability of public data, the proliferation of affordable wearables, and new AI-driven techniques capable of analyzing all the data generated are providing insights of increasing importance to insurers. We’re told this is a win-win situation: We get healthier (and possibly a few bonuses from our insurers, too), and insurers pay out less accordingly. But we may be sleepwalking into a data-fueled panopticon.
From an actuarial perspective, it’s simply good business to collect as much data about your customers as possible and build profiles of them based on that information. “Insurers need to know as much about their customers as possible,” says Charlotte Tschider, a fellow in health and intellectual property law at DePaul University. “They need to see how much you’ll cost them.”
Insurers today are, in fact, gathering ever-more-detailed information about us, using publicly available and purchasable information like shopping records, household details, and social-media profiles to inform decisions. You may have told your doctor or insurer that you stopped smoking, started eating more healthfully, and joined a gym, but unless you pay in cash (pretty much the only way to truly opt out of this monitoring), they might be able to see the cigarette- and fast-food-fueled lifestyle you actually lead by reading the records kept by your loyalty and credit cards. And let’s not mention those late-night social-media posts from the backyard of the club, a lit cigarette and bottle in each hand.
The available avenues for data collection are growing at a frightening rate, not least because of the spread of connected devices. The Internet of Things can provide even more real-time data on our behaviors and movements, data that are incredibly useful in revealing personal information about our health and wellbeing.
Apps and associated wearables, like Fitbit and Apple Watch, can be used to monitor everything from what you eat to your heart rate, blood pressure, and blood-sugar levels; record sleep patterns; and help manage chronic conditions like depression and diabetes. And yes, if you’ve used one of the ever-cheaper and proliferating genetic-testing services recently, your genome is quite possibly out there, and maybe even for sale. DNA-driven discrimination for health insurance may be illegal and immoral, but (in the US, at least) life and disability insurers can use your genetic information when deciding whether to insure you and even your entire family.
In isolation, each of these data streams wouldn’t be of much use to anyone; at the moment, we’re generating and collecting so much data that we scarcely know what to do with them. This is where artificial intelligence comes in, especially machine learning. Rather than relying on a set of pre-programmed rules to process data, machine learning allows AI to learn from the data itself, drawing its own insights. As a result, we are now able to get meaningful insights from massive datasets that would have previously been unmanageable. With the processing power of AI, these data open up a panoply of opportunities: creating deeply personalized medicine, helping determine whether you will respond to certain medications, indicating your disease risk, and developing new treatments.
Insurers are keen to use all of this information to try and nudge customers towards healthier behaviors. For example, UnitedHealthcare customers can earn up to $4 a day in healthcare credits if they meet three daily fitness goals set by the insurer, colorfully named “frequency,” “intensity,” and “tenacity.” To fulfill these goals, you must: 1) walk six sets of 500 steps, finishing each set in just seven minutes, with the sets spaced throughout the day at least one hour apart from each other (frequency); 2) take 3,000 steps in a single 30-minute period (intensity); and 3) take a total of 10,000 steps in the day (tenacity). The New York-based insurance company Oscar will similarly give customers up to $240 a year in Amazon vouchers if they meet the company’s daily fitness goals. John Hancock, a US-based insurer, offers incentives to life-insurance policy holders who wear Fitbits that monitor their activity and give this data to the company.
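To see how mechanical these goals are, here is a rough sketch of the three checks as they might be applied to a day of per-minute step counts from a wearable. This is a hypothetical reconstruction based only on the rules described above, not the insurer's actual logic; the greedy scan for "frequency" and the treatment of spacing are assumptions.

```python
def meets_goals(steps_per_minute):
    """Check one day of per-minute step counts against the three goals
    described above. Hypothetical reconstruction, not the insurer's code."""
    assert len(steps_per_minute) == 1440  # minutes in a day

    # Tenacity: 10,000 total steps over the day.
    tenacity = sum(steps_per_minute) >= 10_000

    # Intensity: 3,000 steps within some 30-minute window (sliding sum).
    intensity = False
    window = sum(steps_per_minute[:30])
    if window >= 3_000:
        intensity = True
    for i in range(30, 1440):
        window += steps_per_minute[i] - steps_per_minute[i - 30]
        if window >= 3_000:
            intensity = True
            break

    # Frequency: six bursts of 500 steps, each completed within seven
    # minutes, with bursts starting at least an hour apart (greedy scan;
    # the exact spacing rule is an assumption).
    bursts = 0
    last_start = -60
    i = 0
    while i <= 1440 - 7 and bursts < 6:
        if i - last_start >= 60 and sum(steps_per_minute[i:i + 7]) >= 500:
            bursts += 1
            last_start = i
            i += 7
        else:
            i += 1
    frequency = bursts >= 6

    return frequency, intensity, tenacity
```

The sketch makes the article's point concrete: a steady, moderately active day (say, 10 steps every minute) clears "tenacity" but fails the other two, while someone whose schedule allows six deliberate, well-spaced bursts can clear "frequency" without being especially active overall.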
Elsewhere in the world you’ll often find a similar story. British insurer Vitality, for example, offers its UK customers tracking devices at discounted prices and encourages them to earn “Vitality points” that can be spent on a variety of rewards including cinema tickets and Apple Watches.
As wearables become more affordable and the AI-driven techniques used to analyze the data they generate improve, the insights wearables provide will become more valuable to insurers, and we can expect programs like these to become more popular. At face value, this seems great: we get healthier and get to enjoy whatever exciting bonus we’re offered. What’s wrong with that? The reality is that health insurers are increasingly leveraging the insights from this data for their benefit, not yours.
To start, healthier customers tend to keep costs down. In addition, being in the business of risk, insurance companies are very interested in anything that might help them calculate those risks more accurately and potentially intervene in order to lower them. For example, if an insurer knew you had a behavior (smoking, say) that raised your risk of developing a certain condition (in this case, lung cancer), they could choose to charge you higher premiums or target you with schemes designed to alter your behavior and lower your risk. It may be illegal to deny coverage based on pre-existing conditions, but discriminating based on behavior is considered acceptable, even fair, by many. Insurers may even appear altruistic when reaching out to try to improve customers’ health. But, Tschider warns, “we need to remember this isn’t really about doing something for the benefit of humanity. It’s about cost.”
Viewing our bodies as a series of boxes to check washes out many of the social complexities that connect lifestyle and health. If you’re suffering from depression, for example, it is often harder to exercise or eat healthily. An insurance system that obscures these complexities serves to discriminate against those who are already worse off.
Further, many poorer people may live in neighborhoods in which they feel unsafe walking around, making it difficult or even impossible to get their recommended 10,000 steps a day. Others may be unable to arrange childcare in order to free up the time needed to exercise frequently enough to hit goals. Often, those same people are the ones less able to afford healthy and nutritious food. In the end, no matter how strong their desire is to reap the rewards promised by a supposedly merit-based insurance system, they’ll fail for systemic reasons.
Essentially, data-driven insurance policies promise incentives to the privileged while further discriminating against those most in need of support. And this discrimination is hidden. Many who sign up for policies they believe, and are told, to be in their best interests, both financially and medically, will instead find themselves in much worse positions than before. “These schemes incentivize behaviors individuals of greater means can more easily satisfy…and groups already worse off will probably not benefit from them,” says Tschider. “Previously, we’ve thought of discrimination in terms of discrete categories; it’s much more complex now. With so much information from programs and devices that can be used as proxies for more sensitive information we really need to think about how we can use this information.”
“The key thing to think about here is agency,” adds Matthew Fenech, a former UK National Health Service doctor who is now an AI policy researcher at Future Advocacy, a London-based think tank. “There are many complex reasons why people may or may not engage in certain behaviors. We need to unpack why and [have] policies that are capable of protecting those less well-off to ensure they don’t end up suffering adverse consequences.”
Looking forward, as more and more people elect to share their information with their insurer, sharing will become the default, leaving others with little choice but to comply and also share, lest they pay higher rates elsewhere or potentially find themselves unable to secure coverage. Before this truly does become the norm, we should be asking ourselves, “Who is it who really benefits when we strap on a Fitbit?” If we think about it hard enough, the answer becomes clear pretty quickly: not us.