In our quest for convenience, we are trading away our free choice. This is the free choice that our forebears fought wars of independence for: the right to decide where, how, and with whom we live, and the sacred rights of self-determination, a full life, and the opportunity to reach our full potential.
Yet today, AI makes decisions for us in every area of our lives without our conscious involvement. Machines mine our past patterns and those of allegedly similar people across the world, and then decide not just what news articles we see, but with whom we should commune and forge bonds, what goods and services we should purchase, or for whom we should vote in our political processes. This influences our opinions, our relationships, and our social fabric.
By replacing human-curated judgement with data-backed judgement, AI ultimately narrows our field of vision and reduces our social and economic choices—in retail, dating, entertainment, education, health care, and job opportunities. Taken individually, the nudges of mercantile and political interests may be of little consequence. But en masse, our lives become more and more subtly influenced and molded by the companies we let make decisions for us.
In this way, the salient tradeoff in the AI age is not privacy, but choice itself.
Sponsors are tuned in to our behavior, adding a mercantile sway to the information we receive. It started with us, as consumers, trading our data for convenience. In our cars, we share behavior patterns with automakers and insurance companies so they can deliver better navigation, automated driving, and lower insurance rates. In our home lives, companies use our socioeconomic profiles, life patterns, and our cognitive and visual preferences to keep us “engaged” in richer, more customized entertainment—with the hopes that we’ll pay for that next episode, in-game advantage, irrigation system, video-monitoring service, or smart-home thermostat.
Shopping online gives us the convenience of searching a catalog of billions of products from our couch—but more often it only shows us our recent searches, our past purchases, and similar products based on them. Is that really free choice? The Amazon experience theoretically offers a vast range of products that no print catalog can match. But it also reinforces our own tastes over and over again, based on past transaction data. In practice, we stew in our own consumer characteristics; our range of exposure and choice is narrowed to up the odds we’ll buy.
Or look at how hard it can be to find something new to watch on Netflix or Hulu: A search for “film noir” often shows only part of the cinematic canon, depending on your device, and further orders the results based on your prior watching habits. While practical, an Apple TV, an Amazon Fire TV, or a Google Chromecast narrows your natural exposure to art, even when you go searching for it.
Not only are our choices narrowed by monetary incentives—they are narrowed by the use of algorithms that put us into what statistics calls “clusters,” which are groups with similar behavior profiles. If you happen to watch 1930s classic movies, enjoy swing dancing, are close to paying off your mortgage, and buy deluxe birdseed for the window feeder, the machine may place you in a retired baby-boomer group. Now you’ll be hearing a lot from cruise lines, which find their clientele in that particular cluster, and you’ll stop receiving promotional coupons from The Gap or seeing Ed Sheeran suggested in your streaming channels.
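To make the mechanism concrete: one common way such clusters are formed is the k-means algorithm, which repeatedly groups people around the average of their nearest neighbors’ behavior. The sketch below is purely illustrative—the feature names, numbers, and the choice of k-means are assumptions for demonstration, not a description of any real company’s system.

```python
# Toy illustration of behavioral clustering with k-means.
# Hypothetical features per consumer: [classic-film hours/week,
#                                      mortgage paid-off %,
#                                      birdseed $/month]
import math

def kmeans(points, k, iters=20):
    # Initialize centroids with the first k points (deterministic for this sketch).
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            groups[i].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, g in enumerate(groups):
            if g:
                centroids[i] = [sum(dim) / len(g) for dim in zip(*g)]
    return centroids, groups

consumers = [
    [12, 90, 40], [10, 85, 35], [11, 95, 45],  # resembles the "retired boomer" profile
    [1, 10, 0],   [2, 5, 0],    [0, 15, 2],    # resembles a young-renter profile
]
centroids, groups = kmeans(consumers, k=2)
```

Run on these six hypothetical shoppers, the algorithm separates them into the two profiles above—and from then on, each person is marketed to as a member of their cluster, not as an individual.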
As a result of your perceived cluster, your consumption choices may be adjusted. The options that appear before you narrow, and you cease to imagine the alternatives that aren’t presented to you. That shrinking menu of options erodes your capacity to choose freely.
Beyond the narrowing effect of clustering is the growth-limiting effect of predicting preference from past behavior. Rather than being presented new and potentially challenging experiences, we see echoes of our past trajectory. Amazon keeps us in clusters of like-behaved shoppers; Google of like-behaved searchers; and Facebook, tragically, of like-behaved citizens. Reliant on behavior data, the machine constrains us to what we have been, rather than what we wish or hope to become.
The 19th-century historian Alexis de Tocqueville observed that Americans are continually evolving. We have long prized the freedom to reinvent ourselves—to move to a new town or country, to take up a new trade, career direction, or hobby, or even a new religion or way of life. Choice and free agency are central to this character.
As AI narrows our choices, will it keep our careers on a single track? Will it guide our lives so that we meet only like-minded people, with whom we get along, and thus deprive us of the encounters and frictions that compel us to evolve into different, perhaps better human beings?
When our choices are constrained to narrow trajectories of consumption, relationships, news, and products, we cease to see these possibilities and life paths. If we trade more and more choice for convenience, we shut out other people’s divergent points of view and rest in the comfort of our cluster. Following this trend to its natural conclusion would extinguish our culture of constructive debate; further divide and stratify our society across political, intellectual, and commercial lines; erode our empathy, social coherence, and loyalty to those fellow humans not in our trajectory; and stifle innovation borne of cognitive and behavioral diversity, as well as the tensions that come from ideas, preferences, and tastes colliding.
In the name of our environment, economy, and humanity, we can’t afford to risk these consequences. Privacy was at the top of policymakers’ lists during the first wave of the internet. Choice and free agency deserve a top spot in the AI age.
This piece was adapted from Groth and Nitzberg’s forthcoming book, Solomon’s Code: Humanity in a World of Thinking Machines.