On my iPhone, I start typing out a text message and “Helloooo” pops up, a recommendation from Apple. Instead, I type “Hey. What’s up,” but my phone assumes I want to ask “What’s upsetting,” which is wrong and also, somehow, upsetting. The system then suggests three separate emoji instead of the word “up,” none of which make much sense.
This doesn’t just happen on my phone. In Google Docs, starting a sentence with “To” elicits the suggestion “To whom it may concern.” In Gmail, after a woman named Amanda emails me, the reply box rightly suggests starting with “Hey Amanda.”
These suggestions are called predictive text and they are ubiquitous on the internet. (In fact, predictive text helped me spell out “ubiquitous” before I could mess it up.) But how did this evolution come about? And what does it mean for the future?
The origins of predictive text
I first noticed predictive text on an LG flip phone in 2007. Instead of cycling through the letters assigned to each number key, I could switch on T9 and press one key per letter. The system would figure out that typing 8-4-3 usually meant I wanted to write “the.”
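For the curious, here is a minimal sketch of the T9 idea in Python, assuming a toy dictionary with invented word frequencies: every word maps to a digit sequence, and the words whose sequence matches what you typed are ranked by how common they are.

```python
# A minimal sketch of T9-style lookup: each digit covers several letters,
# and the phone ranks dictionary words whose key sequence matches the digits
# you pressed. The word list and frequencies below are invented for illustration.

KEYPAD = {
    "a": "2", "b": "2", "c": "2", "d": "3", "e": "3", "f": "3",
    "g": "4", "h": "4", "i": "4", "j": "5", "k": "5", "l": "5",
    "m": "6", "n": "6", "o": "6", "p": "7", "q": "7", "r": "7", "s": "7",
    "t": "8", "u": "8", "v": "8", "w": "9", "x": "9", "y": "9", "z": "9",
}

# Toy dictionary: word -> rough frequency in everyday text (made-up numbers).
WORD_FREQUENCIES = {"the": 500, "tie": 40, "vie": 5, "tid": 1}

def to_keys(word: str) -> str:
    """Convert a word to the digit sequence you'd press on a T9 keypad."""
    return "".join(KEYPAD[ch] for ch in word.lower())

def t9_candidates(digits: str) -> list[str]:
    """Return dictionary words matching the digits, most frequent first."""
    matches = [w for w in WORD_FREQUENCIES if to_keys(w) == digits]
    return sorted(matches, key=WORD_FREQUENCIES.get, reverse=True)

print(t9_candidates("843"))  # ['the', 'tie', 'vie', 'tid'] -- "the" wins
```

Several words share the sequence 8-4-3; the frequency ranking is what lets the phone put “the” first and leave the rarer options a keypress away.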
But the concept dates back much further. Some scholars see mid-20th century Chinese typewriters as a basis for predictive text. These handcrafted machines placed Chinese characters next to one another based on how frequently they were likely to be used together. The QWERTY system that most modern keyboards are built on, which dates back to the 1870s, also somewhat optimizes for common letter combinations. And even that system pales in comparison to the hyper-efficiency of stenotype machines, which court reporters rely on to type out transcripts at record speeds.
Today, predictive text lives in the liminal space between what you will type and what you mean to type. It’s a step beyond autocorrect, also known as text replacement or replace-as-you-type, though the two technologies are interrelated: Both are rooted in natural-language processing (NLP), a field that combines linguistics and computer science. But instead of correcting a mistake, predictive text tries to ward off mistakes in the first place.
In functionality, predictive text feels less like autocorrect and much more like Google’s search autocomplete, which guesses what you’re about to type into the search bar based on what billions of other people have searched.
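To make the comparison concrete, here is a toy sketch of frequency-based next-word prediction, the rough idea behind those suggestions. Real systems draw on vastly larger corpora and neural language models; the tiny corpus below is invented for illustration.

```python
# A toy sketch of frequency-based next-word prediction: count which words tend
# to follow a given word in a corpus, then suggest the most common continuations.
from collections import Counter, defaultdict

# Invented mini-corpus; "." just marks sentence breaks.
corpus = "what's up . what's upsetting . what's up with you . hey what's up"
tokens = corpus.split()

# Count every (current word, next word) pair seen in the corpus.
bigram_counts = defaultdict(Counter)
for current_word, next_word in zip(tokens, tokens[1:]):
    bigram_counts[current_word][next_word] += 1

def suggest(word: str, k: int = 3) -> list[str]:
    """Return the k words most often seen after `word` in the corpus."""
    return [w for w, _ in bigram_counts[word].most_common(k)]

print(suggest("what's"))  # ['up', 'upsetting'] -- "up" is the likelier guess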
The future of typing
While the future of typing may involve fewer misspellings and slightly less work, the ongoing evolution of how we type could have bigger consequences.
Predictive text reinforces certain speech, and serves as a reminder that no one is all that original. Our word choices, our way with language, and even our thoughts conform to societal standards, which makes it fairly easy for Siri, Alexa, Cortana, Watson, or whatever disembodied computer brain you prefer, to figure us out. One study from researchers at Harvard and Calvin College found that relying on predictive text makes writing, well, “predictable.” Indeed, the more we use predictive text, the more we reinforce existing standards and see them reflected back to us.
In a few years, computers could play an even more active role in how we write. Eventually, we may not even need a keyboard to type, and could instead pantomime the movements. Or maybe a neural implant like the one Elon Musk is pushing will usher in “think-to-type” technology. Depending on how such technologies are adopted, and by whom, typing itself may ultimately become obsolete.