More than half of the world’s population is bilingual, meaning 3.5 billion people speak more than one language every day. Regardless of whether you’ve spoken two (or more) languages since birth or have picked up a new vocabulary recently, it’s likely you’ve noticed one massive downside:
Communicating multilingually using modern technology es difícil / non è facile / Es ist nicht einfach.
Your phone and laptop can be set to only one language at a time. That means every time you want to switch languages—say, replying to a text from your mother in Spanish and then to one from your colleague in German—you have to change that setting manually. The consequence is that users are forced to simplify the content of their messages or standardize communication using the most common global language: English.
Many of multilingualism’s advantages are therefore being reversed by poorly designed software architecture. And society is missing out on our unique contributions, too.
Research has consistently found that being multilingual carries real benefits. There are social and professional ones, such as being able to communicate with a whole different set of people and understand new ideas. Then there are the medical upsides, such as greater resilience to dementia, including that caused by Alzheimer’s disease.
Good news for multilinguals, right? Not when they’re forced into monolingualism.
Non-English speakers are at an inherent disadvantage when it comes to modern textual communication. The typing functionality of the QWERTY keyboard derives from a system originally designed for English: 26 letters, two cases (upper and lower), no bells and whistles. But if you want to write in a language that uses accented letters—such as French or Danish—or other symbols—such as ¿ and Ø—your typing will be slowed down, because those glyphs are not part of the standard keyboard. (Never mind if you need to switch to a different alphabet entirely.)
Going multilingual is even more complex. Anyone living in a country where their mother tongue isn’t spoken knows the feeling of needing to cycle between two or more languages on a regular basis, depending on who you’re talking to. (Personally, as an employee of a multinational company living in Italy, I communicate daily in Italian, Spanish, Japanese, and English with colleagues and customers across Europe and Japan.) But when it’s such a bother to switch settings, you often default to English.
The QWERTY typewriter layout was created by American newspaper editor Christopher Latham Sholes, and the software you type with today was most likely designed by a coder working for a Bay Area company. But most tech companies’ coders have names like Ramesh, Murali, or Zhang Wei, not Frank, John, or Carl. (And yes, they’re mostly men, too.) In fact, a 2018 report found that 71% of Silicon Valley tech workers are foreign-born. If so many of the people creating these technologies are multilingual themselves, why do such problems persist?
That’s because the information-technology industry and the internet are a virtual English-language empire. Researchers have pointed out that about 60% of the content on the internet is in English, but only about 20% of internet users are native English speakers. (More people speak English than any other language in the world, but as a first language, Mandarin Chinese and Spanish edge ahead.)
So do we just have to learn to live with this state of affairs? Must I continue texting in English with my Italian-speaking friends—and even writing this article in English—just to avoid the hassle of switching my language settings?
Not for much longer. The mobile-first world is changing the way we interact with our devices, and we’re currently seeing a shift back to an older method of communication: speech.
Users are increasingly typing with their voice rather than a keyboard. (Recent research suggests that around 60% of users speak to their device at least once per day, with younger generations leading the way.) This allows multilinguals to bypass limiting, English-centric keyboard designs and communicate more naturally.
The simplest version of this is the speech-to-text function many phones now offer, which transcribes your voice into text, word by word. Using machine learning, enhanced computing power, and natural-language processing (NLP), many of these technologies can ascertain what language you’re speaking and transcribe it accordingly.
These new NLP algorithms are not tied to a specific language; they analyze, understand, and derive meaning from human speech without being built for any one language in advance. The algorithm can quickly work out which language you’re speaking, so you can switch from one to another without explicitly asking the device to change its settings. (For example, I love asking my Google digital assistant to do something in Italian, like “Scrivi un messaggio a”—“Write a message to”—and then dictating the content in a different language.)
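The core idea behind automatic language identification can be sketched with a toy example: score the input against per-language word lists and pick the best match. Production systems use character n-gram statistics trained on large corpora rather than the tiny hand-picked stop-word lists below, which are purely illustrative assumptions.

```python
# Toy language identifier: scores text against small stop-word lists.
# The word lists are illustrative assumptions, not how production NLP
# systems (which use character n-gram statistics) actually work.

STOPWORDS = {
    "en": {"the", "and", "is", "to", "of", "you", "a", "in"},
    "it": {"il", "e", "di", "che", "un", "per", "sono", "non"},
    "es": {"el", "y", "de", "que", "un", "por", "los", "una"},
    "de": {"der", "und", "ist", "zu", "das", "nicht", "ein", "ich"},
}

def guess_language(text: str) -> str:
    """Return the language whose stop words overlap the text the most."""
    words = set(text.lower().split())
    scores = {lang: len(words & stops) for lang, stops in STOPWORDS.items()}
    return max(scores, key=scores.get)
```

A device using even this crude approach could route each sentence to the right transcription model without the user ever touching a settings menu; real systems simply make the same decision with far richer statistical evidence.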
The Big Tech companies are catching on quickly. Google recently released a new version of its assistant that can manage two languages at the same time. Users can choose from six languages—English, Spanish, French, German, Italian, and Japanese—and Google has promised to add more, as well as the capability to manage three languages simultaneously. Other companies, such as Amazon, have also announced multilingual capabilities for their upcoming products.
The advent of voice technologies will start to close the digital language divide and benefit multilingual people rather than disadvantage them. Even devices emulating Star Trek’s universal translator are no longer science fiction: Pocketalk from Sourcenext claims to translate 74 languages; Waverly Labs’ Pilot can manage 15 languages in 42 different dialects; and Chinese firm iFlytek has announced the second version of its translator, which handles 63 languages. With some 6,500 spoken languages alive today, though, we’re still a while away from a truly universal translator.
It’s been shown that learning a new language is like reprogramming your brain, renewing your neural pathways and making the brain as a whole stronger and healthier. If technology can eliminate its bias toward the English language, the whole of society will be saying grazie mille / どうもありがとう / muchas gracias.