From voice assistants to self-driving cars to tools that translate languages in real time, society at large is getting pretty comfortable with the role artificial intelligence plays in our day-to-day lives.
AI is helping us accomplish more in the context of work as well. According to industry research, in 2015, 10% of around 3,000 companies surveyed around the world said they used, or were soon planning to use, AI. In 2019, that same survey found the number had leapt to 37%—a 270% increase in just four years.
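The 270% figure can look surprising next to a 27-percentage-point jump; it refers to relative growth, which works out as follows:

```python
# Relative increase in AI adoption: from 10% of companies to 37%
before, after = 10, 37
relative_increase = (after - before) / before * 100
print(f"{relative_increase:.0f}% increase")  # prints "270% increase"
```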
As the lead of user experience for G Suite at Google, I see it as part of my job to ensure that AI has a positive impact on how we work. We've seen a few examples where AI has missed the mark, like when a high-school math test bested Google DeepMind's algorithm, or when AI has perpetuated bias in the workplace.
When it comes to bringing AI into our workplaces and cultures, we have to be keenly aware that the opportunities are vast, but the stakes are high. Getting AI wrong can have a long-lasting, residual impact on the quality of a product or service, and thus on the success of the brand or business—and most importantly, on the people involved.
To ensure we’re building the right type of AI tools for employees, we tested AI products among our employee base of nearly 100,000 people to see how different machine-learning models played out in real circumstances. We identified the following core principles as the ones people most want to see in workplace AI.
We found that people tend to categorize work into two buckets: core work, which is the uniquely human, intellectual, and creative work that people feel is their primary value add; and peripheral work, which is the tedious, repetitive busywork like scheduling a time for everyone to meet.
We found that in most cases, workers were more than happy to hand off their peripheral work to AI. Take the common practice of booking a meeting for a team that’s located in multiple time zones; it can easily take 15 minutes just to schedule a 30-minute meeting. So in Google Calendar we introduced AI-powered meeting room suggestions, which provide times that work for everyone across time zones and take into account available rooms in different offices, saving time that can be spent actually preparing for the meeting.
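At its core, this kind of scheduling assistance is an intersection problem: find the hours that fall within every participant's local working day. A minimal sketch of the idea (purely illustrative, and not Google Calendar's actual algorithm; team offsets are hypothetical):

```python
# Illustrative sketch: find meeting hours (expressed in UTC) that land
# inside everyone's local working hours. Not Google Calendar's implementation.

def working_hours_utc(start_local, end_local, utc_offset):
    """Convert a local working window (whole hours) into a set of UTC hours."""
    # local time = UTC + offset, so UTC hour = local hour - offset
    return {(h - utc_offset) % 24 for h in range(start_local, end_local)}

def candidate_hours(team):
    """Intersect every member's UTC availability; team is (start, end, offset)."""
    slots = [working_hours_utc(s, e, off) for s, e, off in team]
    return sorted(set.intersection(*slots))

# Hypothetical team: New York (UTC-5), London (UTC+0), Berlin (UTC+1),
# each working 9:00-17:00 local time
team = [(9, 17, -5), (9, 17, 0), (9, 17, 1)]
print(candidate_hours(team))  # prints "[14, 15]" — 2-4pm UTC works for all
```

A real scheduler layers calendar conflicts, room availability, and preferences on top of this, but the core intersection is what lets the feature hand back those 15 minutes.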
Our research also showed that while people are quick to hand over the reins when it comes to more administrative or tedious work, they feel understandably protective of their distinct innovative contributions. They don't want to feel ownership of those creative tasks slipping away.
Research, for example, is most effective when guided by humans; only they can probe a question and determine which findings are relevant to supporting a theory or idea.
The key is to create a system in which the end user still provides the intellectual spark—AI simply enables the user to get to their answer more quickly using automated calculations of complex data sets. It essentially supercharges a person’s ability to produce insightful work.
Building appropriate, personalized tone into AI is tough to get right. Tone involves nuanced aspects of communication and depends on an inherent understanding of social and professional etiquette.
When developing Smart Compose, an AI-powered tool in Gmail, we started to see historical bias play out with gender-specific pronouns: An investor was referred to as “him,” and a nurse was referred to as “her.”
To make sure the tool didn’t perpetuate unfair perspectives, and to try to prevent potential biases from coming through, we eliminated any recommendations that include gender-specific pronouns as well as suggested text for any content associated with protected classes.
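A suggestion filter of this kind can be sketched in a few lines (an illustrative approach only, not Smart Compose's actual implementation; the word list is a hypothetical placeholder for a far more complete one):

```python
import re

# Hypothetical blocklist; a production system would cover many more terms
# and languages, plus text associated with protected classes.
GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def is_suggestible(suggestion: str) -> bool:
    """Suppress any autocomplete suggestion containing a gendered pronoun."""
    words = re.findall(r"[a-z']+", suggestion.lower())
    return not GENDERED_PRONOUNS.intersection(words)

print(is_suggestible("looking forward to meeting her"))  # prints "False"
print(is_suggestible("looking forward to the meeting"))  # prints "True"
```

Filtering at the suggestion layer sidesteps the harder problem of removing bias from the underlying model: the model may still rank a biased completion highly, but it never reaches the user.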
Cognizant of the social dynamics of email, we determined that the most important aspect of the Smart Compose experience is that people have the agency to review and accept all suggestions. This paid off: 85% of opt-in respondents said seeing the suggestions improved their overall experience.
There’s also the group communication and collaboration aspect of any workplace to consider. Designing AI-powered assistance for an individual is one thing, but designing assistive experiences for groups of people and teams can be trickier.
If AI disrupts the social equilibrium in a collaborative setting, we’ve learned that people will get annoyed and disable it. Research participants made it clear they feel uncomfortable if the assistant only provides help for a single individual, or a few people in a group. This means people are more likely to use an assistive feature during meetings if it is helpful for everyone within the meeting.
AI technology is built from machine-learning models that leverage large data sets of historical user behavior, meaning an AI solution is only as intelligent as the data it is built upon. This data can be outdated and doesn’t always include the full range of ideas and backgrounds that today’s workplace reflects. AI data has the potential to reflect patterns of behavior we don’t want to perpetuate.
As a researcher on my team described it, we’re basing this technology on the actions of an earlier generation. As product designers, we’re like parents trying to guide our children toward a more enlightened approach than the one we ourselves were raised with.
AI has the potential to provide us with new ways of looking at old problems. The best use of AI is when a product feels intelligent by anticipating and providing the relevant data you need, or accomplishing a task quickly, without someone having to do mundane or repetitive work.
To implement AI in the workplace the right way, we need to respect these principles, and make sure we never intrude on the intrinsically human, creative, and nuanced work that defines human careers.