How AI will take away our humanity, and give it back

How is technology impacting our humanity, impacted by our humanity, and what does it all mean for the future of everything? These questions have pervaded many conversations, including those at the annual Web Summit conference in Lisbon. For insight, and even a few answers, we turned to EY professional Kim Paykel for her predictions on what will be automated, augmented, and wholly upended. 

A startup and tech veteran with a special interest in artificial intelligence (AI), Paykel leads EYX, the EY innovation initiative based in London. Her team’s goal is to help clients navigate technology disruption and innovate the future of their businesses, and to help EY member firms do the same. Like many professionals, she is excited about the promise of AI—but realistic about its limitations and the challenges that lie ahead.

How do you see AI-driven decision-making spreading or evolving in the near term?

Rapidly and ubiquitously! It’s happening as we speak, oftentimes without people knowing that some form of AI is making decisions on their behalf.

The UK and the US respect AI practitioners and solutions but are more cautious in their deployment of decision-making AI systems. Their citizens have greater awareness of the ethical and practical implications of letting algorithms control aspects of their lives. There’s also greater oversight, both from government and councils created to ensure ethical and responsible AI development: the Center for Data Ethics and Innovation (CDEI), The Partnership on AI, OpenAI, etc. However, even with this awareness and oversight, there are still many, many examples of AI that’s not fit for purpose being rolled out and used to make bad decisions in both the UK and the US.  

Of course, in countries where the state isn’t held to quite the same levels of accountability, ethical and human rights considerations are less likely to hold back AI advancement. When you have unfettered access to 700 million internet users’ data, you have a lot of material to train AI with. So you’ll continue to see the rise of apps which seek to support every aspect of day-to-day life and gather every scrap of data about users. And when all this lovely data is in the hands of a state, we shouldn’t be surprised when we start seeing the rollout of social scoring/credit systems, taking this control a little (or a lot) further.

Does striking a balance between artificial intelligence and human well-being require rethinking our definition of business success?

Yes, in some circumstances—business success is often linked to lowering costs and making people more productive. This is seen as a “good thing.” However, take the example of using AI in the surveillance of workers. It removes a layer of human management and therefore cost. This may increase productivity, but it reduces human agency and privacy and is likely to cause stress.

All of this is an age-old problem. The mechanical looms introduced in the Industrial Revolution de-humanized workers and led to many social problems. Society, through legislation and regulation, had to intervene to curb the worst effects of this change. 

There’s increasing awareness of the risk of bias in AI, which can magnify the real-world harm biases do. Is this something we can teach AI to solve?

Humans are appallingly biased. So the immediate goal should not be to remove absolutely all bias in AI systems but to make them less biased than humans, which is a very low bar. We must be careful not to hold AI to impossibly high standards. Having said that, it is absolutely the case that human bias can be magnified in AI systems.

AI can help manage bias in some circumstances, such as listening in to call-center responses and flagging bias in workers’ replies. Methods created to help explain AI decision-making can also help identify bias by tracing which data influenced the decision or classification. However, when you’re dealing with complex neural networks, this is difficult to achieve.  

So, depending on the circumstances, AI can help identify and alleviate bias. But it won’t solve it.

According to the World Economic Forum, only 22% of global AI professionals are women. How does this imbalance affect the future of AI?

Having such a significant gender disparity in the AI community will result in systems that meet men’s needs without meeting society’s needs. “Invisible Women,” Caroline Criado-Perez’s relentless study of gender bias in design, has endless examples of bias and its consequences through history, from the size of mobile phones to the creation of city transportation systems.   

With AI, there are more opportunities for bias to enter the system, from biased training data sets—as when AI is trained on so many images of women in kitchens that a picture of a man in a kitchen is labeled a woman—to the coding of algorithms themselves. And, as Criado-Perez points out, it’s not obvious how these models are being used, and, for the most part, they are owned by private companies. Fixing gender bias in the short term must involve raising awareness and putting structures in place to help AI developers consciously check themselves.  

Closing the gender gap will require more time. Encouraging more women to study STEM at advanced levels—and providing funding or programs to help achieve this—is important. I would also note that racial bias is another huge problem, as researchers like Joy Buolamwini have covered so well.

Tech and the environment were a major theme at this year’s Web Summit, and a Pew Research report from earlier this year found that climate change is seen as the #1 threat the world faces. Can AI play a role in addressing this threat?

Climate has been referenced across the board from plenaries to panels at Web Summit so far. Consumer and employee pressure on business is changing the focus of brands and how businesses are talking about sustainability. The UN SDGs are also going mainstream with many leading tech luminaries calling out their critical importance. Solving the impending water crisis has also been a major theme over the first day and a half.

Mitigating climate risk is about turning off the taps on carbon emissions. AI can help us model complex climate systems and energy systems: In 2016, Alphabet’s DeepMind subsidiary announced that by applying machine learning to Google data centers, they had managed to reduce the amount of energy used for cooling by up to 40%.

Climate change isn’t coming, it’s here, and we’re already starting to see extreme weather events. In those cases, aid workers are using AI to translate distress calls and analyze satellite imagery. But AI has its limits—it won’t stop carbon dioxide from entering the atmosphere.  

Is rapid technological change like we see today the new normal forever? If so, what can business leaders do to prepare themselves, their employees, and their organizations?

Business leaders should accept that change is the constant. They should be immersing themselves in these issues and hiring different sorts of people to help them navigate change and keep their business relevant. Humans, for the most part, hate change, and it’s about taking them on the journey with you. Creating upskilling and retraining opportunities will give business more of the skills it needs for the future, while including employees on that change journey.

What AI developments are you most excited about right now?

Projects that have shown AI can learn from scratch with no training data, just a set of rules. And AI that can assist with medical diagnoses—such as cancer detection and eye diseases like macular degeneration—and treatment.

At Web Summit I’ve been interested to hear various tech leaders talk about what Mark Foster from IBM called the “cognitive transformation” of organizations. We’ve been working for years on so-called digital transformation, but it now feels like we have a tangible set of tools to completely reimagine the core of business. That’s exciting too.

Final question: Do you think AI is changing who we are?

Technology always changes us. It changes the way we think and the way we communicate. It changes our metaphors and relationships. We always focus on the negative aspects of a technology change first, and often with no evidence. In the past, we believed that trains would shake loose our vital organs and that cars should be limited to 5mph and have a person walking in front of them with a red flag! 

So, yes, I do think AI will change who we are, in both good and bad ways. In some cases, AI can help make us more human by taking away mundane work and enabling us to bring out human traits like empathy and creativity. 

However, it could also be making us less human. Young children are learning they can speak in a rude and commanding way to ‘women’ in the form of Alexa—the gendering of digital assistants being a whole other thorny issue. And there is a battle for all our attention played out through our mobile devices. Is this reducing our humanity? It’s probably not helping it, put it that way.

This article was produced on behalf of EY and not by the Quartz editorial staff. Sources are provided for informational and reference purposes only. They are not an endorsement of the EY organization or EY products. The views reflected in this article are the views of the author and do not necessarily reflect the views of the global EY organization or its member firms.