Zia Khan predicts the AI of the future will only be used for good

Zia Khan

It took a global pandemic and stay-at-home orders for 1.5 billion people worldwide, but something is finally occurring to us: The future we thought we expected may not be the one we get.

We know that things will change; how they’ll change is a mystery. To envision a future altered by coronavirus, Quartz asked dozens of experts for their best predictions on how the world will be different in five years.

Below is an answer from Zia Khan, the senior vice president of innovation at The Rockefeller Foundation, a private foundation that seeks to promote humanity’s wellbeing. His professional experience, as a management consultant and as a member of the World Economic Forum Advisory Council for Social Innovation, has shown him how data and technology can be used to transform people’s lives for the better.

AI is both a hero and a villain. Its speed can help us find a vaccine to protect against Covid-19, while its biases can send the wrong person to jail. It is no longer just a technology, but interwoven into the fabric of our society. And like other disruptive innovations that do both good and harm, it will need to be regulated.

The recent convergence of health, economic, and social justice crises has put a spotlight on all that is not working in our existing institutions, opening the window for radical changes in how we manage and regulate important public services. Companies like Amazon and IBM have taken a step back to reassess how their facial recognition technology is impacting society, further demonstrating the need for oversight. This increased awareness will ultimately force the public and private sectors to develop the regulatory framework needed to ensure AI benefits the public good.

Over the next five years, citizens will demand that governments set the goals for AI’s impact on society. But policymakers and technology companies will recognize that governments’ regulatory toolkit is ill-suited to the speed of AI development and the exponential growth of its applications in society.

The catalyst for change will be AI’s role in helping scientists and engineers around the world fight to control Covid-19. There will be little patience for current regulatory approaches to catch up to AI’s use in drug discovery, contact tracing, and modeling virus transmission. There will be moral and political pressure for regulatory innovation that saves lives while protecting the common good. Filling the gap will be a new kind of entrepreneur who creates publicly trusted certifications for AI, based on their best guess at which AI principles command the broadest support. Private sector companies, facing increasing backlash over their disproportionate capture of AI’s value, will seek out regulatory solutions so they can develop technologies and compete for customers in a more predictable market. Companies will ultimately adopt responsible AI use as a core value.

“Safe AI Inside” will become an important brand attribute. The government will eventually step in and replace the hodgepodge of AI principles guiding these entrepreneurs with a set of social goals that AI-driven applications need to achieve. Innovations in AI technologies and applications will be matched by innovations in regulatory approaches that test and certify their positive impact.

AI has become too important to allow profit to serve as its guiding force. Nor can its potential be stifled by a bureaucratic approach to regulation. With a blended approach of public goal setting and private innovation, a new market for AI regulations can and will help us achieve a more positive future.
