Alphabet’s Eric Schmidt: The design of AI should “avoid undesirable outcomes”

“Robots should only be this big.”
Image: AP Photo/Gemunu Amarasinghe

Before Skynet can become self-aware, before the robots can rise up, we need a system in place to safely pursue research into artificial intelligence. Or so argues Eric Schmidt, the chairman of Google’s parent company, and Jared Cohen, the head of its tech-minded think tank, Google Ideas.

Schmidt has long been bullish on the technology’s prospects, backing experimental projects like Alphabet’s self-driving car program and Google’s DeepMind AI research lab. He has suggested AI will revolutionize how we work and live, even going so far as to tell us not to fear living in a world full of AI.

But it seems even Schmidt acknowledges that a degree of caution is required in AI research, much as other tech luminaries, such as physicist Stephen Hawking and Tesla CEO Elon Musk, have called for. (Musk has gone so far as to join a group of scientists and technologists, calling themselves OpenAI, in pledging $1 billion to promote AI research that has a “positive human impact.”)

In an op-ed in Time magazine, Schmidt and Cohen outlined three principles they believe developers, researchers, and companies should follow when exploring AI:

“First, AI should benefit the many, not the few.”

Life-altering technology, Schmidt and Cohen argue, should benefit everyone, not just businesses. “As a society, we should make use of this potential and ensure that AI always aims for the common good,” they wrote.

AI research “should be open, responsible and socially engaged.”

Both Google and Facebook have recently made overtures to bring greater transparency to their AI research. Facebook recently revealed the designs of the servers it uses for AI research, while Google open-sourced the code behind its AI engine, TensorFlow. Critically, though, neither company gave away the data it uses to train, test, and strengthen its AI algorithms, which could be the determining factor in their success.

“[T]hose who design AI should establish best practices to avoid undesirable outcomes.”

Researchers need to ask themselves, while systems are still being developed, whether the data they’re using to train AI systems is appropriate, whether there are side effects of their research they need to consider, and whether adequate fail-safes are in place within the system. “There should be verification systems that evaluate whether an AI system is doing what it was built to do,” Schmidt and Cohen wrote.

Artificial intelligence is quickly moving from the realm of science fiction to reality. While, thankfully, we haven’t had to worry about computer systems triggering armageddon just yet, we do have smart systems that can diagnose cancer, handle our appointments for us, and clean our floors on their own.

If scientists and deep thinkers are to be believed, once we’ve cracked AI systems that can truly think and act on their own, with their own agency, it won’t be long before they blow past us in intelligence. To guard against that, we should shape the development of this intelligence to benefit humanity rather than disrupt it.