Artificial intelligence has historically over-promised and under-delivered. That cycle leads to spurts of what those in the field call “hype”—outsized excitement about the potential of a core technology—followed after a few years and several million (or billion) dollars by crashing disappointment. In the end, we still don’t have the flying cars or realistic robot dogs we were promised.
But DeepMind’s AlphaGo, a star pupil in a time we’ll likely look back on as a golden age of AI research, has made a habit of blowing away experts’ notions of what’s possible. When DeepMind announced that the AI system could play Go on a professional level, masters of the game said it was too complex for any machine. They were wrong.
Now AlphaGo Zero, the AI’s latest iteration, is being set to tasks outside of the 19×19 Go board, according to DeepMind co-founder Demis Hassabis.
“Drug discovery, proteins, quantum chemistry, material design—material design, think about it, maybe there is a room-temperature superconductor out and about there,” Hassabis said. “I used to dream about that when I was a kid reading through my physics books. That would be the Holy Grail, a superconductor discovery.”
So what’s hype, and what’s reality? Can AlphaGo Zero, itself considered impossible just a few years ago, be the tool that finally gets us to an often-promised future? Or is DeepMind falling into the Silicon Valley trap of believing that every problem can be solved with a better algorithm?
When discussing the possibilities, Hassabis outlined two criteria for AlphaGo Zero to be effective at a given task. (We’ll refer to the AI as Zero from now on, because it’s shorter and I like to imagine the algorithm as the plucky hole-digger of the same name from Louis Sachar’s young adult novel, Holes):
- Zero needs a realistic simulator for the environment it’s working in (in Go, that was a simulated board). The simulator is important because it allows Zero to test faster than physically possible. In other words, it doesn’t need to actually move pieces to play 5 million Go games—the games are virtual, and multiple games are played in parallel in a matter of seconds.
- There needs to be an “objective function.” In computer science, that’s just a number to optimize, i.e. make smaller or bigger. Versions of AlphaGo, for example, optimized for the projected probability that the AI would win the game. In something like materials science, this number could be conductivity. That’s typically the easy part to come up with for a new problem.
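Those two ingredients, a cheap-to-run simulator and a single number to optimize, can be sketched in a few lines of Python. Everything below (the candidate encoding, the toy “conductivity” score, the search loop) is invented for illustration; it is not DeepMind’s method, just the shape of the setup:

```python
import random

def objective(candidate):
    # A toy "objective function": one number to maximize. The quadratic
    # score here is invented purely for illustration (peak at all-0.5s).
    return -sum((x - 0.5) ** 2 for x in candidate)

def simulate_search(steps=2000, dims=4, seed=0):
    # A virtual search loop standing in for a simulator: thousands of
    # "experiments" run in software, far faster than any physical lab.
    rng = random.Random(seed)
    best = [rng.random() for _ in range(dims)]
    best_score = objective(best)
    for _ in range(steps):
        trial = [x + rng.gauss(0, 0.05) for x in best]
        score = objective(trial)
        if score > best_score:  # keep a change only if it improves the score
            best, best_score = trial, score
    return best_score

print(simulate_search())  # climbs toward 0.0, the best possible score
```

The point is structural: because each evaluation is virtual, the loop can try thousands of candidates in seconds. AlphaGo Zero’s self-play rests on the same two ingredients, with a far richer search than this simple hill-climb.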
Machine learning has been used in materials science since the early 2000s, and algorithms in use today already do much of what DeepMind suggests Zero could do. Gerbrand Ceder, head of the CEDER experimental materials-design research group at Berkeley, says that algorithms currently used by materials scientists analyze the characteristics that make a material ideal for a certain property—whether that be conductivity or something else—and then look for other compounds with similar characteristics that haven’t yet been tested against those criteria. If none exist, they try to generate a compound that would fit the bill. Scientists then get a curated list of high-potential compounds, which speeds up the physical testing process in the lab. These discoveries have already helped with research into optimizing the lithium-ion batteries in phones and electric cars. Some specialized simulations can replace experiments in the lab, but Ceder says machine learning isn’t needed for those problems, since we can already compute them extremely quickly.
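The screening workflow Ceder describes (characterize what makes a known material good, then rank untested compounds by similarity) can be sketched as a toy nearest-neighbor search. The compound names and feature vectors below are invented for illustration, not real materials data:

```python
# Toy materials screening: rank untested compounds by how close they sit
# to a known good material in feature space. All values are invented.

known_good = {"name": "A", "features": (0.9, 0.2, 0.7)}  # measured to perform well

untested = [
    {"name": "B", "features": (0.8, 0.3, 0.6)},
    {"name": "C", "features": (0.1, 0.9, 0.2)},
    {"name": "D", "features": (0.95, 0.15, 0.75)},
]

def distance(a, b):
    # Euclidean distance in feature space: smaller means more similar.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# The curated shortlist: the closest candidates go to the lab first.
ranked = sorted(untested, key=lambda c: distance(c["features"], known_good["features"]))
print([c["name"] for c in ranked])  # ['D', 'B', 'C'] -- D is most similar
```

Real screening pipelines use far richer descriptors and models, but the output is the same kind of ranked shortlist that narrows down what gets physically tested.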
But the technology’s use is still nascent; three experts who spoke to Quartz attributed that to the relatively small amount of data available. Simulators of the kind Zero requires are built on having enough data to predict how an action would play out in the real world—and we simply haven’t run enough experiments to build a versatile, accurate simulator. Even if we had, the molecular world is a lot more complex than a Go board, says Evan Reed, who leads the computational materials science group at Stanford University.
“You could try to couple this with a physics-based code, but there are no physics-based codes that predict the critical temperature of a high-temperature superconductor,” Reed says. “There are some problems where you just can’t couple it with another algorithm.”
Reed says that a typical macroscopic quantity of a material contains around 10^23 atoms; for a compound like steel, there are nearly innumerable ways those atoms could be configured over a period of time, each configuration producing a different set of attributes for the material. A simulator would first have to be able to model all of those possibilities before it could even begin the millions of runs required for Zero to learn how the material behaves.
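A back-of-the-envelope count (with invented numbers, not a real materials model) shows how quickly atomic configurations outrun anything enumerable:

```python
import math

# Toy count of atomic configurations: two atom types on a small lattice.
# The lattice size is invented purely for illustration.
sites = 100
configs = 2 ** sites  # each site holds one of two atom types
print(f"{sites} sites: about 10^{math.log10(configs):.0f} configurations")

# For scale: legal positions in Go number roughly 2 x 10^170, and a real
# material sample has on the order of 10^23 atoms, not 100.
```

Even this hundred-site toy already has about 10^30 arrangements; the 10^23-atom samples Reed describes are far beyond exhaustive enumeration.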
“You need an algorithm that calculates, using quantum mechanics, the properties of this material with large numbers of atoms, and you’ve got to do it many many times to sample lots of different possible atomic configurations,” Reed says. “Right now, today, that’s a completely intractable problem.”
DeepMind declined to comment.
Valentin Stanev, who does materials science research at the University of Maryland, suggests that no conventional computer would be able to work efficiently enough to crunch all this data. In his view, the field’s savior won’t be AI but a shift in how computers themselves work. He’s hoping quantum computing, an experimental branch of computing that exploits quantum-mechanical effects to process data more efficiently, will be able to tackle these endlessly complex problems.
“Imagine playing the game of Go, but instead of making one move at a time, you make all the moves, just with different probabilities,” Stanev says. “We cannot really solve the problem [without quantum computing].”
Gerbrand Ceder at Berkeley says that the only way to generate data on the scale Zero’s simulator-learning requires would be to physically automate experimentation in the real world.
“The equivalent [of Zero’s Go data] would be if we could set up self-experimenting: Could we make a machine that takes a lot of compositions, makes the stuff, measures the properties, and then iterates on it?” he said. “You would have to automate all the experimental steps—which by the way, should be done. This is kind of why materials science lives in the Stone Age; this is what makes it so slow.”
That testing process might prove necessary across the sciences, including drug discovery, which Hassabis mentioned. Startups like Atomwise have been working on AI approaches to virtually simulating drug interactions for years, but they still only make progress once those drugs are tested and iterated on in the lab. Atomwise is now involved with 37 research projects.
“There really are not shortcuts where the technology off the shelf swaps out a Go board for some application in this domain and it just works like magic,” says Atomwise co-founder and COO Alexander Levy. “There are a lot of details in practice.”