Visitors are reflected in the installation Mirror Maze by artist Es Devlin, at the Copeland Park in Peckham, south London, Britain September 21, 2016.
Reuters/Stefan Wermuth
As you navigate life, the memory pathways in your brain grow deeper and clearer.
MEMORY IN MINIATURE

Electronic synapses that can learn signal the coming of the first real artificial brain


When Leon Chua joined the University of California-Berkeley in 1971, there were three known fundamental electrical devices. Combined in elaborate circuits, resistors (controlling currents and voltages), capacitors (basically tiny batteries), and inductors (storing electrical energy in magnetic fields) worked in computers, radios, TV sets, and pretty much every other electrical machine imaginable at that time.

Chua, though, was thinking beyond what was imaginable for most: In a groundbreaking paper, he theorized a fourth fundamental device, which he called a “memristor”: a device that would change its resistance depending on the history of current that had previously flowed through it, and remember this resistance even when the power supply was turned off. Now, almost half a century later, memristors are a few years away from becoming basic building blocks of future artificial brains that truly learn and think like humans.

Overcoming the “memory bottleneck”

Even the most powerful computers struggle to think like humans. That’s because their design is dramatically different from the biological brain. “Almost all modern computers are based on the so-called Von Neumann architecture,” says Shinhyun Choi, a researcher at MIT who specializes in neuromorphic computing.

Von Neumann architecture consists of three elements: a processor, an input/output unit, and memory. Every task thrown at a computer is broken down into simple steps. The first step is stored in the memory and transferred to the processor. After computation, the result is sent back to the memory for storage. Then step two goes to the processor, is processed, and is sent back to the memory. Even simple operations like adding two plus two take several of these cycles. Our computers seem fast because they can do huge numbers of such operations every second. But even the fastest computers have nowhere near the efficiency of the human brain.
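The shuttling between memory and processor can be sketched in a few lines of toy code. The `Memory` and `Processor` classes here are invented purely for illustration, not any real machine’s design:

```python
# A toy sketch of the Von Neumann fetch-compute-store cycle:
# data constantly travels between memory and processor.

class Memory:
    def __init__(self):
        self.cells = {}

    def load(self, addr):
        return self.cells[addr]          # data travels memory -> processor

    def store(self, addr, value):
        self.cells[addr] = value         # result travels processor -> memory

class Processor:
    def add(self, a, b):
        return a + b                     # the only place computation happens

mem = Memory()
cpu = Processor()

# Adding "two plus two" still takes several shuttling steps:
mem.store(0, 2)                          # put the first operand in memory
mem.store(1, 2)                          # put the second operand in memory
a = mem.load(0)                          # fetch the first operand
b = mem.load(1)                          # fetch the second operand
mem.store(2, cpu.add(a, b))              # compute, then send the result back

print(mem.load(2))                       # -> 4
```

Every one of those loads and stores crosses the memory-processor boundary, which is exactly where the bottleneck Choi describes sits.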

Biological brains can do multiple operations at once. Neurons transmit signals through synapses; the more often a signal goes through a given synapse, the more that synapse gets reinforced, finally building a permanent memory. There’s no need to endlessly shuffle data between processing and memory units, because processing and memorizing happen at the same time thanks to synaptic plasticity—the ability of synapses to reconfigure the strength with which they connect two neurons, depending on the past electrical activity of those neurons. The brain processes and memorizes in parallel rather than step by step. “That means the brain doesn’t suffer from the memory bottleneck problem, which is a scourge of every Von Neumann machine,” says Choi.
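The reinforce-with-use rule has a classic toy form, a Hebbian-style update. The learning rate and cap below are made-up numbers for the sketch, not a model of any real synapse:

```python
# A minimal sketch of "the more a signal crosses a synapse,
# the stronger that synapse gets," with strength capped at 1.0.

def reinforce(weight, rate=0.2):
    """Strengthen a synapse each time a signal crosses it."""
    return min(1.0, weight + rate * (1.0 - weight))

w = 0.1                      # a weak, newly formed connection
for _ in range(10):          # the same signal crosses it ten times
    w = reinforce(w)

print(round(w, 3))           # -> 0.903, approaching a permanent memory
```

Note that the "weight" here is both the memory and the thing doing the processing; there is no separate store to shuttle it to and from.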


Shuffling data back and forth between memory and processor all the time consumes lots of power. The Lawrence Livermore National Lab had to call on 96 Blue Gene/Q racks of its Sequoia supercomputer, drawing 7.9 megawatts of power, to simulate a human brain with 100 trillion synapses—and the simulation ran 1,500 times slower than the real-time speed of the brain. According to some estimates, simulating the human brain at its full capacity with Von Neumann computers would take 12 gigawatts of power—roughly the average power draw of all of Norway. The real brain requires just 20 watts, barely enough to power a dim light bulb.
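For a rough sense of the gap, fold the 1,500x slowdown into the power figures above. This is back-of-envelope arithmetic on the article’s own numbers, nothing more:

```python
# Back-of-envelope: the supercomputer's power draw versus the brain's,
# with the 1,500x slowdown folded in.

supercomputer_watts = 7.9e6      # Sequoia's 96 Blue Gene/Q racks
brain_watts = 20                 # a dim light bulb's worth
slowdown = 1_500                 # simulation ran 1,500x slower than real time

power_ratio = supercomputer_watts / brain_watts
energy_ratio = power_ratio * slowdown   # energy per unit of "brain time"

print(f"{power_ratio:,.0f}x the power")      # 395,000x the power
print(f"{energy_ratio:,.0f}x the energy")    # 592,500,000x the energy
```

By this crude measure, the simulation spends hundreds of millions of times more energy than the brain to do the same second of thinking.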

That’s why a new kind of computing architecture is needed to run truly brain-like machines.

Shrinking things down to the “nanobrain”

Chua’s memristors seem like they might be the key—like the real brain, memristors process and memorize in parallel, and thus should, in theory, be faster and more energy efficient than current computer processors.

Julie Grollier leads the “Nanobrain” project team at the CNRS/Thales lab in France trying to bring memristor technology from theory into the world. Grollier says her team has already built memristors small, fast, and durable enough to work as synapses in physical neural networks. Their design is also reportedly power-efficient and generates very little heat, so unlike the transistors in classical microprocessors, the devices can be arranged in 3D structures, layer upon layer. Future machine brains will look more like a cube than a wafer.

And, as Grollier argued during her speech at the World Economic Forum in Davos in 2016, they will work just as our brains do. When you keep sending a signal between two memristors, the resistance between the two drops, which eventually builds long-term memory into the device.


In a paper published April 3 in Nature Communications, Grollier’s team describes a simulated nine-by-five array of their memristors that copes with a simple image-recognition task amazingly well, reaching 100% accuracy. The array, the team argues, exhibits synaptic plasticity and seems to have the same physical properties that let biological learning systems—insects, animals, humans—learn without any external control or prior knowledge. Grollier’s device doesn’t simulate learning. It learns.

“But that’s all about a synapse,” says Grollier. “We also need a neuron.”

One of the most compelling artificial-neuron designs proposed to date uses nonlinear oscillators, electronic devices that behave differently depending on the strength of the input signal. They can spike when the incoming voltage reaches a certain threshold, just like real human neurons. But all attempts at building an actual device have so far failed: the oscillators were either too big, making the whole idea a bit pointless (with billions of neurons at work in the human brain, a suitable oscillator needs to measure less than one micrometer), or unstable at nanoscale. A suitable oscillator also must not interfere with its neighbors’ work when stacked at very high density, so that it can operate in huge networks—a necessity when you want to build a brain with millions of neurons.
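The threshold behavior described above is easy to sketch as a leaky integrate-and-fire toy model. The threshold and leak values here are illustrative, not properties of the real oscillators:

```python
# A bare-bones spiking "neuron": it integrates incoming voltage,
# fires once the total crosses a threshold, then resets.

def run_neuron(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for v in inputs:
        potential = potential * leak + v     # leaky integration of input
        if potential >= threshold:
            spikes.append(True)              # spike, like a neuron firing
            potential = 0.0                  # reset after firing
        else:
            spikes.append(False)
    return spikes

# Weak inputs accumulate until the threshold is finally crossed:
print(run_neuron([0.4, 0.4, 0.4, 0.4]))   # [False, False, True, False]
```

The nonlinearity is the point: below threshold the device stays quiet, above it the response changes qualitatively, which is what lets networks of such units compute.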

A team led by Jacob Torrejon of Université Paris-Saclay in France (which also included Grollier) seems to have come pretty close to ticking all those boxes. Their nano-oscillator is roughly 0.3 micrometers in diameter, easy to manufacture, and reliable. “When you put them close together, their magnetic fields start to interact dynamically and synchronize somehow—just like biological neurons,” says Grollier.

One of the most common benchmarks for measuring neural networks’ performance is a spoken-digits test: Nine different speakers say a few different digits, and the AI tries to recognize them. Modern neural networks, like the ones behind Siri, achieve recognition rates well above 99%. Torrejon’s team threw the same spoken digits at just one of their artificial neurons and ended up with an average score of 99.8%. But there is lots of work ahead.

“People in the field are currently experimenting with small arrays of memristors, say 12 by 12. But it’s obviously too little to process large datasets,” says Choi. His own team at MIT wants to build a memristor-based artificial brain for the Cheetah, a four-legged robot that can run and jump over obstacles. Today, you can stick a high-quality camera on a robot and give it great real-time visual data—but the bot can’t process all that data fast enough to make sense of it. It’s too much information for our current neural networks to cope with in a timely manner. With its new brain, Cheetah will be able to see and process images in real time, pretty much like humans. “I think in a few years we can achieve human speed of cognition. Cheetah’s brain should be ready three, maybe four years from now,” Choi says.

Grollier is more cautious—but only a bit. “We’re betting on entirely new basic devices and this will obviously take some time. I expect us to reach a working prototype within five years or so.”

How machines could beat biological brains

Grollier thinks neuromorphic memristor technology will get close to a biological brain in terms of computing power and energy efficiency, but will never surpass it. Choi thinks quite the opposite.

“A biological neuron is quite large, in the range of one micrometer. Memristors can be 10 times smaller than that,” he says. “So, it’s physically possible to achieve much higher density in an artificial chip than in a biological brain. Those things can theoretically have more computing power than humans with the same energy consumption.” But eventually, we can go even beyond artificial brains as we know them, and build robots that think with their entire bodies.

The human brain basically works as a biological CPU. It takes input from the available sensors—eyes, ears, nose, skin—and processes the data. Our hands and eyes don’t do any processing; they simply act as sensors. This doesn’t mean, though, that their artificial cousins should be limited in the same way. “Imagine an artificial limb that recognizes the shape of an object upon touching it. Then, rather than doing the computation centrally, sending all the input from the sensors all the way to the CPU, you can do it in the limb itself,” says Alberto Salleo, a professor at Stanford University. Salleo’s team recently built an organic artificial synapse that seems radically different from everything proposed so far.


Their artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by salt water that acts as an electrolyte. Depending on how much the battery is charged, the electrolyte takes on a different state. When the battery is disconnected from the outside world, that state stays the same; when you reconnect the battery, the state changes again. Much as a neural pathway in a biological brain is reinforced through learning, the researchers program the artificial synapse by repeatedly discharging and recharging it to weaken or strengthen the connection between the neurons.
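The charge-as-weight idea can be mocked up like so; the class and all its numbers are hypothetical, not a model of Salleo’s actual device:

```python
# An illustrative model of the battery-like synapse: its stored charge
# plays the role of connection strength, and that state persists while
# the device is disconnected from the outside world.

class OrganicSynapse:
    def __init__(self, charge=0.5):
        self.charge = charge             # electrolyte state = synaptic weight

    def strengthen(self, step=0.1):
        self.charge = min(1.0, self.charge + step)   # partial recharge

    def weaken(self, step=0.1):
        self.charge = max(0.0, self.charge - step)   # partial discharge

    def conductance(self):
        return self.charge               # read out the programmed weight

s = OrganicSynapse()
for _ in range(3):
    s.strengthen()                       # "train" the connection upward
# Disconnecting the battery changes nothing: the state simply persists.
print(round(s.conductance(), 1))         # -> 0.8
```

Programming the device is just a sequence of partial charges and discharges, which is why something as humble as a battery can stand in for a synapse.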

Salleo says that their design is low-cost and easy to fabricate—unlike the Nanobrain devices, which, he says, “are made of very high-quality materials, not easy to make.” In addition, the devices are flexible and can even be inkjet-printed, so they could be used in applications like soft robotics and prosthetics.

There are downsides. The device is relatively slow, so it’s unlikely to ever serve as the basis of a future CPU. Researchers haven’t yet made the artificial synapse small enough to fit on a chip, so it’s hard to say how densely the devices can be packed. There also remains the question of what will act as the neuron.

We’ll need a solution very soon. Artificial intelligence keeps finding its way to more and more devices and is going to need hardware acceleration.

Back in 1999, Nvidia marketed its GeForce 256 as “the world’s first GPU,” a dedicated graphics processor predominantly for gaming. Since then, the company has become one of the most recognizable brands in consumer 3D hardware, and dedicated graphics chips have become ubiquitous. “And dedicated chips for artificial intelligence are sure to go down that road as well. Do we want to become the new Nvidia? Of course we do,” says Choi. Only this time, the stakes are even higher.

“Robotics, prosthetics, self-driving cars, autonomous drones, all kinds of consumer devices are possible applications for our technology. The market is huge,” says Choi. “I’m not saying those dedicated chips will make machines smarter than humans. We have no idea how to design algorithms for creativity and everything that goes into human-like intelligence. But sure I want to build hardware to run them.”
