AI hacks are trying to turn code into intelligence like alchemists tried turning lead into gold

Molten money.
Image: Reuters/Arnd Wiegmann

Our quest for—and expectations of—artificial intelligence are rather like those of the alchemists.

For centuries, alchemists tried to graft the attributes of gold—yellow, fusible, inert, malleable—onto a single substance. Modern AI advocates are doing just the same, taking the attributes of “intelligence”—raw computational power, recognizing faces, mapping spaces, processing language, spotting patterns—and hoping that if we smush them together in a very powerful computer, somehow it will magically add up to what we call “intelligence.”

But you can’t make gold from lead. And you can’t make intelligence from code.

The golden years

Before the scientific revolution, many wise people believed that things were made of some sort of underlying substance. This gave rise to the pursuit of alchemy: the medieval practice of fusing different kinds of matter together to create new ones. The theory went that if you could get hold of just the right combination of attributes and swap them around, you could take some base metals, smash them together, and create bona-fide gold. Hey presto, you’re rich!

And people really did try hard to make gold. So seriously was this taken that from 1404 to 1689, there was a law in England against “multiplication”: Not the mathematical operation, but the multiplication of gold through alchemy. Many “very serious people” thought that such a capability would significantly destabilize the prevailing economic and social order.

You generally don’t go to the trouble of passing a law against something that you think is impossible. Nor would you go to the trouble of trying to repeal that law, as Robert Boyle—one of the pioneers of modern chemistry—did for most of his working life. Indeed, it was at his insistence that the law was finally taken off the statute books at the end of the 17th century.

Why was he so adamant that people be allowed to “multiply” gold? When Boyle died, John Locke, England’s most influential philosopher, was an executor of his will. Among the many manuscripts Boyle left behind, Locke found Boyle’s recipe for the production of gold. He immediately wrote to Isaac Newton (yes, that Isaac Newton—himself a practicing alchemist) to share the news, and set about acquiring the materials to carry out the experiment. Boyle evidently believed he had discovered how to transmute base metals into gold—which seems a decent reason to try to make alchemy legal again.

Isaac Newton, John Locke, Robert Boyle: These were about as heavyweight a set of intellectuals as you could find in any country in any era. Nowadays, their ardent interest in alchemy seems laughable, not to say absurd: Of course you can’t melt metals together to make gold. We now have a much better understanding of the fundamental structure of matter, and can see the alchemists’ speculation and expectations as hopelessly ill-grounded and naïve.

The alchemists were fundamentally mistaken about the structure of gold; its attributes—its yellowness, fusibility, inertness, and malleability—were consequences of its elemental composition, not its cause. I would contend that the same applies in the case of AI: “intelligence” is not just a bit of processing power, reasoning, pattern-spotting, or language processing. Those are its indications, consequences, and manifestations. And these parts cannot be forced together to make a whole.

Alexa, define “intelligence”

The real question AI adherents need to answer is “What is the fundamental structure of intelligence?”

And that is a very hard question to answer. Indeed, I don’t think many people (or even practitioners) have given the answer much more thought than, “I’ll know it when I see it.” Even those who spend their time studying intelligence can’t seem to agree on what it really is. And you’ll have a damn hard time creating something when you’re not really sure what you’re creating.

Part of it comes down to understanding context. The perception of what serves our best interests changes as circumstances do. Ordinarily, we see gold as a scarce, tradable commodity that, as a permanent store of value, can be very handy. Water, on the other hand, is freely available, and therefore more or less worthless—unless, of course, we’re thirsty, or our arm is on fire, in which case other imperatives apply and we make very different choices.

All of these choices are backed by our perception of the value of these things to us in these circumstances. If we were unable to attach any value to either of them in any circumstance, we’d be hard pressed to make any decisions at all, and completely incapable of making what we’d call “intelligent” ones.

We’ve got so good at this sort of thing that we don’t just see and respond to our immediate environment: We create immensely complex models of the world and hypothesize about the rules that underpin it, building ever more abstract and ever more complex representations of the world, which we can then manipulate to our ends. And when we find individuals who are particularly good at this type of thing, we call them “intelligent.”

And where do our estimations of intelligence come from? From being things acting in the world—and abstract computer emulations of the world simply aren’t things acting in the world. The philosopher John Searle puts it nicely. We could create an immensely powerful computer simulation of the brain that matches what we might think of as the computational power required for human intelligence. Is that emulation “intelligent”? Suppose you create an equally accurate computer emulation of the stomach—you wouldn’t shove a piece of pizza into the disc drive and expect it to be digested.

Fusing intelligence

There’s no doubt that machines will continue to make incredible leaps and bounds in their capabilities: They’ll be better able to map and navigate their environments, monitor and anticipate our needs, learn and translate languages, prove mathematical theorems, spot patterns in data to generate new hypotheses, and all sorts of other impressive feats of computation. But that is what they will be—impressive feats of computation.

Those who hype, boost, and preach the coming of AI super-intelligence are generally looking in the wrong places at the wrong problems, and will continue to be disappointed if they continue to do so. Just like those alchemists of old.

That doesn’t mean there’s nothing to AI or AI research; after all, all that alchemical research eventually laid the foundations for the modern discipline of chemistry. But the fact of the matter is that we’re several fundamental theoretical breakthroughs away from creating any substantive form of general artificial intelligence.

So all you AI-boosters, futurists, visionaries, and associated hangers-on, please stop wasting your time (and ours) telling us how amazing, different, scary, and exciting the world is going to be when we can transmute base metals into gold (“We’re all going to be rich, I tell you! Rich!”). Instead, spend more time considering the fundamental structures that underpin your subject matter to see if what you’re so worried about is even possible—let alone probable, never mind imminent.

Then your words might be worth their weight.