Why tech companies need philosophers—and how I convinced Google to hire them

What does AI mean for human exceptionalism?
Image: Reuters/Dado Ruvic

I have spent the better part of the last two years trying to convince companies like Google, Facebook, Microsoft, DeepMind, and OpenAI that they need to hire philosophers.

My colleagues and I—a small collective of academics who make up a program called Transformations of the Human at the Berggruen Institute, a Los Angeles-based think tank—think that the research carried out by these companies has been disrupting the very concept of the human that we, in the West particularly, have taken for granted for almost half a millennium.

It’s not only that, though. These companies have helped create realities that we can no longer navigate with the old understanding of what it means to be human.

We need new ones—for ourselves, so that we are able to navigate and regulate the new worlds we live in, but also for the engineers who create tech products, tools, and platforms, so that they can live up to the philosophical stakes of their work.

To make that possible, we need philosophers and artists working alongside computer and software engineers.

What’s at stake

Until relatively recently, we in the modern West knew what it meant “to be human.”

We knew that we had what no one and nothing else had: intelligence.

We knew that our potential to think, to wonder, to know, made us exceptional and set us apart from the rest of creation. By this theory, we humans were more than just another animal. And we were much more than just a machine.

While we had intelligence, animals had only instinct, and machines had mere mechanisms.

We also knew that there is an unbridgeable difference between natural and human-made artificial things, and between organisms and machines, as well as between living, sentient things and non-living or non-sentient things. We knew that only natural, living things—i.e., organisms—can sense, perceive, and think.

We knew all of this with unwavering certainty—until we knew better.

Today none of these distinctions—nor the concept of the human that they helped stabilize—holds with the same certainty. And this loss of certainty has much to do with the rise of artificial intelligence. (It also has to do with a lot of other things, like microbiome research and climate change, but in this article I am going to focus on AI.)

Going deep

Take, for example, deep learning, in which machines built from many layers of artificial neurons are able to learn from data and to retain what they have learned. This enables the machine to reason and to make decisions.

Given the abilities of these neuronal machines, it does not seem very plausible to assume we humans are intelligent while machines are not. Or that only living things can be sentient and can think, investigate, and understand. Or that there is a categorical distinction between natural things and artificial things.

On the contrary, it appears that there is a continuity between the natural and the artificial, between humans and machines.

A philosophy and arts department … for Google

As these observations make amply clear, the relatively recent advent of AI is a far-reaching philosophical event. And AI labs and tech companies are our most potent philosophical laboratories. They are powerful experimental spaces within which people create new concepts of the human and the world around us.

In places like Google, Facebook, Microsoft, and OpenAI, engineers elaborate radically new notions of what it means to be human, to live a life, and to live together.

The vast majority of cutting-edge AI research is carried out in companies. The problem is that most of the people who lead these companies don’t know that they are radically reinventing our definition of what it means to be human. They think of themselves as just people who work at tech companies.

One of the major ambitions of my work is to change this. I want these labs and companies to understand their enormous philosophical responsibility: the self-aware design of new possibilities of being human and of living together.

Which is why my colleagues and I have placed philosophers and artists in places like Google.

Let me underline that while we work with companies, the purpose of our work is not to help big tech devise some novel marketing strategy: our goal is not to provide philosophical and artistic means for corporate ends.

Rather, our ambition is to engage the major AI companies in a philosophical and artistic project of massive scope, in the experimental search for and articulation of what it means philosophically to be human in our modern world.

History lesson

The modern concept of the human—the concept that we until recently took for granted—first surfaced in Europe in the 1630s. This was a time when more and more reports about non-European life forms arrived in Europe, making philosophers wonder what all these people had in common.

The answer they gradually came up with assumed the form of two differentiations:

On the one hand, they argued, humans are more than mere nature (more than animals and plants). And on the other hand, they insisted, humans are other than (or qualitatively different from) mere machines.

The criterion of differentiation was intelligence: Humans have it, or so the story went, and nature and machines don’t.

At the time, these two differentiations served two powerful purposes: To argue that all humans are defined by intelligence—the capacity to think, examine, reflect, and know—was a most powerful tool against the unfounded authority of the clergy.

And to argue that nature, as opposed to humans, is devoid of intelligence allowed the early modern philosophers to exempt humans from the cosmos (of which they had been a part up until then) and to reduce nature from a metaphysical surrounding organized by divine laws to physical matter organized in terms of mechanisms.

It is difficult to exaggerate the importance these two differentiations (more than nature/other than machines) have had for our modern experience of self and of the world that surrounds us.

Almost all of the vocabulary we reserve for the distinctively human—art, culture, society, history, politics—silently suggests the more/other:

Art and culture are the opposite of nature. Society and politics are a space of action and organization that opens up when humans leave the status naturalis, or animal state.

History is an exclusively human realm, made up of successive layers of human action. 

Where (and why) our definition of the human fails us

It was sometime around 2013 when I first recognized that the modern concept of the human—again, the very concept that has organized our sense of self and our experience of reality—fails us.

Take the microbiome, which has drawn growing attention in science, health, and wellness circles in the past few years. There is no single organ system that is not contingent on microbial metabolites. Many of the neurotransmitters active in our bodies, including most of our serotonin, are produced in the gut with the help of resident bacteria. No one can tell where a human ends and their microbiome begins.

Or, take AI. Once AI researchers succeeded in building machines endowed with neural nets that learn, that experience, that remember, that think and reason, the assumption of an unbridgeable difference between humans and machines—between intelligence and mechanism, between the animate and the inanimate—became untenable.

It seemed clear that we cannot continue to live by concepts we know are both untenable and destructive to the planet. But the question that concerned me most was what to do with it all.

Can we reinvent the concept of the human?

This question troubled me for a long time, until I realized that fields like AI and microbiome research or synthetic biology not only undermine the historic way we think of the human—they also allow for new possibilities for understanding the world.

It suddenly dawned on me that I could look at each one of these fields, not just AI and the microbiome, but also synthetic biology, biogeochemistry, and others, as if they were a kind of philosophical laboratory for re-articulating our reality.

Isn’t AI, by undoing the formerly exclusive link between humans and intelligence, opening up whole new possibilities of understanding how the world is organized and how humans fit into this world?

Intelligence is now no longer an exclusively human property, but something animals and machines have as well.

By establishing a continuum between the natural and the artificial, AI research invites us to think of machines as natural, and of engineering as a kind of natural (meaning biological) practice.

From accident to design

We are living in an era of a major, most far-reaching philosophical event: A radical re-articulation of what it is to be human and of the relation between humans, nature, and technology.

Yet at present, no one really talks formally about this philosophical quality of tech. Hence no one attends to it, with the inevitable consequence that the sweeping re-articulation of the human unfolds around us in a haphazard, entirely unreflective way.

Shouldn’t we try to change this?

When I shared my enthusiasm with my colleagues in academia, I found that what was exciting to me was an unbearable provocation for many others.

My suggestion that the question concerning the human has migrated into the natural sciences and engineering—that is, into fields not concerned with the traditional study of the human and humanity at all—was received as a threat by academics in the arts. If humans are no longer more than nature or machines, then what are the arts even good for?

My insistence that the best way to defend the human was to re-invent it was dismissed.

But my suggestion has hardly been to abandon philosophy or the arts. Rather, I want to bring into focus how fields like AI (or microbiome research or synthetic biology) are actually philosophical fields.

But the problem was not just in the arts: Most engineers I talked to were too busy being engineers. They were fully absorbed by their research questions and displayed little interest in what I desperately, and clumsily, called the philosophical stakes of their work.

When education is part of the problem

I entered one of the biggest crises of my adult life: I had to accept that the university—the place I cherished, loved, called home—was part of the problem rather than the solution.

The re-invention of the human in terms of philosophy, art, and engineering could not occur, at least for now, within the academy as we know it. In 2016, I decided to give up my endowed chair and leave the university. A little more than a year later, as luck would have it, investor and Berggruen Institute founder Nicolas Berggruen offered me the opportunity to build a small, experimental program on the contemporary transformations of the human that would allow me to test my ideas.

Philosophy + art + engineering

In spring 2018, I began calling researchers at AI and biotech labs and companies, suggesting that they hire philosophers and artists to work alongside their engineers.

I explained, with all my enthusiasm, that AI labs and companies are the unrecognized but most powerful and cataclysmic philosophical laboratories in which new concepts of being human, of politics, of understanding nature, of understanding and practicing technology, are thought up.

I told my interlocutors that their work is at the very center of a vast philosophical event, of similar historical proportions to the Renaissance or the Scientific Revolution.

I called, followed up, visited—and hoped my enthusiasm would be infectious and help open the door.

Today, we have philosophy and art teams at Element AI, Facebook, and Google, and also at AI labs at MIT, Berkeley, and Stanford. Our researchers are in regular conversation with DeepMind, OpenAI, and Microsoft.

This is just the beginning.


My work over the last two years has led me to conclude that these research and collaboration platforms I’ve had the fortune to build at the Berggruen Institute can only be a first step in a much larger process.

What we need now is a completely new model for an educational institution, one that can produce a new kind of practitioner.

We need a workforce that thinks differently, one that can understand engineering, from AI to microbiome research to synthetic biology to geoengineering and many other fields, as a set of philosophical and artistic practices that ceaselessly re-invent the human.

Almost every month, you’ll likely read about another billion-dollar endowment for a new tech school. On the one hand, there’s nothing wrong with this—I agree we always need better, smarter tech.

On the other hand, these tech schools tend to reproduce the old division of labor between the faculty of arts and the faculties of science and engineering. That is, they tend to understand tech as just tech and not as the philosophical and artistic field that it is.

What we need are not so much tech schools, as institutions that combine philosophy, art, and technology into one integrated curriculum.

We need a school that combines philosophy, art, and engineering, one that can produce the workforce of the future—like a contemporary Bauhaus, focused not exclusively on architecture but on technology as well.

If we fail to embrace these changes today, if we fail to recognize that radically new things are occurring, and if we fail to treat the radically new as opportunity and responsibility, we run the risk of leaving the definition of the world we live in to the conservative forces that stubbornly continue to frame our changing world in the terms of the old one.

And that is a certain recipe for disaster.