“To have strong innovation, you need a strong state”: How Silicon Valley gets the future wrong

Steve Fuller’s message: the future won’t just happen by itself.
Image: Akshat Rathi/Quartz

Not too many years from now, your day will look very different. You will have self-driving cars to rid you of your nightmare commute and AI assistants to take care of everyday drudgery: from grocery shopping to setting up appointments. Smart drugs and nanobots loaded in your blood will keep your body and mind performing at their peak. Wearable sensors and smart clothing will ensure that your doctor knows if anything goes wrong.

It’ll be a life that even the Jetsons will be jealous of, and it’ll be yours if you just let Silicon Valley do its thing—right?

Steve Fuller doesn’t agree. The University of Warwick philosopher and sociologist is a techno-optimist, but he thinks this future won’t become a reality without a few critical things that we’re currently lacking—in particular, stronger, more involved governments that both fund innovation and regulate it intelligently. Earlier this year, I met him at the futurism conference Brain Bar Budapest to find out what we need to fix. Here is our conversation, edited and condensed for clarity.

Quartz: We’ve always thought about the future. What is different about future-thinking today?

Fuller: In a certain sense, HG Wells could have had this conversation in 1900. What’s different now is that more innovative interventions are possible and they have the potential to have a greater impact. It’s a scale issue. More inputs and larger impact. That’s the difference compared to, say, 1900.

The way to think about the future has a lot to do with how we think about the past. Not in the straight way of extrapolating things. But that the kind of goals and visions people have for the future have been promoted earlier. They are in remembrances in that sense, but have been updated because of science and technology.

Do you agree with the future-thinking that’s happening in Silicon Valley, the cradle of innovation?

It’s one thing for Elon Musk to come up with a spaceship that can take people on their first trip to Mars. He has the capital to do that. But that’s not an institution.

This is where Evgeny Morozov, the tech critic, is right. Tech entrepreneurs don’t think in terms of institutions. They think in terms of solutions. They think there is some kind of magic bullet: the right tech or the right platform. They think the world will mobilize around it to do what it needs to do to make it work. That’s not how the world works.


The kind of intelligence we are lacking is institutional intelligence—the kind that takes an innovation and makes it part of the general infrastructure so it benefits everyone. That’s the bigger problem to solve.

We tend to underestimate the role of the state. It provides the initial capital investment along with a vision of the future. Even in the cold war, what I was told as a kid were defense industries turned out to be the seedbed from which Silicon Valley grew.

The cold war was the golden age for science and technology. There was globalization then, but it was dominated by visions of the future. It wasn’t dominated by some nebulous notion of the market, which is the kind of world we live in now. There’s no incentive to have a large vision.

Maybe the vision today is about being more inclusive.

That would be true, as long as the state has some power. While it’s great to promote science and technology, what’s not clear in transhumanist philosophy is the role of the state and the distribution of resources, so that everybody can eventually benefit from it.

In order to have strong innovation, you need a strong state. That’s where these marketeers get it wrong. The more you weaken the state, the less you’re going to have a platform for a big vision that can encompass a lot of people and drive long-term investment. If you have a bunch of businesses competing against each other, they are all going to try to find a little bit of advantage over the other. There won’t be space to think about this long-term, large-scale shift.

What will be the key challenge we will face in the future?

Science and technology are the drivers of the long-term trend. This is why I agree with the transhumanist agenda, generally speaking—namely, that it will make the biggest difference to the human condition in the long term. So all my disagreements on ideas such as extending life indefinitely or uploading the brain are about timeframe, not about the direction of travel.

The problem at the moment is that we are in a politically unsophisticated environment for dealing with these ideas. By that I mean we have no regulations. If these ideas accelerate, it won’t be because governments are making policies to encourage them so that people can benefit. It will be because there aren’t any regulations and things can just happen.

Consider China. It’s basically an ethics-free zone. Even when things get invented in the West, like the gene-editing technology CRISPR, they immediately get sandbagged with ethical issues, which delays the introduction of the technology. Whereas China, which has no scruples about these things, can pick them up immediately.

This is something we will see more of in the way science and technology develops. My view is not that the West ought to be ethics-free like China, but that right now our policymakers, lawyers, politicians, and so forth should be thinking strategically. They should take it for granted that such situations will come up, rather than treating them as science fiction, and then think about the public consequences of such technologies. This is the time to start thinking about this stuff. That’s the aspect of forecasting that is completely lacking at the moment: the anticipatory policy management that new technology requires.

I say it as someone who does not want to stop new technologies. Regulation is not necessarily about stopping things. It’s about channelling them and making sure that the benefits are distributed properly. It’s about holding people who do harm accountable.

Here the EU bears some of the burden, because it has an overly precautionary approach towards science and technology. You can see the debacle it caused in the case of genetically modified organisms. The EU’s handling of it was a nightmare compared to the US. Here in Europe we need some really smart regulatory thinking that anticipates these things are going to happen.

In terms of planning for the future, we can already see its contours pretty clearly, and the transhumanists have their finger on that. But what we don’t have a clue about is the kind of political regime that will be able to manage it for the maximum benefit of the people.

How do you think the next 100 years will play out?

I think things like artificial intelligence and biotechnology will be important not just in pushing the economy but in how our lives are led. That’s obvious. What remains open is whether this will lead to class warfare and increasing inequality. That’s why the modes of economic production driven by science and technology need to relate to the state in a much closer way. Only that can prevent this from becoming a highly class-stratified and potentially divisive system.