Unlike Big Tech, some AI startups aren’t at all ready to invite regulation

Unlike Google and Microsoft, AI startups have a fragile position in the industry—and they're worried that regulations will cramp them further

[Photo: Google CEO Sundar Pichai. This is not Google's first rodeo with regulators. Credit: Brandon Wade/Reuters]

It isn’t often that you hear the CEOs of big tech firms singing the praises of government intervention. But as the capabilities of artificial intelligence have grown and multiplied, these executives have been calling for greater government involvement in the regulation of AI.

  • “OpenAI believes that regulation of AI is essential,” said Sam Altman, the company’s CEO, in written testimony ahead of his meeting with Congress last month. (It is also true, though, that Altman then traveled to Europe and argued against the European Union’s stricter brand of tech regulation.)
  • “With the technology now at an inflection point...I still believe AI is too important not to regulate, and too important not to regulate well,” wrote Sundar Pichai, the CEO of Google, in an op-ed published in the Financial Times.
  • “[W]e will obviously engage with any regulation that comes up in any jurisdiction,” said Satya Nadella, Microsoft’s CEO, on a recent conference call with analysts and investors. “But quite honestly, we think that the more there is any form of trust as a differentiated position in AI, I think we stand to gain from that.”

But while Big Tech seems united, AI startups are less enthused about regulation. These smaller companies either oppose regulation entirely at this moment or want to operate only within a loose set of rules. In part, this is because they face a steeper uphill battle to survive as small businesses at a time when the field of AI is in such ferment.

Why do Big Tech CEOs want AI regulation?

The CEOs of Google and Microsoft are taking a proactive approach to regulation in part because they can afford to do so, and because they’ve had many run-ins with regulators in the past. Perhaps calling for regulation in advance is a way to preempt another battle, said Alexandre Lebrun, the CEO of Nabla, a Paris-based startup that develops AI-assisted note-taking software for clinical settings.


“I think first—and, you know, coming from Meta, and probably the same for Google—they have been in the crosshairs of regulations for a long time,” said Lebrun, who formerly worked at Meta. “And so, asking for regulation, I think it’s like being a good student, showing some goodwill.” And getting on the good side of regulators is always beneficial, Lebrun added, given that government agencies are increasingly being urged to break up Big Tech firms.

Perhaps, too, OpenAI’s embrace of regulation is a way to solidify and protect its own dominant position in the industry. To be sure, many startups use OpenAI’s GPT at the moment. But another large language model could always come along to make GPT irrelevant. Building such a model under heavy or even moderate regulation would be harder than it was for OpenAI, which built GPT when no such regulations existed.

The problems with AI regulation

The AI tools from Big Tech stables are already well under development, but much of the value of generative AI will come from other companies and users inventing their own applications. Regulation can get in the way of that, said Noam Shazeer, the CEO of Character.AI, a chatbot service that allows people to have open-ended conversations with “personalities” and “characters.” Some of these personalities may be wholly original; others may mimic fictional characters or real-life individuals like Elon Musk.


Shazeer, who was formerly with Google Brain, said that when his company first launched last year, its traffic grew exponentially in China for around three days until the government blocked access to the service. In February, China ordered apps and websites enabling access to ChatGPT to terminate their services. Western regulations are unlikely to be as draconian, but they will still impose some form of control, and Shazeer worries that they will stifle innovation. It’s much too early to know what the best applications will look like, he said. “We’re at the infancy of the technology,” he added. “The best applications haven’t been invented.”

If Big Tech truly yearns for regulation, it’s getting its wish quickly. The EU, which just passed a draft of its AI Act, is at the forefront of creating the first set of AI regulations. The act proposes classifying AI systems by risk. High-risk systems, including AI hiring tools and exam-scoring software, would face stricter compliance standards, such as data validation and documentation. Generative AI systems, like ChatGPT, would have to disclose that their output is AI-generated and provide detailed summaries of the copyrighted data used to train them. AI systems used for predictive policing and biometric identification would be banned outright.


Critics of the AI Act find it too restrictive. “Regulation is necessary, but a very high-level regulation,” said Lebrun. Building AI systems for military use should be prohibited, for instance, he said. “But I think we should regulate the intent of the model and not how you should build these models.” Such restrictive policies could force ambitious AI startups to move out of the EU, Lebrun added.