What tech leaders Elon Musk, Mark Zuckerberg, and Brad Smith have to say about AI regulation

US Senate hearings with tech leaders resulted in a consensus that AI regulation is needed, but it will be a long time before it becomes reality

Mr. Smith goes to Washington.
Photo: Leah Millis (Reuters)

Silicon Valley leaders and US lawmakers met this week in a series of hearings that produced one general consensus: AI regulation is necessary. The tech executives who appeared made up a veritable who’s who of the burgeoning AI industry, including Meta’s Mark Zuckerberg, X’s Elon Musk, Nvidia CEO Jensen Huang and chief scientist William Dally, OpenAI’s Sam Altman, Palantir’s Alex Karp, and Microsoft’s Brad Smith.

Here’s a round-up of what several of these leaders—some of whom are not especially known for their friendliness toward regulation in general—had to say on AI regulation:

Microsoft president Brad Smith

Smith called licensing “indispensable” in high-risk scenarios, but he acknowledged it won’t address every issue. “You can’t drive a car until you get a license,” Smith said. “You can’t make the model or the application available until you pass through that gate.”


Nvidia’s chief scientist William Dally

“Many uses of AI applications are subject to existing laws and regulations that govern the sectors in which they operate. AI-enabled services and high-risk sectors could be subject to enhanced licensing and certification requirements when necessary.”


He added that no country or company controls AI development. “While US companies may currently be the most energy-efficient, cost-efficient, and easiest to use, they’re not the only viable alternatives for developers abroad,” Dally said. “Safe and trustworthy AI will require multilateral and multi-stakeholder cooperation or it will not be effective.”

Meta CEO Mark Zuckerberg

In prepared remarks, Zuckerberg wrote that the government should create regulation that supports innovation. He continued that two defining issues for AI right now are safety—with the onus on companies, not the government, to build and deploy products responsibly and to build safeguards into their generative AI models—and access to state-of-the-art AI.

X owner Elon Musk

Musk told reporters there was a need for a “referee” to ensure the safety of AI and added that a regulator would “ensure that companies take actions that are safe and in the interest of the general public.”


AI regulation in the US is still far away

It’s not clear when or how the US government will regulate companies that provide AI applications. Given how long it has taken the EU to advance its own AI laws, immediate action appears unlikely. Senators Richard Blumenthal and Josh Hawley proposed an AI framework last Friday that focused on licensing “high-risk” AI models and establishing an independent body to oversee that licensing.


One challenge in writing laws on AI is that it is not always possible to explain why an algorithm does what it does. Academics who study AI argue that regulation should therefore focus on the outcomes of AI systems: when there is evidence that an AI hiring tool has discriminated against a job candidate, for example, the priority becomes determining who is responsible.