
Europe’s new AI rules are here — and they’re coming for Silicon Valley.
The landmark legislation, which became law last year, began its rollout early this month with provisions banning certain “unacceptable risk” AI applications, including social scoring systems and manipulative AI techniques. Violations of the European Union’s AI Act can result in fines of up to 7% of global revenue or €35 million (almost $37 million), whichever is greater.
The rules have sparked a transatlantic showdown, with the Trump administration vowing to protect American tech companies from European overreach. But the U.S. tech industry is already preparing for compliance — and similar regulations are gaining momentum in state capitals across America.
The clash highlights a growing divide in approaches to AI governance. While the E.U. has opted for comprehensive regulation aimed at ensuring AI development aligns with European values and rights, the U.S. has passed nothing comparable at the federal level. Former President Joe Biden issued only an executive order outlining voluntary AI safety guidelines, and even that minimal effort was rolled back by President Donald Trump, who advocates light-touch oversight to promote innovation and is pushing back against anything that cuts against that agenda, including the AI Act.
“These are American companies whether you like it or not,” Trump told the crowd at the World Economic Forum in Davos last month, characterizing the E.U.’s regulatory approach as “a form of taxation.” His administration has taken an increasingly confrontational stance, with Vice President JD Vance warning at a Paris AI summit that the U.S. “cannot and will not accept” foreign governments “tightening screws on U.S. tech companies.”
Big Tech leaders have echoed these concerns. Meta CEO Mark Zuckerberg criticized what he called Europe’s “institutionalized censorship,” while Meta’s policy chief Joel Kaplan indicated the company would seek Trump administration intervention if it felt unfairly targeted by E.U. enforcement.
European officials maintain their regulatory framework is both necessary and fair. “When we are doing business in other countries we have to respect their rules,” said E.U. Commissioner Henna Virkkunen, emphasizing that the rules apply equally to American, European, and Chinese companies. But the pressure seems to be having some effect: Virkkunen also told Reuters that the Commission will review its rules to eliminate overlapping regulation.
“We will cut red tape and the administrative burden from our industries,” she said.
Civil society organizations have urged the E.U. to stand firm. A coalition of NGOs recently warned the European Commission President against being “bullied by the likes of [Elon] Musk and Trump into weakening its DSA and DMA enforcement,” referring to the E.U.’s broader tech regulation framework.
While politicians trade barbs across the Atlantic, the reality looks quite different on the ground, where companies are already figuring out how to follow the AI Act.
“It doesn’t matter where the company is, it matters where products and services and consumers are,” said Rayid Ghani, professor at Carnegie Mellon University’s Heinz College. “If they touch E.U. people or residents in any way, then it applies to them. The companies very much know that.”
The first major deadline, which arrived on Feb. 2, requires AI literacy among staff working on AI products. While Ghani said this initial requirement may not dramatically change how companies build products — partly due to a lack of established training programs — companies are already preparing for more substantial requirements coming in 2027. Many have experience with this kind of regulatory preparation from when the E.U.’s privacy law, GDPR, took effect in 2018.
“They’re doing exactly the same types of things they did for GDPR,” Ghani said. “They’re taking inventory of what AI systems they have in place, categorizing each piece by risk level, and figuring out how they’re going to monitor and ensure compliance.”
Meanwhile, regardless of the Trump administration’s stance, U.S. states aren’t waiting for federal action.
“States are looking at precedents,” Ghani said. “They’re looking at the E.U. AI Act because that was the first thing, but they’re also looking at things like the AI Risk Management Framework and the Biden executive order.”
More than a dozen states — including California, Texas, Virginia, and New York — are considering or implementing laws that mirror key aspects of the E.U. AI Act, particularly around “algorithmic discrimination” in automated decision systems. Colorado became the first state to pass comprehensive AI legislation last year, and others are following suit with bills that require risk management plans and impact assessments for AI systems used in “high-risk” contexts like employment, education, and financial services.
While the E.U.’s full regulations won’t kick in until 2027, this patchwork of state laws could reshape the American AI landscape much sooner. And Silicon Valley might have a harder time fighting multiple battles in its backyard than just taking on Brussels.
“Each state is trying to figure out what to do, which things are horizontal, which things are vertical, which is going to be able to pass,” Ghani said. “Especially now that they’re realizing that things aren’t going to move at the federal level.”
—Jackie Snow, Contributing Editor