The EU's AI Act is now in effect. Here's what you need to know

The bloc's landmark act is the first major AI law in the world

The European Union’s law to regulate the development, use, and application of artificial intelligence is now in effect.

The AI Act is the first major AI law in the world, and was given the final green light by the bloc’s member states, lawmakers, and executive body, the European Commission, in May. The law harmonizes rules on AI use and development across the EU’s single market.

“This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies,” Mathieu Michel, Belgian state secretary for digitalization, said. “With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”

Although the AI Act is EU legislation, it will have a major impact on global tech companies, especially those in the U.S. Most of the act’s provisions won’t take effect until 2026, however, so companies have months, and in some cases years, to come into compliance.

Here’s what you need to know about the landmark AI law.

What does the AI Act look like?

The AI Act was first proposed by the European Commission in 2020 with the objective of ensuring that AI systems used and developed in the EU are safe and trustworthy. It also aims to ensure that AI systems respect existing laws on the fundamental rights of EU citizens, and to boost AI investment and innovation in the bloc.

The act follows a “risk-based” approach to regulating AI: the higher the risk of harm, the stricter the rules. AI systems categorized as “limited risk” are subject only to transparency obligations, while those in the “high-risk” category must meet certain requirements and obligations to be allowed on the EU market. Those obligations include risk assessment and mitigation measures, as well as the use of high-quality data sets that minimize biases. High-risk AI systems include medical devices and biometric identification systems.

Meanwhile, AI systems used for cognitive behavioral manipulation and social scoring are banned in the bloc, as are systems that use biometric data to infer sensitive characteristics, such as race or sexual orientation, along with certain predictive-policing applications.
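
To make the tiering concrete, here is a minimal sketch in Python of how the risk categories described above map to obligations. The tier names, example systems, and the names OBLIGATIONS_BY_RISK and obligations_for are illustrative paraphrases, not the act’s legal definitions.

```python
# Toy illustration of the AI Act's risk-based tiers described above.
# Tier names and example systems are paraphrased, not legal definitions.
OBLIGATIONS_BY_RISK = {
    "unacceptable": ["banned from the EU market"],      # e.g., social scoring
    "high": [                                           # e.g., medical devices
        "risk assessment and mitigation measures",
        "high-quality data sets that minimize biases",
    ],
    "limited": ["transparency obligations"],            # e.g., chatbots
}

def obligations_for(risk_tier: str) -> list[str]:
    """Look up the obligations attached to a given risk tier."""
    return OBLIGATIONS_BY_RISK.get(risk_tier, [])

print(obligations_for("high"))
```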

Where does generative AI fall into all of this?

Under the act, generative AI models are considered general-purpose AI, or models that can complete a wide range of tasks at close to a human level. Most general-purpose AI models are not considered to pose systemic risks and face limited requirements and obligations, such as transparency about how the models are trained; the most capable models, which are deemed to pose systemic risks, face stricter rules.

The AI Act defines general-purpose AI models as those “capable of generating text, images, and other content.” While general-purpose models, such as OpenAI’s ChatGPT and Google’s Gemini, “present unique innovation opportunities,” they also present “challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed.”

These models are therefore subject to strict requirements with respect to EU copyright law, routine testing, and cybersecurity.

Meanwhile, open-source AI, such as Meta’s Llama models, also falls under the regulation, with some exceptions for developers that make model parameters publicly available and allow for “access, usage, modification, and distribution of the model.”

How are U.S. tech companies impacted?

Most of the advanced AI systems covered by the AI Act are developed by U.S. tech companies, including Apple, Google, and Meta. The law’s reach will likely extend beyond the firms building AI, however, to businesses that use the technology or develop their own systems.

In July, Meta announced it had decided not to release its upcoming and future multimodal AI models in the EU “due to the unpredictable nature of the European regulatory environment.” The decision followed a similar move by Apple, which said in June it would likely not roll out its new Apple Intelligence and other AI features in the bloc because of the Digital Markets Act.

Even though Meta’s multimodal models will be released under an open license, companies in Europe will not be able to use them as a result of the decision, Axios reported, and companies outside the bloc could reportedly be blocked from offering products and services on the continent that use Meta’s models. However, Meta told Axios that a larger, text-only version of its Llama 3 model will be made available in the EU when it’s released.

In June, Meta said it would delay training its large language models on public data from Facebook and Instagram users in the European Union after facing pushback from the Irish Data Protection Commission (DPC).

What are the penalties for not following the rules?

If a company breaches the AI Act, it faces a fine of either a percentage of its global annual turnover from the previous financial year or a predetermined amount, whichever is higher. Fines range from 7.5 million euros ($8.1 million) or 1% of turnover for lesser violations, up to 35 million euros ($38 million) or 7% of global turnover for the most serious breaches.
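
As a rough illustration of the “whichever is higher” rule, here is a minimal sketch in Python using the figures above. The tier labels ("most_serious", "least_serious") and the function name ai_act_fine are hypothetical; the act defines several tiers, and actual penalties depend on the specific violation and regulator discretion.

```python
def ai_act_fine(global_turnover_eur: float, breach: str) -> float:
    """Illustrative only: the fine is whichever is higher, a fixed
    amount or a percentage of global annual turnover."""
    # Hypothetical tier labels; amounts taken from the ranges above.
    tiers = {
        "most_serious": (35_000_000, 0.07),   # e.g., prohibited practices
        "least_serious": (7_500_000, 0.01),
    }
    fixed_amount, pct = tiers[breach]
    return max(fixed_amount, pct * global_turnover_eur)

# For a firm with 2 billion euros in turnover, 7% (140 million euros)
# exceeds the 35 million euro floor, so the percentage applies.
print(ai_act_fine(2_000_000_000, "most_serious"))  # 140000000.0
```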

Fines are proportionally lower for small and medium-size enterprises and startups.