
Forum: Global impact of the EU AI Act


Companies that conduct business in the European Union need to prepare now to get into compliance with the EU’s new AI Act

On March 13, 2024, the global landscape of artificial intelligence (AI) changed when the European Parliament voted to approve the European Union’s Artificial Intelligence Act (EU AI Act). At first glance, the Act resembles the EU’s General Data Protection Regulation (GDPR), passed in 2016; but while the GDPR concerns privacy law, the EU AI Act regulates artificial intelligence in detail.

The EU AI Act will have reverberations throughout the world, as it leads the way for the type of legislation that will likely be coming to the United States and other countries, encouraging a paradigm of governing by preemptive regulation rather than by penalty.

An EU AI Act compliance program, or an AI ethical risk and responsibility program, has to be created as an enterprise-wide endeavor. Designing, implementing, scaling, and maintaining the program will require companies’ boards of directors, C-suites, compliance professionals, and team managers to take on and carry out distinct responsibilities.

Building an AI program

To begin implementing these programs, it may be tempting to look solely at the technology itself as the solution. Any proper program, however, will include a combination of people, processes, and technologies from the beginning.

Saskia Vermeer-de Jongh, a partner in AI and digital law with HVG Law LLP, part of the global EY law network, says the Act “is clear that building trust in AI starts with ensuring human oversight. Therefore, it is good that this value is reflected, with the stipulation that safeguards must match the risks, autonomy level, and context of AI usage.”

Vermeer-de Jongh continues, noting that “safeguarding the opportunity that AI brings” necessitates a better understanding of the potential risks of AI and the development of an ability to govern them effectively. Indeed, initiatives and guiding statements from international governing bodies, such as the Organisation for Economic Co-operation and Development (OECD), the G7’s AI Principles, and the Bletchley Park Summit, are testament to this. “The EU AI Act’s detailed legislation provides a level of clarity and certainty for companies across sectors in developing and deploying AI systems.”

The Act itself covers AI systems “placed on the market, put into service, or used in the EU,” which gives it global reach. The requirements of the Act generally apply to three roles: providers, deployers, and users.

The Act’s central mechanism is a tiered, risk-based system that determines the level of oversight a system requires. The first level covers unacceptable-risk systems, which are wholly prohibited. The next level is high-risk: these systems must be registered and bear the burden of proving that they do not pose a significant threat to health, safety, and fundamental rights. This level includes technology used in critical infrastructure, educational and vocational training, product safety, border control management, law enforcement, essential services, administration of justice, and employment. The third level covers limited- and minimal-risk systems; this level is subject to its own transparency obligations, ensuring that humans are informed whenever necessary and fostering trust.
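
To make the tiering concrete, here is a minimal sketch, in Python, of how an organization might record these levels when triaging its own systems. The tier descriptions paraphrase this article, and the use-case mapping is an illustrative assumption, not a legal determination under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers mirroring the Act's structure (labels are paraphrases)."""
    UNACCEPTABLE = "wholly prohibited"
    HIGH = "must be registered; burden of proof on safety"
    LIMITED_OR_MINIMAL = "transparency obligations only"

# Hypothetical triage table mapping use-case categories to tiers.
# The high-risk entries follow the categories listed above; the
# mapping itself is a simplification, not legal advice.
TRIAGE = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "border control management": RiskTier.HIGH,
    "employment screening": RiskTier.HIGH,
    "educational and vocational training": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED_OR_MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier for a use case, defaulting to the lowest."""
    return TRIAGE.get(use_case, RiskTier.LIMITED_OR_MINIMAL)

print(classify("employment screening"))  # RiskTier.HIGH
```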


There are three broad exceptions to the EU AI Act. First, any system developed exclusively for military, defense, or national security purposes is exempted. Second, AI developed exclusively for scientific research is exempted. Third, free and open-source AI, in which the code is publicly available for anyone to use, modify, and distribute, is exempted.

The EU AI Act sets out a phased timeline for compliance. It starts with a ban on prohibited AI systems six months after the Act enters into force. Requirements on general-purpose AI models, those capable of performing a wide variety of tasks either alone or integrated into other applications, including generative AI, take effect 12 months after entry into force. Finally, requirements for high-risk AI systems apply within 24 months.
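
Because the milestones are simple offsets from a single date, they can be computed mechanically. The sketch below assumes the Act’s actual entry-into-force date of August 1, 2024, and uses approximate calendar arithmetic; the results are illustrative estimates, not legal deadlines.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

# The Act entered into force on August 1, 2024.
ENTRY_INTO_FORCE = date(2024, 8, 1)

MILESTONES = {
    "Ban on prohibited AI systems": 6,
    "General-purpose AI model requirements": 12,
    "High-risk AI system requirements": 24,
}

for label, months in MILESTONES.items():
    print(f"{label}: on or around {add_months(ENTRY_INTO_FORCE, months)}")
```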

The maximum penalty for noncompliance with the prohibitions stated in Article 5 of the EU AI Act is the higher of an administrative fine of up to EUR 35 million or 7% of worldwide annual revenue.
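
That ceiling is simply the maximum of two quantities, as a quick calculation with a hypothetical revenue figure shows:

```python
def max_article5_fine(worldwide_annual_revenue_eur: float) -> float:
    """Ceiling for Article 5 violations: the higher of EUR 35 million
    or 7% of worldwide annual revenue."""
    return max(35_000_000.0, 0.07 * worldwide_annual_revenue_eur)

# Hypothetical company with EUR 1 billion in annual revenue:
# 7% of revenue (EUR 70 million) exceeds the EUR 35 million floor.
print(f"EUR {max_article5_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```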

Business implications of the Act

With potentially harsh penalties and complex standards for risk, the EU AI Act could have far-reaching business implications. “Organizations need to start preparing now by ensuring they have regularly updated inventories of AI systems being developed or deployed, assessing which of their AI systems are in-scope of the legislation, and identifying their risk classification and relevant compliance obligations,” says Vermeer-de Jongh, adding that this is particularly important because the three risk classes require different levels of care.
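
One concrete way to maintain such an inventory is a structured record per system. The sketch below is a minimal illustration; the field names and example obligations are assumptions about what an entry might track, not requirements quoted from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One inventory entry per AI system; the fields are assumptions."""
    name: str
    role: str         # "provider", "deployer", or "user"
    in_scope: bool    # placed on the market, put into service, or used in the EU?
    risk_tier: str    # "unacceptable", "high", or "limited/minimal"
    obligations: list[str] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

inventory = [
    AISystemRecord(
        name="resume-screening model",
        role="deployer",
        in_scope=True,
        risk_tier="high",  # employment is among the Act's high-risk categories
        obligations=["registration", "risk assessment", "human oversight"],
    ),
]
print(len(inventory), "system(s) tracked")
```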

Beyond this, she explains, organizations need to have a thorough understanding of the many requirements, risks, and opportunities of this legislation so they can review, evaluate, and adjust their current AI strategy accordingly. “Companies also need to train AI users, maintain transparency in AI operations, ensure high-quality datasets are used for developing AI systems, and uphold robust privacy standards.”

Vermeer-de Jongh also recommends consulting with legal and tech experts in order to navigate the compliance processes. “Finally, companies will need to put in place the appropriate accountability and governance frameworks and keep the appropriate documentation for when the EU AI Act comes into force. Since AI regulations are evolving, companies need to continuously stay updated with the changes to maintain compliance.”

While the EU AI Act is the first legislation of its kind in addressing AI, it is not likely to be the last. That means companies should develop business and risk mitigation plans regardless of where they are located.

“Reflecting the diverse cultural approaches to any regulation, we are seeing different regions adopt distinctly different strategies on AI policy,” Vermeer-de Jongh says. “However, there are some general trends in the EU AI Act, including consistency with the core principles for AI set out by the OECD and endorsed by the G20.” These core principles involve, in particular, respect for human rights, sustainability, transparency, and strong risk management.

“While comprehensive legislation is not expected in the US in the short term, there is general consensus growing there around the need to limit bias, strengthen data privacy, and mitigate the impact of AI on the US workforce.”
