Landmark AI Deal: Will Europe Succeed as the World’s Digital Regulator?

The European Union has agreed on a sweeping new law to regulate artificial intelligence – hoping to set a global benchmark for the powerful new technology.

After a 36-hour negotiating marathon, EU policymakers struck an agreement on the EU AI Act. The challenge now will be to show that the new rules can be enforced and can keep up with a fast-moving technology.

The final, drawn-out negotiations focused on AI’s riskiest uses, including for national security, and on whether to regulate general-purpose systems such as OpenAI’s ChatGPT and Google’s Bard, as well as their underlying large language models.

European parliamentarians wanted strict limits. Governments, led by France, demurred, insisting on a broad exemption for any AI system used for military or defense purposes and worrying that heavy restrictions on general AI models would prevent European competitors such as Mistral AI and Aleph Alpha from competing in this strategic and fast-growing market.

In a compromise, broad AI models such as GPT-4 and Bard will face new transparency requirements, obliging them to reveal their training datasets and energy consumption. They will have to watermark their content as AI-generated or AI-manipulated. Most facial recognition will be banned, along with predictive policing software that assesses an individual’s risk of committing future crimes and software that categorizes people based on race, political opinions, or religious beliefs.

In return, the European Parliament accepted an exemption for AI applications developed for military or defense uses. The law also allows law enforcement to use biometric identification to prevent terrorist attacks or to locate the victims or suspects of serious crimes.

While European policymakers hailed the law as revolutionary, much technical work remains to fill in the details. A final text is not expected to be published until late January. The law will go into effect in two years, during the first half of 2026; the deadline is shortened to six months for the banned practices and to one year for general-purpose AI models.

Europe’s AI Act was first proposed in 2021, before ChatGPT and Bard were launched. The introduction of those chatbots turned a technical debate into a hot political one. Here’s a breakdown of the newly agreed deal:

Scope

The regulation accepts the main elements of the OECD’s definition of artificial intelligence. Along with the national security exemption, most free and open-source software will be excluded. So will social media recommendation systems.

But transparency obligations will apply to all general AI models, requiring them to publish a detailed summary of their training data “without prejudice of trade secrets.” The most powerful AI models, deemed to entail a “systemic risk,” will face additional obligations to assess and track societal risks and to detail their cybersecurity protections.

The regulation includes a list of high-risk use cases that present dangers to safety and fundamental rights. AI systems that fall into this category will be subject to a strict risk-management and data-governance regime. High-risk areas include education, employment, critical infrastructure, public services, law enforcement, and border control.

Enforcement

Companies that violate the regulation’s bans will face harsh fines of up to 7% of global turnover.

An AI Office will be established within the European Commission to oversee the rules. National authorities will sit on a new European Artificial Intelligence Board to ensure consistent application. An advisory forum will gather stakeholder feedback, including from civil society. A scientific panel of independent experts will advise on the regulation’s enforcement, flagging potential systemic risks.

Expect contentious PR battles, fines, remedies, and long, drawn-out court cases. The AI Office’s budget remains undefined. When the EU’s General Data Protection Regulation took effect in 2018, it introduced the world’s toughest rules for protecting people’s online data, but no serious fines were imposed until this year. In response, the EU moved to centralize enforcement of the new AI rules in the Brussels-based European Commission.

Law Enforcement Exemptions

Although EU governments introduced several exemptions for law enforcement agencies, public bodies using high-risk systems must register them in an EU database. For police and border control authorities, a dedicated confidential database will be established, accessible to an independent supervisory authority.

International Impact

European policymakers were outspoken about their desire to set a global AI benchmark. While the US has issued a series of executive orders and regulated how the government can use AI, it has stopped short of passing legislation covering the private sector, relying instead on voluntary commitments.

When Europe passed its GDPR privacy rules, Japan, the UK, India, Israel, and other democracies followed suit. The success of the AI Act may largely depend on whether other democracies do the same with this new law. Companies will have to decide whether to adopt the European framework for all their AI products, limit its application to products distributed in Europe, or avoid launching their products in Europe altogether.

It shapes up as a make-or-break test for the “Brussels effect.”

Luca Bertuzzi is EURACTIV’s Technology Editor.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
