New Atlanticist

December 11, 2023


Experts react: The EU made a deal on AI rules. But can regulators move at the speed of tech?  

By Atlantic Council experts

Ahead of the curve or behind the times? That’s what some are asking just days after European Union (EU) policymakers reached a deal on the world’s first comprehensive artificial intelligence (AI) rules in the form of the AI Act. The law would subject AI technologies to different requirements according to risk, requiring “limited risk” AI systems to abide by transparency rules and banning AI tools that pose “unacceptable” risk from the EU entirely. But the rules aren’t slated to come into effect until 2025. Will the legislation keep up with the explosion of AI tools, or will it get stale? And will other countries follow soon? Our (human) experts gave us their takes below.

Click to jump to an expert analysis:

Frances G. Burwell: A marker for how democracies should deal with AI 

Rose Jackson: Big questions remain about what’s in the Act and how it will work with other legislation

Graham Brookie: For a peek at what this Act could do, look to other EU tech policy

Konstantinos Komaitis: The agreement includes carveouts that could lead to abuse

Kenneth Propp: An attempt to strike the right balance on AI use by law enforcement

Nicole Lawler: The EU must strongly enforce the AI Act

Trisha Ray: The EU has opened the door to accounting for AI’s harm to the environment


A marker for how democracies should deal with AI 

The agreement Friday by EU member states and the European Parliament on the AI Act opens an era in which governments will move beyond codes of conduct and voluntary guidelines in their efforts to manage this new technology. The law bans some uses of AI that infringe on human rights and civil liberties. It also requires that developers of “high risk” AI systems—systems that affect how companies treat their workers, as well as uses involving critical infrastructure, law enforcement, border control, and other sensitive areas—provide details on training, ensure transparency for users, and offer opportunities for appeal and redress.

The AI Act’s aspirations mirror those expressed in the Biden administration’s recent executive order on AI and Blueprint for an AI Bill of Rights. But Brussels has created real rules, backed up with serious potential fines. Some issues of concern to US companies, such as the inclusion of social media recommender systems as high risk, are not in the final text, although many questions about implementation remain. But overall, the United States and the EU are headed in the same direction.

The EU’s AI Act clearly sets criteria for differentiating between those who seek to protect citizens from the abuses of AI—while still availing themselves of the opportunities presented by this new technology—and those who seek to use AI to bolster the power of the state. China’s efforts to develop a system of social scoring are clearly contrary to the ethos of the AI Act, and such scoring is indeed banned under it. Thus, the AI Act is not only the first serious attempt to regulate this quickly evolving technology but also a marker for how democracies in particular should deal with the challenges and opportunities of AI.

Frances G. Burwell is a distinguished fellow at the Atlantic Council’s Europe Center and a senior director at McLarty Associates.


Big questions remain about what’s in the Act and how it will work with other legislation

We don’t actually know exactly what is in the EU’s newly agreed AI Act, meaning you’d be wise to view definitive takes with a bit of skepticism. We do know that the marathon session in which European policymakers rushed to meet the end-of-year deadline focused on where to draw lines on national-security exemptions and police uses of related technologies, how to account for major consumer-facing generative AI models (like those from OpenAI, Google, and others), and which practices or uses to ban outright, such as monitoring inferred emotions in the workplace. Importantly, while we wait for the final text of the AI Act in the next few weeks, we know it won’t come into force for another two years.

All of this drops at the exact moment the EU is finalizing the details of its last groundbreaking package of tech-related bills—the Digital Services Act and Digital Markets Act. Those two laws not only already apply to elements of AI technologies (such as the use of algorithms on social media and commerce-related sites), but they will also establish a complex and important transparency and data-access regime, which will set a global standard for what tech companies are expected to share about how they operate and what people do with their technologies. Undoubtedly, those standards will impact (if not apply directly to) elements of the AI Act when it comes to life. So for now, I’d watch that space more closely.

Those wondering whether or how the United States might take a similar approach in governing this widespread and often speculative technology should remember that the United States still needs to figure out how to pass comprehensive federal privacy or data protection laws to even have a shot at meaningfully engaging. 

Rose Jackson is the director of the Democracy + Tech Initiative at the Atlantic Council’s Digital Forensic Research Lab. She previously served as the chief of staff to the Bureau of Democracy, Human Rights, and Labor at the US State Department.


For a peek at what this Act could do, look to other EU tech policy

The EU AI Act is important because it is first—not because it is the most comprehensive. There will be no public text of the AI Act for at least several weeks unless it is leaked, so it is hard to make a concrete assessment of what it does or does not do. That said, the AI Act will undoubtedly be designed to build on the EU’s Digital Services Act and Digital Markets Act, which are public and now being implemented. In particular, the transparency and information-sharing standards in the Digital Services Act will likely be the most solid indicator of what the AI Act could, eventually, do. In the world of AI governance, the White House’s executive order stands out as more concrete guidance to industry, but it sorely needs a legislative companion in the US Congress.

Graham Brookie is the vice president and senior director of the Atlantic Council’s Digital Forensic Research Lab.


The agreement includes carveouts that could lead to abuse

On Friday, after intense negotiations that stretched across three days and lasted thirty-six hours in total, the European Commission, the Council of the European Union, and the European Parliament reached a political agreement on the details of the EU’s AI legislation. A political agreement, however, does not necessarily reflect the expectations citizens have, and in that sense Europe is still quite far from having a comprehensive piece of legislation on AI.

For the past few years, European policymakers have focused on drafting principles that would guide the way Europeans and others understand and accept the use of AI in their societies. After last week’s agreement, the AI Act has the potential to impose strict rules on high-risk AI applications and to enforce much-needed mandatory transparency rules, both of which could be considered a welcome evolution for the use of AI. However, given that there is no final text and that the language on technical standards still needs to be fleshed out, there is still work to be done to ensure that any such standards respect the fundamental rights and freedoms of European citizens.

The agreement Europe’s main institutions reached continues to have carveouts that could lead to abuse and create the conditions for AI to be used for mass surveillance. More specifically, even though the trilogues concluded with the intention to ban emotion recognition in the workplace and schools, its use could still be allowed for law enforcement. The same goes for biometric categorization, where there is a general prohibition but some narrow exemptions for law enforcement, such as use of the technology to prevent attacks or to identify victims of kidnapping. And, of course, national security provides an additional layer of exemptions that gives the EU’s member states the opportunity to abuse AI for their own purposes. As policymakers entered final negotiations, seventy civil society groups and thirty-four individual experts sent an urgent letter to the Council of the EU, the European Commission, and the European Parliament urging them not to trade away European citizens’ rights.

What the OpenAI saga revealed is how volatile the AI industry is and how AI is another arena of massive concentration of power, both market and societal. This creates an additional point of pressure for Europe to get AI regulation right. No matter the pressure, though, Europe should make sure that its legislation has strict and clear safeguards that limit its potential for abuse and are compatible with human rights considerations; this becomes particularly important if one considers that this legislation will be the first of its kind for a democracy and will inevitably be compared to China’s vision for AI legislation. As jurisdictions around the world look for inspiration, Europe needs to set an example that is respectful of citizens and their rights.

Konstantinos Komaitis is a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council’s Digital Forensic Research Lab and a nonresident fellow and senior researcher at the Lisbon Council.


An attempt to strike the right balance on AI use by law enforcement

Use of AI by law enforcement can yield real advances for public safety, but it also can put civil liberties at risk. In pursuing a comprehensive approach to regulating AI, the EU legislature did not shy away from the challenge of striking the right balance. 

On the one hand, civil libertarians, well represented in the European Parliament, were determined to avoid broad law-enforcement use of this technology out of fear of the development of a panopticon. On the other hand, the member states that make up the Council demanded that their law-enforcement authorities be permitted to deploy remote real-time biometric identification systems at high-profile public events of potential volatility, such as the 2024 Summer Olympics in Paris.

So it was no surprise that agreeing on the rules for law enforcement use of AI was one of the two last issues to be resolved in the marathon trilogue negotiations among the EU institutions. (The other, more visible end-game issue was regulating AI foundation models.) The result, it seems, was a classic EU legislative compromise. 

Although precise details of the compromise are not yet known, the AI regulation, in general, will specify limited objectives for which law enforcement may use remote biometric identification systems in public spaces. These include “cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes,” according to the Council. This final list in fact appears to be quite close to the exceptions that the Commission had proposed in the first place.   

As anyone who has taken a flight recently from a US airport knows, the use of facial recognition in commercial contexts is expanding rapidly in this country. Only a small number of US local jurisdictions have banned or restricted law enforcement’s use of remote biometric identification systems—and there is no realistic prospect of federal action. The EU compromise is bound to have an impact on the US debate, just as the General Data Protection Regulation has. 

Kenneth Propp is a nonresident senior fellow with the Atlantic Council’s Europe Center and former legal counselor at the US Mission to the European Union in Brussels.


The EU must strongly enforce the AI Act

The EU reached a political agreement on the AI Act, becoming the first jurisdiction to set comprehensive rules for the regulation of AI technologies. Now, the key remaining questions before the AI Act comes into force surround implementation and enforcement: Will the AI Act strike the right balance between setting necessary guardrails and providing a platform for European firms to innovate, particularly as Europe grows increasingly concerned about its own tech competitiveness? Will the EU’s internal politics prevent enforcement of the AI Act and diminish the intended “Brussels effect”?

In the weeks ahead of the final trilogue, France, Germany, and Italy lobbied to remove obligations for foundation model systems, fearing the impact on their domestic startups. As a result, Friday’s political agreement includes innovation measures, such as regulatory sandboxes, that would enable new high-risk AI systems to be road-tested in advance. However, experts argue that while European firms wait for EU regulators to approve their AI systems, their products risk becoming outdated in the rapidly evolving market.

Looking forward, the EU will need to strongly enforce the AI Act. Without that, experts argue, the legislation will inevitably lack teeth, and member states could rely on weak enforcement of the AI Act to protect their interests. This is not without precedent: Millions of European small and medium-sized enterprises were reportedly not compliant with the General Data Protection Regulation years after the law went into effect, and it stands to reason that enforcement of the AI Act could take a similar route. To oversee the Act’s implementation, the Commission will set up a new office, which will require new hires that the EU’s budget may not be able to support. With the European Council summit approaching, member states are reviewing the EU’s seven-year budget, and the latest draft negotiations look to reserve funds for Ukraine while limiting budget increases in line with the demands of more frugal member states. Limited funds for a new AI office will likely mean fewer staffers and fewer resources to do the hard work of seeing the AI Act fully implemented.

EU lawmakers were determined to reach an agreement on the AI Act this year, in part to drive home the message that the EU leads on AI regulation, especially after the United States unveiled an executive order on AI, the United Kingdom hosted the international AI Safety Summit, and China developed its own AI principles. Next year’s European elections in June are also quickly closing the window of opportunity to finalize the Act under this Parliament. Despite these challenges, the EU’s success in finalizing the first comprehensive regulatory framework on AI is an impressive feat. These AI rules could add to the “Brussels effect,” fueling the adoption of similar laws elsewhere in the world. While the AI Act’s provisions largely regulate activities within the EU’s digital market, globally accessible AI systems, including OpenAI’s ChatGPT and Google’s Bard, will be affected by the rules. The Biden administration seems to have taken a page from the draft of the EU’s AI Act, increasing the opportunity for regulatory harmonization between the two trading partners. For the EU to continue to scale up its vision for AI governance, however, it must implement the Act in full, which is easier said than done. It must also continue to work with allies and partners to reinforce cooperation on technical AI standards and definitions to help avoid regulatory fragmentation.

Nicole Lawler is a program assistant at the Atlantic Council’s Europe Center.


The EU has opened the door to accounting for AI’s harm to the environment

The EU AI Act has the opportunity to set out the world’s first environmental regulations for AI. In his post announcing the agreement, EU Commissioner for Internal Market Thierry Breton noted that the trilogue “also agreed that future-proof technical standards are at the heart of the regulation. This also includes certain environmental standards.” Harm to the environment is considered grounds for categorizing AI systems as high risk. Yet there are no standards for reporting the environmental impact of AI tools, nor a certification system for these tools along the lines of the Leadership in Energy and Environmental Design (LEED) system for buildings. The Act must also prioritize net-zero over carbon-neutral approaches, especially given the Jevons paradox, in which efficiency-oriented solutions lead to increased demand. Large models, in particular, consume massive amounts of energy and should be subject to more stringent reporting. One 2019 study quantified the emissions associated with natural language processing tools, estimating that training large models (such as GPT-2) can produce as much carbon dioxide as five cars emit over their entire lifetimes.
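As a rough check on that comparison: assuming the study in question is the widely cited 2019 paper by Strubell and colleagues (an assumption, since the study is not named here), its headline figures were roughly 626,000 pounds of CO2-equivalent to train one large model with neural architecture search, against roughly 126,000 pounds for an average American car over its lifetime, fuel included:

\[
\frac{626{,}000\ \text{lbs CO}_2\text{e (one training run)}}{126{,}000\ \text{lbs CO}_2\text{e (one car, lifetime)}} \approx 5
\]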

Trisha Ray is an associate director and resident fellow at the Atlantic Council’s GeoTech Center.


Image: A statement from the European Commission displayed on a smartphone, with AI imagery and EU stars in the background, in this photo illustration taken in Brussels, Belgium, on December 12, 2023, as EU policymakers reached a political agreement on what is set to become the global benchmark for regulating artificial intelligence. (Photo by Jonathan Raa/NurPhoto)
