Lawmakers in the European Union (E.U.) last week overwhelmingly approved legislation to regulate artificial intelligence in an attempt to guide member countries as the industry rapidly grows.
The Artificial Intelligence Act (AI Act) passed 523–46, with 49 abstentions. According to the E.U. parliament, the legislation is meant to “ensure[] safety and compliance with fundamental rights, while boosting innovation.” It is far more likely, however, that the law will instead hamstring innovation, particularly given that it regulates a technology that is changing quickly and is not yet well understood.
“In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed,” the law reads.
The legislation classifies AI systems into four categories. Systems deemed unacceptably high risk—including those that seek to manipulate human behavior or ones used for social scoring—will be banned. Also off limits, refreshingly, is the use of biometric identification in public spaces for law enforcement purposes, with a few exceptions.
High-risk systems, such as those used in critical infrastructure and public services, will be subject to risk assessment and government oversight. Limited-risk applications and general-purpose AI, including the foundation models that power chatbots like ChatGPT, will have to adhere to transparency requirements. Minimal-risk AI systems, which lawmakers expect to make up the bulk of applications, will be left unregulated.
In addition to addressing risk in order to “avoid undesirable outcomes,” the law aims to “establish a governance structure at European and national level.” The European AI Office, described as the center of AI expertise across the E.U., was established to carry out the AI Act. The law also sets up an AI Board to serve as the E.U.’s primary advisory body on the technology.
Costs of running afoul of the law are no joke, “ranging from penalties of €35 million or 7 percent of global revenue to €7.5 million or 1.5 percent of revenue, depending on the infringement and size of the company,” according to Holland & Knight.
Practically speaking, the regulation of AI will now be centralized across the European Union’s member nations. The goal, according to the law, is to establish a “harmonised standard,” a routinely used measure in the E.U., for such regulation.
The E.U. is far from the only governing body passing AI legislation to bring the burgeoning technology under control: China introduced its interim measures on generative AI in 2023, and President Joe Biden signed an executive order on October 30, 2023, to rein in the development of AI.
“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said subsequently at a White House event. Though the U.S. Congress has yet to settle on long-term legislation, the E.U.’s AI Act could inspire it to do the same. Biden’s words certainly sound similar to the E.U.’s approach.
But critics of the E.U.’s new law worry that the set of rules will stifle innovation and competition, limiting consumer choice in the market.
“We can decide to regulate more quickly than our major competitors,” said Emmanuel Macron, the president of France, “but we are regulating things that we have not yet produced or invented. It is not a good idea.”
Anand Sanwal, CEO of CB Insights, echoed the thought: “The EU now has more AI regulations than meaningful AI companies.” Barbara Prainsack and Nikolaus Forgó, professors at the University of Vienna, meanwhile wrote for Nature Medicine that the AI Act views the technology strictly through the lens of risk without acknowledging the benefit, which will “hinder the development of new technology while failing to protect the public.”
The E.U.’s law isn’t all bad. Its restrictions on the use of biometric identification, for example, address a real civil liberties concern and are a step in the right direction. Less ideal is that the law carves out many exceptions for national security, leaving member states free to decide for themselves when privacy concerns may be set aside.
Whether American lawmakers take a similar risk-based approach to AI regulation remains to be seen, but it’s not far-fetched to think it may only be a matter of time before the push for such a law materializes in Congress. If and when it does, lawmakers should take care to encourage innovation as well as to safeguard civil liberties.