The EU’s AI Act, approved by the European Parliament last week, will impose significant obligations on businesses.
The world’s first major set of rules on the use of the technology classifies AI by risk, from “unacceptable” uses, which are banned outright, through high-risk systems down to limited- and minimal-risk ones, with obligations scaling down accordingly.
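The tiers are laid out in legal text rather than code, but a minimal sketch can make the structure concrete. In the Python below, the tier names follow the Act’s published framework, while the example use cases and their tier assignments are illustrative assumptions, not text from the law:

```python
from enum import Enum

# Illustrative only: the Act defines these tiers in legal text, not code,
# and the example use cases and tier assignments below are assumptions.

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: assessment, logging, human oversight"
    LIMITED = "transparency duties, e.g. disclosing that AI is in use"
    MINIMAL = "no new obligations"

EXAMPLE_USE_CASES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```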
“We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” said Internal Market Committee co-rapporteur Brando Benifei. “Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected.”
There’s a total ban on biometric categorization systems based on sensitive characteristics, and on the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Also forbidden are emotion recognition in workplaces and schools, social scoring, predictive policing based solely on profiling a person or assessing their characteristics, and AI that manipulates human behavior or exploits people’s vulnerabilities.
There are, however, certain exceptions for law enforcement—and these have drawn fire from rights groups.
“The European Parliament set out to ban biometric mass surveillance in Europe, but is ending up legitimizing it. Chilling monitoring of our behavior and ubiquitous real-time face surveillance in public spaces, error-prone biometric identification used on CCTV recordings even for petty offenses, racial classification of persons, unscientific AI, video lie detector technology—none of these dystopian technologies will be off limits for EU governments, including illiberal governments such as Hungary’s,” said German Pirate Party MEP Patrick Breyer.
“Rather than protecting us from these authoritarian instruments, the AI Act provides an instruction manual for governments to roll out biometric mass surveillance in Europe,” Breyer added.
High-risk AI systems will carry extra obligations, requiring them to assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.
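The Act states these duties in legal rather than technical terms, so what follows is only a rough sketch of how a deployer might satisfy the use-log and human-oversight duties in practice. The function name, record fields and CV-screening scenario are all hypothetical, not a format the Act prescribes:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the AI Act does not prescribe a log format or API.
# This sketches one way a deployer might record the "use logs" and
# human-oversight sign-off required for high-risk systems.

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_act_audit")


def log_high_risk_decision(system_id: str, input_summary: str,
                           output_summary: str, reviewer: str | None) -> dict:
    """Record one use of a high-risk AI system, flagging missing oversight."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which registered system was used
        "input_summary": input_summary,    # what was fed in (redacted/summarized)
        "output_summary": output_summary,  # what the system produced
        "human_reviewer": reviewer,        # who exercised oversight, if anyone
    }
    if reviewer is None:
        audit_log.warning("No human reviewer recorded for %s", system_id)
    audit_log.info(json.dumps(record))
    return record


# Example: a CV-screening tool, a high-risk use case under the Act.
log_high_risk_decision("cv-screener-v2", "candidate profile #1042",
                       "shortlisted", reviewer="hr.lead@example.com")
```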
There are, though, concerns that the legislation doesn’t go far enough.

“A key feature of the AI Act, introduced after sustained civil society advocacy, is the obligation for high-risk AI deployers to conduct fundamental rights impact assessments,” wrote Laura Lazaro Cabrera and Iverna McGowan of the Center for Democracy and Technology in a brief on the act. “However, this obligation is limited in scope as it only applies to public sector bodies and a narrow subset of private bodies.
“While the result of a FRIA must be reported to a national authority,” they added, “nothing in the Act makes the deployment of a high-risk AI conditional on the FRIA being reviewed or approved by authorities. In other words, once carried out and reported on, the FRIA does not seem to have any meaningful impact in the roll-out of a high-risk AI.”
Meanwhile, general-purpose AI systems, and the models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training.
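The Act requires a “sufficiently detailed summary” of training content but does not mandate a format, so any implementation is speculative for now. As a hedged sketch, a provider might aggregate an internal dataset manifest by source and license; the manifest entries and field names below are invented for illustration:

```python
from collections import Counter

# Hypothetical sketch: the Act does not define a summary format, so this
# simply aggregates an assumed dataset manifest by source and license.

manifest = [
    {"source": "common-crawl-subset", "license": "mixed/web", "docs": 1_200_000},
    {"source": "licensed-news-archive", "license": "commercial", "docs": 300_000},
    {"source": "public-domain-books", "license": "public domain", "docs": 45_000},
]

def summarize_training_content(entries: list[dict]) -> dict:
    """Aggregate document counts by source and by license category."""
    by_source = Counter()
    by_license = Counter()
    for e in entries:
        by_source[e["source"]] += e["docs"]
        by_license[e["license"]] += e["docs"]
    return {"by_source": dict(by_source), "by_license": dict(by_license)}

print(summarize_training_content(manifest))
```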
The new rules will come into force in stages, but companies will need to start preparing now.
“Businesses have no time to lose when it comes to getting ready for the EU AI Act—the most significant step to regulating AI in the world,” said Ray Eitel-Porter, responsible AI lead at Accenture UKIA.
“Leaders can take steps to deploy AI tools governed by strong principles and controls that promote fairness, transparency, safety and privacy, for powerful technology to support their people, customers, and society in a positive way,” Eitel-Porter added. “Implementation can take at least two years for a large company, the full extent of the grace period allowed by the EU AI Act for high-risk AI systems.”
The new law is likely to have a ripple effect around the world, said Sabeen Malik, vice president of global government affairs and public policy at security firm Rapid7.
“The EU’s approach will certainly go on to inspire other nations’ approaches to AI regulation,” she said.
“Industry self-regulations and best practices are likely where you will see the US and UK sit. They will aim to strike the balance of flexible regulation and pro innovation, as well as cautiously follow AI use cases that won’t be regulated by other laws, such as data protection, consumer protection, product safety, and equality law.”