
5 Things You Should Know About the New EU AI Regulation


European Union (EU) lawmakers on Wednesday (March 13) passed a groundbreaking law on artificial intelligence (AI), putting the bloc well ahead of the U.S. in regulating the technology.

The legislation aims to significantly alter the use of AI in various sectors across Europe, including healthcare and law enforcement, by prohibiting specific uses of the technology deemed “unacceptable” and strictly regulating “high-risk” applications. Among the practices banned by the EU AI Act are AI-driven social scoring systems and biometric tools designed to infer a person’s race, political views or sexual orientation.

In early December, the EU reached a provisional political agreement, which Parliament later confirmed by a wide margin of 523 votes in favor, 46 against and 49 abstentions. The next steps are final adoption by Parliament and formal endorsement by the EU Council, after which the law will be published and enter into force.

Here’s what you should know:

What is the new EU AI Regulation?

The EU has introduced a landmark AI regulation, a significant step in governing AI globally. The law positions the EU as a leader in AI governance and oversees the use of AI across various sectors, including healthcare and law enforcement.

What practices does the EU AI Act ban?

The EU AI Act bans specific “unacceptable” uses of AI. These include social scoring systems, biometric tools that infer a person’s race, political views or sexual orientation, the use of AI for emotion recognition in educational and professional settings, and certain types of automated profiling for predictive policing.

What are considered ‘high-risk’ AI applications under the law?

The law identifies “high-risk” AI applications, particularly in education, hiring and government services. These applications will face stringent requirements, including transparency and accountability measures, to ensure they are used responsibly.

Are there any transparency requirements for AI companies?

Yes. Companies developing large, complex AI systems, such as OpenAI, will be subject to new transparency requirements. The law also requires AI-generated content, such as deepfakes, to be clearly labeled to prevent misinformation and to ensure the public is aware of AI’s role in content creation.

How does the EU’s legislative action on AI compare to other regions?

The EU has moved quickly to legislate AI, reflecting the urgency with which it views the rise of tools like ChatGPT. The regulation, first proposed in 2021 and set to take full effect in about two years, reflects EU policymakers’ proactive approach to AI’s potential challenges. It stands in stark contrast to the situation in the United States, where comprehensive federal AI legislation remains a work in progress despite efforts by key lawmakers.
