
EU Approves World’s First AI Regulations—Here’s What To Know


Topline

The European Union approved regulations for artificial intelligence on Wednesday, establishing the world’s first comprehensive framework governing AI amid concerns that the quickly developing technology could pose risks to humanity.

Key Facts

The EU’s AI Act—which received final approval from the European Parliament—regulates AI based on “its potential risks and level of impact.”

High-risk AI systems, such as those used in critical infrastructure or medical devices, will face stricter obligations, requiring them to “assess and reduce risks,” be transparent about data usage and ensure human oversight.

Some AI applications will be banned outright because they “threaten citizens’ rights,” including emotion recognition systems in schools and workplaces.

Biometric identification systems—applications used to identify people in public spaces—can only be used by law enforcement to find victims of trafficking and sexual exploitation, to prevent terrorist threats and to identify people suspected of committing a crime.

The regulations also require labels for AI-generated images, video or audio content.

What To Watch For

The AI Act is expected to become law in May, following final approval from EU member states. A complete set of regulations—including rules governing chatbots—will be in effect by mid-2026, according to the European Parliament, which noted each EU country will establish its own AI watchdog agency.

Big Number

$38 million. That’s the maximum fine for violating the AI Act, or up to 7% of a company’s global annual revenue, whichever is higher.

Crucial Quote

Dragos Tudorache, a parliament member who helped draft the AI Act, said: “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it, the technology, helps us leverage new discoveries, economic growth, societal progress and unlock human potential.”

Key Background

The EU began drafting regulations for AI in 2018 in an effort to become the leading regulator of the fast-developing technology. An early draft of the AI Act became available in early 2021, though it did not initially address “general-purpose” models like those underlying ChatGPT and other chatbots. Lawmakers agreed on terms for the AI Act in December 2023, adopting a risk-based approach: the higher the risk a system poses, the more restrictions and oversight it faces. Europe’s rules follow calls by tech executives for regulation amid concerns the technology was developing too quickly. Former Google CEO Eric Schmidt said last year the technology could pose “existential risks,” while OpenAI chief executive Sam Altman said his company—which developed ChatGPT—was “a little bit scared” of AI’s potential, even as he called it “the greatest technology humanity has yet developed.”

Tangent

President Joe Biden signed an executive order on AI in October 2023, outlining “the most sweeping actions ever taken to protect Americans from the potential risks of AI.” The order required some companies to share results of safety testing and other information with the government, among other things. At least seven U.S. states have also proposed bills that would regulate the technology, according to the Associated Press.

Further Reading

Forbes: “EU Officials Reach Deal On ‘Historic’ AI Regulation”

“E.U. Agrees on Landmark Artificial Intelligence Rules”
