
EU AI Act’s Passage Starts the Clock for US Companies to Comply


European Union lawmakers on Wednesday approved a sweeping new regulation on artificial intelligence, a key step toward final adoption of the act. Now the clock starts ticking toward implementation of the law’s major provisions, which come into effect over the next three years.

The EU AI Act takes a risk-based approach, focusing on how the technology will affect individuals. Uses of AI deemed to pose unacceptable threats are banned, likely effective by the end of this year. High-risk applications—uses of AI that can have significant consequences for the people affected—face additional scrutiny and compliance obligations.

“The AI Act is a starting point for a new model of governance built around technology,” Dragos Tudorache, a member of the European Parliament and co-rapporteur, or leader, of the bill, said in a statement released Wednesday by the Parliament. “We must now focus on putting this law into practice.”

The law will be implemented in stages stretching into 2027.

Next Few Weeks:

While no political questions remain about the law, a few formalities must be completed before it comes into force. The text will be vetted by EU lawyers, translated into all of the European Union’s official languages, and given formal parliamentary approval in mid-April. The Council, which comprises EU member state governments, will also formally endorse it. Finally, the act comes into force 20 days after the text is published in the EU’s official journal, probably in May.

In Six Months:

The act’s ban on unacceptably risky AI takes effect six months after entry into force, probably the end of this year. The act will prohibit uses of AI including:

  • Systems that manipulate people’s behavior
  • Emotion-recognition systems used in workplace or education settings
  • Biometric-categorization systems that infer characteristics like a person’s religious beliefs, political opinions, or sexual orientation

Violations of the prohibited-use rules also carry the biggest fines: up to 7% of a company’s global annual revenue or 35 million euros ($38 million), whichever is higher.

In 12 Months:

Rules for general-purpose AI models—including generative artificial-intelligence models like ChatGPT that can produce text, images, and video—come into effect one year after the law’s entry into force. Developers of general-purpose models will face transparency requirements, including providing summaries of the content they trained their models on, and AI-generated content must be identified or watermarked.

In 24 Months:

The majority of the AI Act begins to apply two years after the law’s entry into force, probably mid-2026. This set of rules covers many of the obligations for AI uses deemed high-risk, including:

  • AI systems that make decisions about whether to admit job or school applicants, as well as those used to evaluate workers and students
  • AI decision-making about an individual’s creditworthiness or insurance risk
  • Certain uses of biometric systems
  • AI systems used by a judicial authority to research and interpret facts and the law, and those used for alternative dispute resolution

Requirements for providers and deployers of high-risk AI include registering the systems and establishing internal procedures governing their use.

In 36 Months:

High-risk AI uses already covered by EU product safety law—including AI embedded in medical devices, toys, elevators, and boats—will face some obligations under the AI Act, but with a longer on-ramp to the date of application.

Read more: World’s Most Extensive AI Rules Approved in EU Despite Criticism

Companies React

News of the vote was largely welcomed by the tech industry.

“I commend the EU for its leadership in passing comprehensive, smart AI legislation,” Christina Montgomery, IBM’s vice president and chief privacy and trust officer, said in a statement.

Eric Loeb, executive vice president of government affairs at Salesforce, said in a statement that his company “applauds EU institutions for taking leadership in this domain.”

Wednesday’s vote is a sign companies should start paying attention, if they’re not already, said Navrina Singh, the CEO and founder of the AI governance company Credo AI.

“It is not only EU-based AI companies that will need to comply, but any AI-enabled enterprise doing business in the EU or offering a solution that impacts EU citizens,” Singh said in a statement. “Even for multinational companies not currently operating in the European market, this legislation is a wake up call.”

Read more: Why US Companies Should Care About the EU AI Act
