
EU greenlights AI Act as tech giants commit to responsible model development


The European Union’s AI Act, a piece of legislation designed to mitigate the risks posed by artificial intelligence, today received the final greenlight it required to become law.

The milestone came against the backdrop of a high-profile AI safety summit in Seoul. During the event, 16 tech firms committed not to develop AI models that could pose severe risks to users.

The EU’s AI Act was first drafted by the European Commission, the bloc’s executive branch, in 2021. Member states agreed on the text of the legislation this past February and the proposal was approved by the European Parliament a few weeks later. The final sign-off that the law received today was issued by the Council of the European Union, one of the EU’s two legislative bodies.

The legislation bans certain AI systems, such as those that could be used to manipulate consumers’ behavior, within the EU. It permits the deployment of some AI applications that are deemed high-risk by regulators, but only if they’re used in compliance with a strict set of safety and transparency rules. Machine learning models that pose a limited risk will only be subject to “very light transparency obligations.”

The legislation creates several new regulatory mechanisms to ensure the new AI rules are upheld. In particular, it calls for the European Commission to create an AI office focused on regulatory compliance enforcement. The EU will also establish a panel of independent experts to support enforcement activities, an AI board comprising representatives of member states and an advisory forum tasked with providing technical input.

Companies that breach the new AI rules could face steep fines. According to Reuters, violations will carry penalties ranging from €7.5 million or 1.5% of a company’s annual revenue to €35 million or 7% of sales. The AI Act is expected to officially go into effect within a few weeks, although some provisions will only start applying in about three years.

The Council signed off on the legislation today against the backdrop of the AI Seoul Summit, a two-day policy event dedicated to AI safety. The summit drew representatives from the G7 group of major economies, the EU, South Korea, Singapore and Australia, as well as a number of technology executives.

During the event, 16 tech firms pledged to take a series of steps designed to mitigate the risks posed by their AI models. The participants include Amazon.com Inc., Microsoft Corp. and Google LLC. Several well-funded AI startups, including OpenAI, also signed up to the commitments, as did companies from China and the United Arab Emirates.

The signatories will publish documents detailing how they plan to evaluate the potential risks posed by their AI models. Additionally, the companies participating in the initiative have agreed to “not develop or deploy a model or system at all” if the identified risks exceed a certain threshold. They also plan to publicly disclose how that threshold will be defined.

