
With the EU AI Act incoming this summer, the bloc lays out its plan for AI governance | TechCrunch


The European Union has taken the wraps off the structure of the new AI Office, the ecosystem-building and oversight body that’s being established under the bloc’s AI Act. The risk-based regulatory framework for artificial intelligence is expected to enter into force before the end of July — following the regulation’s final approval by EU lawmakers last week. The AI Office’s new structure takes effect on June 16.

The AI Office reflects the bloc’s bigger ambitions in AI. It will play a key role in shaping the European AI ecosystem over the coming years — playing a dual role of helping to regulate AI risks, and fostering uptake and innovation. But the bloc also hopes the AI Office can exert wider influence on the global stage as many countries and jurisdictions are looking to understand how to approach AI governance. In all, it will be made up of five units.

Here’s a breakdown of what each of the five units of the EU’s AI Office will focus on:

One unit will tackle “regulation and compliance”, including liaising with EU Member States to support harmonized application and enforcement of the AI Act. “The unit will contribute to investigations and possible infringements, administering sanctions,” per the Commission, which intends the Office to play a supporting role to the EU country-level governance bodies the law will also establish for enforcing the broad sweep of the regime.

Another unit will deal with “AI Safety”. The Commission said this will focus on “the identification of systemic risks of very capable general-purpose models, possible mitigation measures as well as evaluation and testing approaches” — with general-purpose AI models (GPAIs) referring to the recent wave of generative AI technologies, such as the foundational models that underpin tools like ChatGPT. The EU said the unit will be most concerned with GPAIs carrying so-called “systemic risk” — which the law presumes for models trained above a certain compute threshold (currently 10^25 floating-point operations).
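For a concrete sense of how that compute-based presumption works, here is a minimal, purely illustrative Python sketch; the 10^25 FLOP figure comes from the Act, but the constant name, function and structure are hypothetical, and the Commission can revise the threshold.

```python
# Illustrative sketch only -- not an official compliance tool. The AI Act
# presumes "systemic risk" for a general-purpose AI model when the cumulative
# compute used to train it exceeds 10^25 floating-point operations; the
# Commission can revise this figure, so the constant below is indicative.

SYSTEMIC_RISK_FLOP_THRESHOLD: float = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model would be presumed to carry systemic risk
    under the compute-based criterion alone."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    print(presumed_systemic_risk(5e25))  # True: above the 10^25 FLOP presumption
    print(presumed_systemic_risk(3e24))  # False: below the threshold on this criterion
```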

The AI Office will have responsibility for directly enforcing the AI Act’s rules for GPAIs — so the relevant units are expected to conduct testing and evaluation of GPAIs, as well as to use their powers to request information from AI giants to enable that oversight.

The AI Office’s compliance unit’s work will also include producing templates GPAIs will be expected to use, such as for summarizing any copyrighted material used to train their models.

While having a dedicated AI Safety unit seems necessary to give full effect to the law’s rules for GPAIs, it also looks intended to respond to international developments in AI governance since the EU’s law was drafted — such as the UK and US announcing their own respective AI Safety Institutes last fall. The big difference, though, is the EU’s AI Safety unit is armed with legal powers.

A third unit of the AI Office will dedicate itself to what the Commission dubs “Excellence in AI and Robotics”, including supporting and funding AI R&D. The Commission said this unit will coordinate with its previously announced “GenAI4EU” initiative, which aims to stimulate the development and uptake of generative AI models — including by upgrading Europe’s network of supercomputers to support model training.

A fourth unit is focused on “AI for Social Good”. The Commission said this will “design and implement” the Office’s international engagement for big projects where AI could have a positive societal impact — such as in areas like weather modelling, cancer diagnoses and digital twins for artistic reconstruction.

Back in April, the EU announced that a planned AI collaboration with the US, on AI safety and risk research, would also include a focus on joint working on uses of AI for the public good. So this component of the AI Office was already sketched out.

Finally, a fifth unit will tackle “AI Innovation and Policy Coordination”. The Commission said its role will be to ensure the execution of the bloc’s AI strategy — including “monitoring trends and investment, stimulating the uptake of AI through a network of European Digital Innovation Hubs and the establishment of AI Factories, and fostering an innovative ecosystem by supporting regulatory sandboxes and real-world testing”.

Having three of the five units of the EU AI Office working — broadly speaking — on AI uptake, investment and ecosystem building, while just two are concerned with regulatory compliance and safety, looks intended to offer further reassurance to industry that the EU’s speed in producing a rulebook for AI is not anti-innovation, as some homegrown AI developers have complained. The bloc also argues that trustworthiness will foster adoption of AI.

The Commission has already appointed the heads of several of the AI Office units — and the overall head of the Office itself — but the AI Safety unit’s chief has yet to be named. A lead scientific advisor role is also vacant. Confirmed appointments are: Lucilla Sioli, head of the AI Office; Kilian Gross, head of the Regulation and Compliance unit; Cecile Huet, head of the Excellence in AI and Robotics unit; Martin Bailey, head of the AI for Societal Good unit; and Malgorzata Nikowska, head of the AI Innovation and Policy Coordination unit.

The AI Office was established by a Commission decision back in January and started preparatory work — such as deciding the structure — in late February. It sits within the EU’s digital department, DG Connect — which is (currently) headed by internal market commissioner, Thierry Breton.

The AI Office will eventually have a headcount of more than 140 people, including technical staff, lawyers, political scientists and economists. On Wednesday the EU said some 60 staff are in place so far. It plans to ramp up hiring over the next couple of years as the law is implemented and becomes fully operational. The AI Act takes a phased approach to its rules, with some provisions set to apply six months after the law comes into force, while others get a longer lead-in of a year or more.

One key upcoming role for the AI Office will be in drawing up Codes of Practice and best practices for AI developers — which the EU wants to play a stop-gap role while the legal rulebook is phased in.

A Commission official said the Code is expected to launch soon, once the AI Act enters into force later this summer.

Other work for the AI Office includes liaising with a range of other fora and expert bodies the AI Act will establish to knit together the EU’s governance and ecosystem-building approach, including the European Artificial Intelligence Board, a body which will be made up of representatives from Member States; a scientific panel of independent experts; and a broader advisory forum comprised of stakeholders including industry, startups and SMEs, academia, think tanks and civil society.

“The first meeting of the AI Board should take place by the end of June,” the Commission noted in a press release, adding: “The AI Office is preparing guidelines on the AI system definition and on the prohibitions, both due six months after the entry into force of the AI Act. The Office is also getting ready to coordinate the drawing up of codes of practice for the obligations for general-purpose AI models, due 9 months after entry into force.”

This report was updated with the names of confirmed appointments after the Commission provided the information.
