Sunday, December 22, 2024

Two Key Moves The EU’s New AI Office Should Make To Foster Innovation


With the AI Act predicted to enter into force this summer, policymakers must now focus on implementing this complex legislation. One of the first steps will be establishing the AI Office, a central body mandated by the Act to coordinate its application, undertake AI safety research, develop Codes of Practice, and investigate compliance issues. But there is a risk that the Office will fall victim to the same bureaucratic hurdles affecting a number of other EU institutions and bodies, resulting in lengthy and complex administrative burdens that ultimately hurt European AI innovation.

To better support firms as they navigate the Act's application across member states, the AI Office should prioritise two tasks. First, it should work closely with member states to support national implementation of the Act, paying attention to specific member state needs. The AI Office should facilitate cooperation between member states on regulatory sandboxes, as well as collaboration with the European High-Performance Computing Joint Undertaking (EuroHPC JU), an initiative to coordinate and pool supercomputing resources across member states. Second, the AI Office should create, as soon as practicable, the Codes of Practice, which outline explicit obligations for General Purpose AI (GPAI) developers and watermarking techniques. The Codes of Practice will act as placeholders for harmonised standards, which will not be available until member states fully implement the AI Act. The AI Office should develop the Codes iteratively to reflect current research and best practice. Firms' voluntary adherence to the Codes creates a presumption of compliance, which would go far towards reducing regulatory complexity for EU AI firms.

Support National Implementation Through the AI Board

The AI Act tasks the AI Office with coordinating its implementation; however, it remains unclear precisely how the Office will do so. The AI Office should use its role as secretariat to the AI Board, a group representing the 27 member states plus the European Data Protection Supervisor (EDPS), to work with and between member states.

To realise this, the AI Office should introduce an AI Board Liaison to manage discussions between the AI Office and the AI Board. As a non-voting member, the Office can offer independent advice on how national authorities should implement the Act. To begin with, the AI Board should operate as the forum where member state national authorities tasked with implementation can talk to each other. The AI Office should leverage this connection to offer targeted support, particularly to member states that may lack the domestic infrastructure to roll out the Act in line with transposition timelines. Because the Act is structured around prohibited and acceptable use cases, some of its enforcement is likely to fall to national sectoral regulators monitoring those use cases. The Office, in coordinating and monitoring implementation, should therefore respond to the specific needs of member states, such as consulting national sectoral regulators, as voiced through the AI Board.

Secondly, the Office should facilitate cross-border regulatory sandboxes. The AI Act mandates a coordinated rollout of regulatory sandboxes to foster cutting-edge AI innovation, requiring at least one sandbox per member state within two years of entry into force. That timeline is too slow given the immense pace of AI innovation. The Office should therefore work with member states on the AI Board to set up fewer, more coordinated cross-border sandboxes, matchmaking among member states with complementary regulatory frameworks. This would ensure a wider and quicker rollout, and make the sandboxes more attractive to firms by giving them access to broader markets. It would also act as a microclimate for testing a broader EU-wide digital single market.

Thirdly, the Office should establish stronger links with the EuroHPC JU to secure compute access for the innovative AI solutions that show the greatest promise. For example, EU member states could nominate their most promising AI solutions through the AI Board, and the AI Office could use these submissions to coordinate with the EuroHPC JU to ensure sufficient access. Challenge grants could also offer heightened access to the EuroHPC JU as a prize.

Iteratively Develop the Codes of Practice

The AI Office must develop the Codes of Practice by Q2 2025. The Codes will serve as the first reference point for AI Act compliance until the EU establishes harmonised standards. As such, creating the Codes should be a top priority, particularly as voluntary adherence to the Codes by GPAI developers generates a presumption of conformity with the Act that would go far towards reducing regulatory complexity.

The Codes must have specific, measurable objectives and key performance indicators (KPIs). Because techniques for AI model evaluation, safety, transparency, and explainability are still emerging, development of the Codes should be agile and iterative. To promote practical compliance, the Codes should be grounded in technical feasibility, achieved by working closely with the scientific panel of independent experts. The AI Office should hold consultations to hear the views of industry and other interest groups. Similarly, as the Act applies to foreign firms wishing to operate within EU markets, the Office should collaborate with the international research community, prioritising best practices and readily implementable solutions.

The AI Act also requires GPAI developers to publish summaries of the data used for model training. Given the tension between protecting trade secrets and providing sufficient information, the Office should thoroughly investigate the level of granularity required for these summaries. For example, it should work with a variety of stakeholders, including the EDPS, the AI Board, the scientific community, and industry, to determine the minimum level of information necessary for compliance. It is key that the Office strikes the right balance and limits unnecessary exposure of model training data: excessive disclosure could distort fair competition, and firms compelled to share trade secrets may be put off from operating in the EU. There is also a security dimension. It is currently unclear what information is useful to nefarious actors, including state actors, and sharing details of these models may inadvertently expose sensitive information that malicious actors can capitalise on. The Office should be alive to these concerns when publishing GPAI developer requirements.

Competitiveness Hinges on Implementation

The implementation of the AI Act comes at a time of flux for both the industry and the technology itself. It is crucial that the AI Office spearheads a pro-innovation approach to AI governance, one that gives AI solutions the chance to develop and diffuse through society. More than that, European competitiveness hinges on this approach and on the attitudes top-level EU institutions express towards emerging technology. The AI Office should not waste the opportunity it has to set the tone for the next mandate, and steer Europe through the AI Act towards AI innovation.

Image Credit: Copyright Flickr/Lisbon Council/Creative Commons.
