
Artificial intelligence regulation in Europe: Exploring the AI Act


On May 21, 2024, the Council of the European Union gave its final approval to the Artificial Intelligence (AI) Act, the first European regulation on AI, concluding a legislative journey that began in 2021. The process involved the European Parliament, the European Commission, and the Council of the European Union, and culminated in an initial political agreement on the draft AI Act in December 2023, a preliminary approval in January 2024, and a first crucial approval by the European Parliament on March 13, 2024.

The AI Act aims to balance the protection of rights and freedoms with the facilitation of a “space” conducive to technological innovation. Its primary goal is to ensure the safe deployment of AI systems in Europe, aligning their use with the fundamental values and rights of the EU while encouraging investment and innovation within the continent.

The Regulation fits into a broader European and international framework of initiatives that, although fragmented, have consistently sought to address the critical issues raised by the use of AI tools, with the same dual focus on protection and development. Also noteworthy are the actions undertaken by organisations such as the OECD and the United Nations, notably the OECD “Principles on Artificial Intelligence”, adopted in 2019, and the UN interim report “Governing AI for Humanity”.

Defining artificial intelligence

To offer a precise definition of AI tools while allowing for future adaptability, the Act aligns with the definition set by the Organisation for Economic Co-operation and Development (OECD). According to the Regulation, an AI tool is a “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This broad definition intentionally excludes simpler traditional software systems and non-adaptive programming approaches. The Commission is tasked with developing specific guidelines for applying this definition.

Scope of application

The Regulation specifies which entities it covers:

  • It applies to both public and private organisations, whether established in the EU or in third countries, that place AI tools on the European market or put them into service there.
  • AI systems used for military, defence, or national security purposes, as well as those developed exclusively for scientific research and development, are excluded from its scope.

Regulatory approach

The AI Act adopts a “risk-based” approach: the higher the risk to people’s safety and rights, the stricter the rules. AI systems are categorised into four risk levels, with an illustrative code sketch after the list:

  1. Unacceptable risk: AI systems that contravene EU values and principles and are therefore banned.
  2. High risk: These systems can significantly and negatively affect people’s rights and safety, so market access is granted only if certain obligations and requirements are met, such as conducting a conformity assessment and adhering to European harmonisation standards.
  3. Limited risk: These systems are subject to limited transparency rules due to their relatively low risk to users.
  4. Minimal risk: These systems pose negligible risk to users and are, therefore, not bound by any particular obligations.
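Purely as an illustration of this tiered logic (not text from the Act), the sketch below maps each risk level to its headline regulatory consequence. The enum, dictionary, and obligation summaries are our own hypothetical labels paraphrasing the Regulation, not legal terminology:

```python
from enum import Enum

class RiskLevel(Enum):
    """Illustrative labels for the AI Act's four risk tiers (hypothetical names)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical one-line summaries of the consequence attached to each tier,
# paraphrasing the Regulation's risk-based approach.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskLevel.HIGH: "Allowed only after a conformity assessment and compliance "
                    "with European harmonisation standards.",
    RiskLevel.LIMITED: "Allowed, subject to transparency rules (e.g. disclosing "
                       "that the user is interacting with an AI system).",
    RiskLevel.MINIMAL: "Allowed, with no particular obligations under the Act.",
}

def market_access_summary(level: RiskLevel) -> str:
    """Return the high-level regulatory consequence for a given risk tier."""
    return OBLIGATIONS[level]

if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value:>12}: {market_access_summary(level)}")
```

The point of the mapping is that classification drives everything else: once a system's tier is determined, its market-access conditions follow directly.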

Additionally, the Act includes provisions for General-Purpose AI (GPAI) models, defined as “computer models that, through training on a vast amount of data, can be used for a variety of tasks, either singly or included as components in an AI system.” Due to their broad applicability and potential systemic risks, GPAI models are subject to stricter requirements regarding effectiveness, interoperability, transparency, and compliance.

Prohibited AI practices

The Regulation explicitly bans the use of AI systems that:

  • Employ subliminal, manipulative, or deceptive techniques beyond a person’s awareness.
  • Exploit vulnerabilities of individuals or specific groups of individuals.
  • Assign social scores that evaluate or classify individuals or groups based on their social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated social contexts or treatment disproportionate to that behaviour.
  • Use “real-time” remote biometric identification in public spaces, except in cases of:
    • Targeted searches for victims of kidnapping, human trafficking, and exploitation, or missing persons.
    • Preventing imminent threats to life or significant harm.
    • Identifying or locating persons suspected of serious crimes.

Supervision and enforcement

The AI Act mandates the establishment of authorities to ensure compliance:

  • National supervisory authorities: Designated by each Member State to enforce the Regulation at the national level.
  • European Artificial Intelligence Board: Coordinates national authorities and ensures consistent application across Europe.
  • Market surveillance authorities: Monitor the compliance of AI systems placed on the market.

Regulatory sandboxes

The Act introduces “Regulatory Sandboxes,” described as a “controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.”

These sandboxes promote innovation by allowing controlled experimentation and testing of AI systems, offering a degree of regulatory flexibility that helps both providers and authorities improve their understanding of the technology and ensure compliance with EU law.

The EU is also establishing physical and virtual Test and Experimentation Facilities (TEFs) for large-scale AI testing in sectors like agri-food, health, manufacturing, and smart cities.

Control and sanctions

The Regulation outlines a variable sanctions system based on the severity and nature of infringements and on the operator’s turnover. The approach is intended to be proportionate and dissuasive while taking into account the interests of SMEs and start-ups. Member States have discretion in setting sanctions within the limits set by the EU, and the Commission will issue implementing and delegated acts, as well as guidelines, to support the standardisation process.
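To make the proportionality mechanics concrete, here is a minimal sketch of the fine ceilings set out in Article 99 of the published Regulation: for each infringement category, the cap is the higher of a fixed amount and a share of worldwide annual turnover, and the lower of the two for SMEs and start-ups. The tier amounts reflect the final text; the function and variable names are our own illustration:

```python
# Illustrative sketch of the AI Act's administrative fine ceilings
# (Article 99 of the published Regulation). The tier figures reflect the
# final text; the structure and names here are our own illustration.

FINE_TIERS = {
    # tier: (fixed ceiling in EUR, share of worldwide annual turnover)
    "prohibited_practices":  (35_000_000, 0.07),  # violations of the Article 5 bans
    "other_obligations":     (15_000_000, 0.03),  # e.g. high-risk requirements
    "incorrect_information": (7_500_000,  0.01),  # misleading info to authorities
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for a fine: the higher of the fixed amount and the turnover
    share, but the *lower* of the two for SMEs and start-ups."""
    fixed, share = FINE_TIERS[tier]
    turnover_based = share * annual_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# Example: a large provider with EUR 2 billion turnover breaching a ban
# faces a ceiling of 7% of turnover, i.e. EUR 140,000,000.
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")
```

The turnover-linked cap is what makes the regime “dissuasive” for large operators while the SME rule keeps it proportionate for smaller ones.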

Criticism

Initial criticisms of the AI Act include:

  • Ambiguities regarding the roles and responsibilities of different actors, particularly for open-source AI models.
  • Insufficient consumer protection, which could be strengthened through a broader definition of AI systems and through fundamental principles and obligations applicable to all AI systems.
  • Lack of provisions addressing systemic sustainability risks and potentially ineffective rules on prohibited and high-risk practices.
  • Insufficient enforcement structures and coordination between the relevant authorities.

Next steps

The approval of the AI Act is a historic milestone, positioning the EU as the first jurisdiction to adopt a comprehensive legal framework for AI. The AI Act is expected to be published in the Official Journal of the EU in the coming days and will enter into force 20 days after publication.

The AI Act’s rules will apply in stages: the bans on unacceptable-risk systems take effect six months after entry into force, governance rules and obligations for general-purpose AI apply after 12 months, and the Regulation becomes generally applicable after 24 months, with an extended 36-month transition for certain high-risk systems.

