Monday, November 25, 2024

Limited-Risk AI—A Deep Dive Into Article 50 of the European Union’s AI Act

This blog post focuses on the transparency requirements associated with certain limited-risk artificial intelligence (AI) systems under Article 50 of the European Union’s AI Act.

As explained in our previous blog post, the AI Act’s overall risk-based approach means that, depending on the level of risk, different requirements apply. In total, there are four levels of risk: (1) unacceptable risk, in which case AI systems are prohibited (see our blog post on prohibited AI practices for more details); (2) high risk, in which case AI systems are subject to extensive requirements, including regarding transparency; (3) limited risk, which triggers only transparency requirements; and (4) minimal risk, which does not trigger any obligations.

We analyze below the transparency requirements that apply to various players in relation to limited-risk AI systems within the meaning of the AI Act. The European Commission will assess the need to amend the list of limited-risk AI systems every four years (Article 112).

Some obligations apply to providers of AI systems, i.e., legal entities or natural persons who develop AI systems, or have them developed, and place them on the market or put them into service, whether for payment or free of charge. Other obligations apply to deployers, i.e., legal entities or natural persons who use AI systems under their authority in the course of a professional activity. We discuss providers’ and deployers’ obligations separately below.

Provider Obligations

  • “Hey, I’m a Chatbot.” Providers must ensure that AI systems intended to interact directly with natural persons, such as chatbots, are designed and developed in such a way that the individuals concerned are informed that they are interacting with an AI system (a minimal sketch of such a disclosure follows below).

This requirement does not apply where this is obvious for reasonably well-informed, observant and circumspect individuals, taking into account the circumstances and the context of use.
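
By way of illustration only, the sketch below shows how a chatbot might surface the required notice no later than the first interaction. The AI Act prescribes neither specific wording nor a specific mechanism; the disclosure text and function names below are our own assumptions.

```python
# Illustrative only: the AI Act prescribes no specific wording or mechanism
# for the chatbot disclosure; the text and names below are hypothetical.

DISCLOSURE = "Please note: you are interacting with an AI system, not a human."

def start_chat_session() -> list[str]:
    """Open a conversation with the transparency notice delivered first,
    i.e., no later than the time of the first interaction."""
    return [DISCLOSURE, "How can I help you today?"]

if __name__ == "__main__":
    for message in start_chat_session():
        print(message)
```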

  • AI-Generated Content. Providers of AI systems, including general-purpose AI systems, that generate synthetic audio, image, video or text content must ensure that their systems’ outputs are marked in a machine-readable format and detectable as artificially generated or manipulated. Such technical solutions must be effective, interoperable, robust and reliable as far as technically feasible. There is little clarity about what this means in practice, so the European Commission’s forthcoming guidance (see below) will be particularly helpful; an illustrative sketch of one possible marking approach follows the exception described next.

This obligation does not apply to AI systems performing an assistive function for standard editing or that do not substantially alter the input data provided by deployers, or the semantics thereof. Again, this exception will need to be further refined in the next few months.
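
Pending that guidance, any marking technique is necessarily speculative. As a minimal sketch, the code below embeds a provenance tag in a PNG image’s metadata using the Pillow library; the tag names are our own assumptions. Plain metadata is fragile (it is typically lost when a file is re-encoded), which is one reason the act insists on robust, interoperable solutions; real-world implementations are more likely to rely on provenance standards such as C2PA manifests or on watermarking.

```python
# A minimal sketch, assuming a simple metadata-based marking for PNG files.
# The AI Act does not prescribe a format; the tag names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of a PNG image carrying a machine-readable provenance tag."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")           # hypothetical tag name
    metadata.add_text("generator", "example-model-v1")  # hypothetical value
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect whether the provenance tag is present and set."""
    return Image.open(path).text.get("ai-generated") == "true"
```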

  • Other Transparency Obligations. For completeness, note that transparency obligations also exist beyond Article 50 of the AI Act, namely for high-risk AI systems:
    • Providers of high-risk AI must be transparent vis-à-vis deployers. To that end, providers must design high-risk AI systems to enable deployers to understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations.
    • In addition, providers of general-purpose AI models must implement transparency measures, including drawing up and keeping up to date technical documentation and providing information about the model to downstream providers that intend to integrate it into their own AI systems.

Deployer Obligations

  • Deepfakes. The AI Act defines deepfakes as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, or other entities or events and would falsely appear to a person to be authentic or truthful.

Businesses using deepfakes in the course of a professional activity must disclose that the content has been artificially generated or manipulated. This obligation does not apply where the use is authorized for law enforcement purposes. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or program, the transparency obligations are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work.

In the run-up to the June 2024 European Parliament elections, the European Union was concerned about election-related disinformation well before the AI Act became applicable. For this reason, European political parties pledged in April 2024 not to use deepfakes in their campaigns. Later that month, the European Commission issued guidelines under the Digital Services Act for Very Large Online Platforms (VLOPs) to mitigate risks to elections, including those posed by deepfakes. The commission recommends that VLOPs align their policies with the AI Act in advance of its entry into application and, in particular, that they clearly label deepfakes or otherwise make them distinguishable through prominent markings.
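
As an illustration of the kind of prominent marking the commission appears to have in mind, the sketch below stamps a visible disclosure banner onto an image with the Pillow library. The banner text, size and placement are our own assumptions; neither the AI Act nor the guidelines mandate a specific visual format.

```python
# Illustrative only: no specific visual format for deepfake disclosures is
# mandated; the banner text and layout below are hypothetical.
from PIL import Image, ImageDraw

def add_visible_disclosure(src_path: str, dst_path: str,
                           label: str = "AI-generated content") -> None:
    """Stamp a prominent disclosure banner across the top of an image."""
    image = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    banner_height = max(24, image.height // 20)  # scale the banner to the image
    draw.rectangle([0, 0, image.width, banner_height], fill="black")
    draw.text((8, banner_height // 4), label, fill="white")
    image.save(dst_path)
```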

  • Text. Deployers of AI systems that generate or manipulate text published with the purpose of informing the public on matters of public interest must disclose that the text has been artificially generated or manipulated. This obligation does not apply where the use is authorized for law enforcement purposes, or where the AI-generated content has undergone a process of human review or editorial control and a natural or legal person holds editorial responsibility for the publication of the content.
  • Emotion Recognition and Biometric Categorization. Deployers of emotion recognition or biometric categorization systems, which qualify as high-risk AI systems, must inform the individuals exposed thereto of the operation of the system. This obligation does not apply to AI systems authorized for biometric categorization and emotion recognition for law enforcement purposes. Importantly, AI systems that infer individuals’ emotions in the workplace or in education institutions are prohibited, unless they are placed on the market or put into service for medical or safety reasons.
  • Other Transparency Obligations. As explained above, high-risk AI systems are also subject to transparency obligations laid down in provisions other than Article 50. Deployers of specific high-risk AI systems listed in the AI Act (e.g., those used in critical infrastructure, education and vocational training, employment, workers’ management, and access to self-employment) that make decisions, or assist in making decisions, related to natural persons must inform those persons that they are subject to the use of the high-risk AI system.

Transparency, Timing and Format

The information regarding the limited-risk AI systems discussed above must be provided in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. Other European Union or national laws may impose additional transparency obligations.

The European Commission’s AI Office will encourage and facilitate the drawing up of codes of practice at the EU level to support the effective implementation of the obligations regarding the detection and labeling of artificially generated or manipulated content. The commission is empowered to adopt implementing acts approving those codes of practice or, if it considers a code inadequate, specifying common rules for the implementation of these obligations.

General Data Protection Regulation (GDPR)

Where personal data is processed, the GDPR transparency requirements apply in addition to the AI Act obligations. This includes, in particular, transparency about the purpose(s) of the data collection.

Timeline

EU governments approved the AI Act on May 21, 2024, and the act was published in the Official Journal of the EU on July 12, 2024. The AI Act entered into force 20 days after publication, on August 1, 2024. The transparency requirements under Article 50 will apply two years after entry into force, i.e., from August 2, 2026.

Enforcement and Fines

National competent authorities will be responsible for ensuring compliance with the transparency requirements mentioned above. Noncompliance with these requirements is subject to administrative fines of up to €15 million or up to 3% of the operator’s total worldwide annual turnover for the preceding financial year, whichever is higher.
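
To make the “whichever is higher” mechanic concrete, the short sketch below computes the applicable ceiling; the turnover figure in the example is, of course, illustrative.

```python
# Illustrative arithmetic for the fine ceiling: the higher of EUR 15 million
# and 3% of total worldwide annual turnover for the preceding financial year.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable fine ceiling in euros."""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Example: EUR 1 billion turnover -> 3% = EUR 30 million, which exceeds the
# EUR 15 million floor, so EUR 30 million is the applicable ceiling.
assert max_fine_eur(1_000_000_000) == 30_000_000.0
```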

For more information on this or other AI matters, please contact one of the authors. The authors would like to thank David Llorens Fernandez for his assistance in preparing this alert.
