Thursday, September 19, 2024

Internal audit’s role in the European Union’s new Artificial Intelligence Act


Scope and applicability

The EU AI Act provides a clear definition of what makes up an AI system, encompassing machine learning, logic-based and knowledge-based approaches, and systems capable of inference from data. Internal auditors must ensure that their organizations’ AI systems either align directly with or can be mapped to these definitions. Understanding these distinctions is the first step in providing assurance and insights related to compliance.
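
As a practical starting point, many audit teams build or test an AI system inventory that tags each system with the technique category it falls under. The sketch below is a minimal, hypothetical illustration; the record fields and example systems are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Technique(Enum):
    # Technique categories reflecting the Act's definitional scope.
    MACHINE_LEARNING = "machine learning"
    LOGIC_BASED = "logic-based"
    KNOWLEDGE_BASED = "knowledge-based"


@dataclass
class AISystemRecord:
    name: str
    owner: str
    technique: Optional[Technique]  # None = not yet mapped to an Act definition
    infers_from_data: bool


inventory = [
    AISystemRecord("resume-screener", "HR", Technique.MACHINE_LEARNING, True),
    AISystemRecord("pricing-rules-engine", "Finance", Technique.LOGIC_BASED, False),
    AISystemRecord("vendor-chatbot", "Procurement", None, True),
]

# Records that cannot be mapped to a definition need follow-up before any
# compliance conclusion can be drawn.
unmapped = [r.name for r in inventory if r.technique is None]
print(unmapped)  # ['vendor-chatbot']
```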

Embracing a risk-based approach

The EU AI Act takes a risk-based approach, categorizing AI systems by the level of risk they pose to health, safety, and fundamental rights. One of the first steps for internal auditors is to verify that their organizations are not engaging in prohibited AI practices, such as subliminal, manipulative, or deceptive techniques, discriminatory biometric categorization, or expanding facial recognition databases through untargeted scraping of images. Identifying, understanding, and assessing high-risk AI systems, especially those used in critical areas such as healthcare, law enforcement, and essential services, is vital.
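
To illustrate, a first-pass screen could tag systems against an internal tier taxonomy before deeper review. The sketch below is a hypothetical triage helper; the practice labels and domain list are simplified assumptions loosely drawn from the Act's categories, not a legal determination.

```python
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # e.g., transparency obligations only
    MINIMAL = "minimal"


# Simplified, illustrative labels; real screening maps to the Act's annexes.
PROHIBITED_PRACTICES = {
    "subliminal manipulation",
    "untargeted facial-image scraping",
}
HIGH_RISK_DOMAINS = {"healthcare", "law enforcement", "essential services"}


def screen(use_case: str, domain: str) -> RiskTier:
    """Rough first-pass tiering for audit triage, not a legal determination."""
    if use_case in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL


print(screen("credit scoring", "essential services").value)  # high
```

A screen like this only routes systems for review; the tier assigned to any individual system still requires legal and management judgment.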

Meeting mandatory requirements for high-risk AI systems

High-risk AI systems are subject to stringent requirements under the AI Act. Internal auditors must assess whether robust risk management systems are in place, including processes for identifying, assessing, and mitigating the risks these systems pose. The data that feeds AI systems matters just as much, so assessing the organization’s data governance structures and processes is essential. Auditors should verify that high-quality data is used, appropriate documentation is maintained, and applicable record-keeping practices are followed. Auditors should consider the following questions (a short scripted sketch follows the list):

  • Where did the data come from?
  • Are the processes and controls that produced the data designed and operating effectively?
  • Is the data complete, accurate, and reliable?
  • How is the data being used by the AI?
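
Several of these questions can be tested directly against the data itself. The sketch below, assuming a pandas DataFrame with a hypothetical record_id key, shows the kind of scripted completeness and reliability checks an auditor might run; provenance and usage questions still require inquiry and documentation review.

```python
import pandas as pd


def data_quality_report(df: pd.DataFrame, key: str = "record_id") -> dict:
    """Completeness and reliability checks; provenance needs separate evidence."""
    return {
        "row_count": len(df),
        # Completeness: share of missing values per column.
        "missing_rate": df.isna().mean().round(2).to_dict(),
        # Reliability: duplicate keys often signal broken upstream controls.
        "duplicate_keys": int(df[key].duplicated().sum()),
    }


sample = pd.DataFrame({"record_id": [1, 2, 2], "income": [52_000, None, 48_000]})
print(data_quality_report(sample))
```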

Ensuring compliance and continuous monitoring

Compliance with the EU AI Act does not end with the initial deployment of AI systems; continuous monitoring is necessary to maintain it. Internal auditors must verify that high-risk AI systems undergo the required conformity assessments and understand when external, third-party evaluations are required. Auditors should also assess whether mechanisms are in place for continuous monitoring, including incident reporting and timely corrective action. By taking a proactive approach, auditors can help ensure that potential risks are addressed and that these systems remain compliant throughout their lifecycle.
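
To make continuous monitoring concrete, the sketch below shows one simple control an auditor might expect to see in a post-deployment monitoring plan: comparing live model outputs with a baseline and escalating when drift exceeds a tolerance. The tolerance value and the escalation path are illustrative assumptions.

```python
import statistics

DRIFT_TOLERANCE = 0.15  # hypothetical tolerance on mean score shift


def check_drift(baseline: list, live: list) -> float:
    """Return the observed shift; escalate when it breaches the tolerance."""
    shift = abs(statistics.mean(live) - statistics.mean(baseline))
    if shift > DRIFT_TOLERANCE:
        # In practice this would open an incident so that corrective action
        # is taken and documented in a timely manner, per the monitoring plan.
        raise RuntimeError(f"Score drift {shift:.2f} exceeds tolerance")
    return shift


print(check_drift(baseline=[0.42, 0.40, 0.45], live=[0.44, 0.41, 0.43]))
```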

Upholding human oversight

The EU AI Act makes human oversight a critical requirement so that organizations can prevent unintended consequences and maintain trust in their AI systems. Internal auditors can verify that AI systems are designed to support, not replace, human decision-making and that measures for human control are built in throughout the lifecycle. An important component of this is verifying that users of AI systems are adequately trained to understand and manage them.
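
One design pattern auditors can look for when testing human oversight is a confidence-based escalation gate, where low-confidence decisions are routed to a trained human reviewer rather than auto-approved. The sketch below is a minimal, hypothetical example; the threshold and queue structure are assumptions.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 0.90  # assumed cutoff below which a human must decide


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def route(self, case_id: str, confidence: float) -> str:
        if confidence < REVIEW_THRESHOLD:
            self.pending.append(case_id)  # a trained reviewer decides
            return "escalated-to-human"
        return "auto-approved"  # still logged for periodic human sampling


queue = ReviewQueue()
print(queue.route("case-001", 0.72))  # escalated-to-human
print(queue.route("case-002", 0.97))  # auto-approved
```

Evidence that escalated cases are actually worked, and that auto-approved decisions are periodically sampled, is what turns a gate like this into effective oversight.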
