The European Union has approved a ground-breaking law aiming to harmonise rules on artificial intelligence, the so-called artificial intelligence act. The flagship legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules. It is the first of its kind in the world and could set a global standard for AI regulation.
The new law aims to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market by both private and public actors. At the same time, it aims to ensure respect for the fundamental rights of EU citizens and to stimulate investment and innovation in artificial intelligence in Europe. The AI act applies only to areas within EU law and provides exemptions, for instance for systems used exclusively for military and defence purposes or for research.
“The adoption of the AI act is a significant milestone for the European Union,” says Mathieu Michel, Belgian secretary of state for digitisation, administrative simplification, privacy protection, and the building regulation. “This landmark law, the first of its kind in the world, addresses a global technological challenge that also creates opportunities for our societies and economies. With the AI act, Europe emphasises the importance of trust, transparency and accountability when dealing with new technologies, while at the same time ensuring this fast-changing technology can flourish and boost European innovation.”
The new law categorises different types of artificial intelligence according to risk. AI systems presenting only limited risk would be subject to very light transparency obligations, while high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. AI systems used, for example, for cognitive behavioural manipulation or social scoring will be banned from the EU because their risk is deemed unacceptable. The law also prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to categorise people by characteristics such as race, religion, or sexual orientation.
The AI act also addresses the use of general-purpose AI (GPAI) models. GPAI models not posing systemic risks will be subject to limited requirements, for example with regard to transparency, while those posing systemic risks will have to comply with stricter rules.
To ensure proper enforcement, several governing bodies are set up:
- An AI Office within the Commission to enforce the common rules across the EU
- A scientific panel of independent experts to support the enforcement activities
- An AI Board with member states’ representatives to advise and assist the Commission and member states on consistent and effective application of the AI Act
- An advisory forum for stakeholders to provide technical expertise to the AI Board and the Commission
The fines for infringements of the AI act are set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. SMEs and start-ups are subject to proportional administrative fines.
Before a high-risk AI system is deployed by certain entities providing public services, its impact on fundamental rights will need to be assessed. The regulation also provides for increased transparency regarding the development and use of high-risk AI systems. High-risk AI systems, as well as certain public-entity users of such systems, will need to be registered in the EU database for high-risk AI systems. In addition, users of an emotion recognition system will have to inform natural persons when they are being exposed to such a system.