The use of artificial intelligence (AI) within the construction industry is rapidly expanding, driven by its potential to provide operational efficiencies, improve safety, and optimise resource management, write Sheena Sood and Sean O’Halloran.
The EU AI Act, recently approved by the European Parliament and expected to be formally adopted by the EU shortly, introduces stringent regulations around the use of AI that are set to reshape how such technologies are implemented and used across the European Union and potentially globally.
AI utilisation in construction
In construction, AI has the scope to be transformative, notably in areas like automated risk assessment, safety monitoring, and resource allocation. AI has been used to enhance the precision of project timelines, budget forecasts, and compliance with safety regulations.
In addition, human-centred applications, particularly AI-powered surveillance systems, have already been deployed on sites to ensure health and safety compliance and to minimise the risk of theft and trespassing.
Background to the AI Act
Initially proposed by the European Commission in April 2021, the AI Act aims to standardise AI governance across EU member states. Designed to address concerns about AI risks to fundamental rights, the EU intends that the AI Act will foster responsible innovation within a well-defined legal framework without unduly hindering technological advancement.
As a regulation, the AI Act will have a direct effect across all EU member states, ensuring a unified approach to AI governance within the EU.
Definition of AI for the first time in law
The AI Act enshrines a definition of AI systems. Under the Act, an AI system is characterised as a machine-based setup, crafted to function with varying degrees of autonomy, which is not only capable of adjusting post-deployment but is also designed to process inputs to produce outputs such as content, predictions, recommendations, or decisions.
The definition’s breadth encompasses existing technologies, suggesting that many currently operational systems might need re-evaluation to ensure compliance with the new regulation.
Scope of the AI Act
The AI Act classifies AI applications into four categories based on their risk to human safety and fundamental rights:
- Unacceptable risk;
- High risk;
- Limited risk;
- Minimal risk.
Dealing with these in order:
Unacceptable risk
AI systems that pose an unacceptable risk will be banned outright due to their significant potential for harm. Examples include intrusive surveillance systems, such as real-time remote biometric identification used in CCTV on sites.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk. The AI Act specifies that systems used in the management and operation of critical infrastructure, along with those used for worker management, will need to be registered in an EU database and assessed both before being put on the market and throughout their lifecycle.
Limited risk
AI applications posing limited risks will require specific transparency obligations to inform users they are interacting with an AI system. This might include certain AI applications in project management tools or customer interactions.
Minimal risk
The majority of AI systems will fall into this category and are subject to minimal regulatory constraints. These are typical AI applications that do not significantly impact individual or public rights, such as AI used for generating administrative workflows.
Technical and compliance standards
For high-risk AI applications, the AI Act sets out stringent technical and compliance standards, including detailed record-keeping, human oversight, and specific performance metrics. These measures are designed to ensure that high-risk AI systems are deployed in a manner that is transparent and accountable.
Governance and enforcement
Compliance with the AI Act will be subject to oversight by national authorities, supported by the AI Office within the European Commission. The AI Act also establishes the European Artificial Intelligence Board (EAIB).
Aimed at harmonising the enforcement of AI regulations across the EU, the EAIB will both advise the European Commission and facilitate the exchange of information and practices amongst national authorities.
Implementation timelines
Following the European Parliament’s approval of the AI Act, the Council of the European Union is expected to endorse the Act shortly, with it becoming law upon its publication in the Official Journal of the European Union, anticipated around May or June 2024. The overall timeline for the rollout of the AI Act is 24 months. However, compliance deadlines for certain AI uses vary from this timeline as follows:
- Unacceptable-risk AI must be phased out within six months of the commencement of the regulation (ie, likely by the end of 2024).
- High-risk AI must be compliant with the AI Act no later than 36 months from commencement.
- General-purpose AI must meet governance standards within 12 months.
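As an illustration only, the staggered deadlines above amount to simple date arithmetic from the regulation's entry into force. The sketch below assumes a hypothetical commencement date of 1 June 2024; the actual date depends on publication in the Official Journal:

```python
from datetime import date

# Hypothetical entry-into-force date; the real date depends on
# publication in the Official Journal (anticipated mid-2024).
ENTRY_INTO_FORCE = date(2024, 6, 1)

# Compliance offsets, in months, taken from the timeline above.
DEADLINES_MONTHS = {
    "unacceptable-risk ban": 6,
    "general-purpose governance": 12,
    "overall rollout": 24,
    "high-risk compliance": 36,
}

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month a given number of months after `d`."""
    total = d.year * 12 + (d.month - 1) + months
    year, month = divmod(total, 12)
    return date(year, month + 1, d.day)

for label, months in DEADLINES_MONTHS.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```

On the assumed June 2024 start, the six-month ban on unacceptable-risk AI would bite around December 2024, consistent with the "end of 2024" estimate above.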
Penalties for non-compliance
Non-compliance with the regulation may lead to steep penalties of up to €35m or 7% of global annual turnover, whichever is higher.
Conclusion
Similar to the GDPR, the EU’s AI Act aims to set a global precedent for AI governance, balancing innovation with strict regulatory oversight. Whether it meets these ambitious goals remains to be seen.
In any event, the broad scope and stringent penalties of the AI Act will demand careful attention to how AI technologies are deployed in construction, ensuring they are safe, reliable, and compliant. This will involve a thorough evaluation of existing AI-driven processes against the Act’s risk categories.
As construction professionals prepare for compliance, understanding these new regulations will be crucial. Beale & Co remain available to assist clients through this transition.
Authors: Sheena Sood and Sean O’Halloran, Beale & Co.