Wednesday, December 25, 2024

What impact could the EU’s AI Act have on jobs?


By Amanda Kavanagh

Between fear of job losses and cybersecurity concerns, artificial intelligence (AI) has prompted a slew of negative headlines over the last few years.


It’s been a hot topic at the EU level too, and last month, after 36 hours of negotiations, officials reached a provisional deal on comprehensive laws regulating the use of AI.

The driving principle behind the laws is to regulate AI by its capacity to cause harm to society following a ‘risk-based’ approach, so the higher the risk, the stricter the rules.

As the first legislative proposal of its kind, these laws may set a global standard for AI regulation, just as the General Data Protection Regulation (GDPR) has done for data privacy.

The main elements of the preliminary agreement include rules for high-impact general-purpose AI models that may pose systemic risks, as well as a revised governance structure for high-risk AI systems.

It includes an extension of prohibitions but with the possibility for certain law enforcement agencies to use remote biometric identification (RBI) in public spaces – RBI systems can identify people at a distance by comparing unique biometric attributes, like faces and gait, with databases.


The provisional agreement also strengthens the protection of rights: deployers of high-risk AI systems must carry out fundamental rights impact assessments before deployment. AI systems that pose minimal risk will face only light transparency requirements, such as stating that content was generated using AI.

Although the AI Act will be voted on early this year, the actual legislation will not be enacted until 2025 at the earliest. So while the act will have an impact on AI jobs, there’s plenty of time to prepare.

Here’s what we know, and which companies are already hiring.

1. Legal and compliance will be in demand

As AI systems are developed and distributed through complex value chains, the agreement includes clarifications on the allocation of responsibilities and the roles of various actors in those chains.

Fines for violations of the AI Act will also be hefty, set as a fixed amount or a percentage of the offending company’s global annual turnover from the preceding financial year, whichever is higher.

This equates to €35 million or 7 per cent for using prohibited AI applications, €15 million or 3 per cent for breaking the AI Act’s requirements, and €7.5 million or 1.5 per cent for providing false information.
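The tiers above scale with company size. As a rough illustration (a sketch, not legal guidance, and assuming the "whichever is higher" rule described above), the applicable maximum can be computed like this:

```python
# Sketch of the AI Act's fine tiers, assuming the penalty is the fixed
# amount or the percentage of global annual turnover, whichever is higher.

FINE_TIERS = {
    "prohibited_ai_use": (35_000_000, 0.07),   # €35m or 7%
    "obligation_breach": (15_000_000, 0.03),   # €15m or 3%
    "false_information": (7_500_000, 0.015),   # €7.5m or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine in euros for a given violation tier."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)

# A company with €2bn global turnover using a prohibited AI application:
print(max_fine("prohibited_ai_use", 2_000_000_000))  # 7% of €2bn = €140m > €35m
```

For a small firm, the fixed amount dominates; for a large multinational, the percentage does, which is why compliance expertise commands a premium at bigger AI organisations.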

As such, legal minds and compliance experts will be especially valued employees in AI organisations operating within the EU.

In Paris, Goldman Sachs is hiring a FICC Compliance VP with an excellent understanding of ACPR/AMF, EU and US regulations, who will be responsible for providing regulatory interpretation and advice across the firm and strengthening its compliance.

2. High-risk AI systems will be restricted, and some banned

High-risk AI systems will be subject to requirements and obligations to gain access to the EU market, and some will be banned from the EU if the risk level is deemed unacceptable.

For example, the provisional agreement bans cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in education and the workplace, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious belief, and some cases of predictive policing for individuals – the use of predictive analytics to identify likely criminal activity.


Cautious job seekers in the European AI industry will focus on organisations with lower-risk AI deployments and on companies that have conducted proactive risk assessments.

As part of its ambition to become the market leader in business AI by 2025, SAP is hiring for several roles, including a Berlin-based AI Strategy Senior Consultant / AI Strategy Expert who will work with cross-functional teams to identify opportunities, assess risks and benefits, and drive the adoption of AI solutions.

3. Law enforcement authorities will have some exceptions

When it comes to high-risk AI tools, there will be exceptions for certain law enforcement authorities, which, subject to appropriate safeguards, may in cases of urgency use systems that have not passed the conformity assessment procedure.

The use of real-time remote biometric identification (RBI) systems in publicly accessible spaces is especially contentious.

Under this agreement, law enforcement officials may be granted special permissions to deploy these systems in limited circumstances, for example in the prevention of genuine threats, such as terrorist attacks, and searches for suspects of the most serious crimes.

Unless working in policing or defence, strategic AI job seekers will avoid upskilling in higher-risk systems, where jobs will be more limited.

4. Overemployed people may need to go back to basics

Specific guidelines, including transparency requirements, have been agreed for foundation models: large systems capable of generating video, text, images and computer code, and of conversing in natural language.

This may affect over-employed workers – those holding two or more jobs at once – who rely on AI to automate tasks across their roles.

5. New roles will be available at EU level

There will be a number of new positions opening at the EU level relating to governance.

A new AI Office will enforce the new rules, oversee the most advanced AI models, and will contribute to fostering standards and testing practices.

This office will be guided by a scientific panel of independent experts and an AI board composed of member states’ representatives.

Plus, an advisory forum will be established to provide technical expertise to the AI Board, and this will include industry representatives, SMEs, start-ups, civil society and academia.

6. It aims to stimulate AI innovation

Several measures in support of innovation have been substantially modified in the agreement, including clarification that AI regulatory sandboxes should also allow the testing of innovative AI systems in real-world conditions, subject to specific safeguards.

Specifically, it includes a list of actions to be undertaken to alleviate the administrative burden for small companies.

Innovative US multinational Intel is hiring an AI Frameworks Engineer to work onsite at its campus in Leixlip, close to Ireland’s capital city. The successful candidate will be responsible for building machine learning workflows and the infrastructure necessary to productise AI models and sustain them in production.

If you’re looking for an AI opportunity in 2024, check out the Euronews Jobs Board for companies hiring now.
