Caution and control are being prioritised over economic and technological opportunity
Earlier this year, the European Parliament voted in favour of the EU Artificial Intelligence Act, or “AI Act”. With this, the EU becomes the first jurisdiction to comprehensively regulate AI. The EU hopes that its legislative approach will influence other jurisdictions through the so-called “Brussels effect”, whereby regulated entities, especially corporations, end up complying with EU laws even outside the EU, largely due to the size of the EU’s single market.
Observers like the Brookings Institution’s Alex Engler are sceptical that this piece of legislation will effectively become a global standard, expecting it to “only moderately shape international regulation”, because “high-risk AI systems for human services will be highly influenced if they are built into online or otherwise internationally interconnected platforms, but many AI systems that are more localized or individualized will not be significantly affected.”
Also, in 2022, Andrea Renda of the centre-left think tank Foundation for European Progressive Studies (FEPS) warned that for the EU to “achieve prominence as a global regulator in the digital space, (…) unilateral rule making will not be a viable strategy in the future and that the EU will be able to retain a leading role only if it develops a coalition-building strategy”. He also stressed that “the EU cannot and should not become an autarchic player in cyberspace and should continue to make [use] of non-EU technologies to the extent that they do not compromise EU values or pose dependence, security or sustainability problems.”
Yet this is precisely the road the EU has taken with the AI Act. Jake Denton of the Heritage Foundation points out that the act is “failing to differentiate between closed source and open source AI”. As a result, it “threatens to ensnare transparent projects in the same web of regulations as their closed source competition”. Last year, when the act was being drafted, more than 150 executives from companies like Renault, Heineken, Airbus, and Siemens warned in a joint letter to the EU that disproportionate compliance costs and liability risks for foundation AI models may force AI providers to withdraw from the EU altogether. One of the signatories, Jeannette zu Fürstenberg, founding partner of La Famiglia VC, added that the legislation “has catastrophic implications for European competitiveness.”
Oddly, after an interinstitutional deal was reached on the AI Act in December 2023, French President Emmanuel Macron voiced severe criticism. He said: “We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea.”
He made clear that his intervention stemmed from the view that his own country, France, is “probably the first country in terms of artificial intelligence in continental Europe. We are neck and neck with the British. They will not have this regulation on foundational models. But above all, we are all very far behind the Chinese and the Americans.” Still, after some tinkering, the AI Act was ultimately adopted.
The legislation provides that a general-purpose AI model can be classified as posing “systemic risk” on the basis of vague criteria like “high impact capabilities”. What’s more, as legal expert Innocenzo Genna puts it, “we are in a field in which the [European] Commission, which has the task of identifying, through the AI Office, the systemic ‘General Purpose AI Models’ [‘GPAIs’], will have highly discretionary and therefore preponderant power. It will therefore be able to conduct a real industrial policy, simply deciding which GPAIs can be designated as systemic and which cannot.”
In other words: maybe post-Brexit Britain will become Singapore-on-Thames not by engaging in a bonfire of regulations, but simply by refusing to copy the EU’s most innovation-hostile regulatory novelties.
In the UK too, however, the debate on how to regulate AI is raging. The UK’s Financial Conduct Authority (FCA) is looking in particular at big tech companies’ access to extensive data. Gal Ringel, co-founder and CEO at Mine, a global data privacy management firm, thinks that “the U.K. seems to be taking a different approach to innovation than the EU”, as it is “taking the approach of working hand-in-hand with Big Tech”, instead of “[regulating] technology before it reaches the market”, as the EU does.
Access to data is ultimately key to AI innovation, and a number of firms are developing cutting-edge solutions to catalyse the next stage of AI development. France-based start-up Nfinite, for example, which creates visuals from 3D product modelling, is committed to advancing the promising field of “synthetic data”. As the name suggests, synthetic data is created artificially with the help of algorithms, and could become a critical fuel for training generative AI models like OpenAI’s DALL-E. Unsurprisingly, the market for synthetic data is expected to expand massively in the years to come.
According to Nfinite’s founder, Alexandre de Vigan, synthetic data makes it possible to create images in abundance, in an unlimited, economical, and secure way, while avoiding copyright problems. His firm’s synthetic imaging, de Vigan explained, “is particularly aimed at very large enterprise players in need of several tens of thousands of references. Our platform allows them to generate several dozen visuals per product, guaranteeing absolute consistency and respect for their brand image.”
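To make the idea concrete, here is a minimal, purely illustrative Python sketch of the principle behind synthetic data (not Nfinite’s actual 3D rendering pipeline, which is proprietary): generative parameters are sampled at random, an image is rendered from them, and those same parameters serve as perfectly accurate labels.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def render_synthetic_sample(size: int = 64):
    """Render one synthetic 'product' image: a filled rectangle with
    randomly sampled position, dimensions, and colour. The sampled
    parameters double as perfectly accurate labels."""
    image = np.zeros((size, size, 3), dtype=np.uint8)
    # Sample the generative parameters; these ARE the labels.
    w = int(rng.integers(8, size // 2))
    h = int(rng.integers(8, size // 2))
    x = int(rng.integers(0, size - w))
    y = int(rng.integers(0, size - h))
    colour = rng.integers(0, 256, size=3, dtype=np.uint8)
    # Render the image deterministically from those parameters.
    image[y:y + h, x:x + w] = colour
    label = {"x": x, "y": y, "w": w, "h": h, "rgb": colour.tolist()}
    return image, label

# An arbitrarily large, perfectly labelled dataset, generated on demand,
# with no scraping, no annotation costs, and no copyright exposure.
dataset = [render_synthetic_sample() for _ in range(1_000)]
```

Because every sample is rendered from known parameters, the dataset is unlimited in size, labelled without human annotation, and free of copyright encumbrances, which is precisely the appeal de Vigan describes.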
This kind of solution has the potential to be a game-changer for AI, given the growing need for quality as well as quantity in data — but will increasingly stringent regulation unnecessarily constrain innovation in the EU and elsewhere? In Britain as well, calls for more restrictions on data sharing are on the rise.
Ultimately, with its heavy-regulation approach, also visible in the recently adopted Digital Services Act (DSA), the EU is unlikely to end up exporting its innovation-hostile regulatory model, but may instead end up as a digital vassal.
In that respect, it is useful to recall the experience of the EU’s stringent General Data Protection Regulation (GDPR), which did end up being adopted globally to a degree. Back in Europe, however, mainstream politicians are less than thrilled by it. In 2021, five years after the introduction of GDPR, German CDU MEP Axel Voss concluded that “GDPR is seriously hampering the EU’s capacity to develop new technology and desperately needed digital solutions”, warning that “Europe’s obsession with data protection is getting in the way of digital innovation”. He noted that although European Commission President Ursula von der Leyen has called data our new “global currency”, “the vast majority of data is being stored outside the EU, which risks making it impossible for us to be competitive in any form of digital innovation, undermining our future economic prosperity.”
That was three years ago. Yet the EU simply continues down the same misguided path of overregulating the digital sphere. The only thing left for the UK and others to do is not to follow it.