Analysis

November 1, 2023

What is going on with the EU AI Act?

If lawmakers don’t pass the regulations this year, they may have to wait until after new elections


Zosia Wanat


You’ve been hearing about it for four years — but the EU’s flagship AI law is only now taking its final shape. 

Lawmakers are currently locked in negotiations, and time is of the essence. If the talks aren’t wrapped up by the end of 2023, final adoption of the law might have to wait until after the election of a new European Parliament and Commission, meaning roughly until the end of next year. The informal deadline for concluding the talks is December 6, but there’s still much work to be done.

It usually takes years to make policy in the EU, and in the case of the AI Act, much of the work was done before the emergence of OpenAI’s ChatGPT late last year. 


Policymakers have had to play catch-up with the new challenges posed by the popularity and scope of generative AI, which has meant adding some last-minute tweaks and new rules.

“Even for us as we’re sitting at the table it’s not always clear where we are [in the negotiations] because the text is just extremely long and complex,” Kai Zenner, digital policy adviser at the European Parliament, said earlier this month at a conference.

So, what are EU lawmakers talking about and where are the sticking points? 

Foundation models

Negotiators are understood to have made good progress on many technical issues, but some key political questions are still up in the air: which technologies should be defined as “high risk”; to what extent AI can be used in law enforcement; and how the EU should regulate the startups that produce AI foundation models.

This last issue, on foundation models, might be especially important for startups.

On top of the extra requirements the law would impose on high-risk use cases of the technology, such as facial recognition, negotiators are now turning to extra rules for companies that produce foundation models (large-scale, adaptable AI models, such as large language models), regardless of how those models are later used.

"Having seen the potential systemic risks that those systems could introduce, it makes sense that we demand regulation,” says Carme Artigas, Spain’s AI minister who has been involved in the negotiations on behalf of the Council of the EU. 

These requirements were not in the European Commission’s earlier proposals, and tech companies and European startups are not happy about them.

“We see AI as a brick. You can build hospitals [or] schools with it, but you can also throw it through a window,” says Boniface de Champris, policy manager at the Computer & Communications Industry Association (CCIA), a tech lobby group.

“Common sense tells you you should not focus so much on the brick, but on the use of the brick.”

Allied for Startups, a Brussels-based lobbying group, says that any attempt to regulate general purpose AI and foundation models will have “detrimental effects” on the startup ecosystem.

“It fuels legal ambiguity at a time when smaller players, such as startups, need regulatory stability more than ever… burdensome general purpose AI obligations affect nascent startups and their ability to scale,” the group said in a policy paper published in October.

Two-tier approach 

When it comes to regulating businesses that work on foundation models, policymakers seem to be leaning towards introducing a threshold, so that the new requirements would apply only to bigger companies.


“The AI Act could have some baseline requirements for the dominant market players out there that are releasing very powerful models,” Zenner said. 

But it’s still unclear what criteria companies would have to meet to cross this threshold: the number of users, the number of downloads, or the investment put into developing the model.

“What is the threshold? How it's defined?” says Cédric O, a cofounder of Mistral, a French startup working on large language models.

“[The law’s impact] depends, at the end of the day, on what the obligations are beyond the threshold and above the threshold.”

O adds that he’s now discussing different scenarios with the European Commission, and that he prefers a use-case-based approach (regulating only high-risk use cases of the technology).

But the threshold approach might also be harmful, de Champris at the CCIA warns. Deciding which companies are regulated based on their size could prevent them from scaling, he says, by incentivising companies to remain small to avoid the burden of regulation.

“Do we really want in the EU to incentivise the development of unsophisticated models? Do we have to tell our companies 'you shouldn't have too many users or too many downloads, otherwise, you will face much more stringent requirements?'” he says.

“It's a self-defeating approach and I don't think it represents or aligns with the ambition of European companies.” 

Cristina Gallardo contributed reporting.

Zosia Wanat

Zosia Wanat is a senior reporter at Sifted. She covers the CEE region and policy. Follow her on X and LinkedIn.