
December 7, 2023

EU to put extra guardrails on AI foundation models like GPT-4

High-level negotiations on the EU AI Act continue amid great expectation that a final deal will be clinched on Friday


Tech businesses and startups working on the most powerful AI models used in Europe, such as GPT-4, will have to comply with extra legal requirements, despite previous efforts by some European governments to leave such companies unregulated altogether, according to the draft compromise on the EU’s AI Act seen by Sifted. Open source models, however, would be exempt.

It makes the EU one of the first jurisdictions in the world to regulate these kinds of models, which have shot to the top of policymakers’ agendas amid a surge in AI innovation from companies like OpenAI, the creator of GPT-4. The draft compromise suggests that the companies most affected will be American rather than European, since European players like Mistral release open source models, which anyone can freely use or modify.

EU negotiators are still discussing the sticking points of the bloc’s flagship policy regulating AI, but Thursday’s provisional agreement on foundation models settled what had been one of the most contentious parts of the deal. Foundation models are AI systems like GPT-4 that are trained on large amounts of data using large quantities of computing power (known in the industry as compute).


According to the draft of the compromise seen by Sifted, the new law will introduce a two-tiered approach to these models, dividing them into general-purpose AI with or without “systemic risk”.

AI models bearing “systemic risk” are those with “high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks”. A model is presumed to have such capabilities when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25.

FLOPs here refers to the total number of floating point operations performed during training, which depends on the power of the supercomputer a model is trained on and for how long it runs: essentially, how much raw computing power has gone into the training process. (FLOPS, with a capital S, more commonly denotes operations per second, a measure of speed rather than of total work.)

It’s estimated that the model behind ChatGPT was trained using around 10^24 FLOPs, meaning that any model significantly more powerful than GPT-3.5 would be considered to bear systemic risk.
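To put the threshold in perspective, here is a minimal back-of-the-envelope sketch in Python, assuming the common approximation that training a transformer takes roughly 6 x parameters x training tokens FLOPs; the model names, parameter counts and token budgets below are illustrative assumptions, not figures from the Act or from Sifted’s reporting.

```python
# Rough illustration of the draft AI Act's 10^25 FLOPs threshold for
# "systemic risk" models. Uses the common back-of-the-envelope estimate
# that training a transformer costs roughly 6 * parameters * tokens FLOPs.
# All model figures below are illustrative assumptions, not official numbers.

SYSTEMIC_RISK_THRESHOLD = 1e25  # total training FLOPs, per the draft compromise


def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs (6 * N * D heuristic)."""
    return 6 * parameters * tokens


# Hypothetical models: parameter counts and token budgets are assumptions.
models = {
    "mid-size open model": (7e9, 2e12),      # 7B params, 2T tokens
    "frontier-scale model": (1.8e12, 13e12), # 1.8T params, 13T tokens
}

for name, (params, tokens) in models.items():
    flops = training_flops(params, tokens)
    flagged = flops > SYSTEMIC_RISK_THRESHOLD
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic risk: {flagged}")
```

On these assumed figures, the mid-size model lands around 10^22 FLOPs, well under the threshold, while the frontier-scale model crosses 10^25 and would be flagged.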

The law says that all providers of general-purpose AI should draw up and keep up to date technical documentation of the model and make it available to regulatory authorities on request. Other requirements include providing information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their own systems, so that they can properly understand the model’s capabilities.

Providers of general-purpose AI with systemic risk will also have to, for example: perform model evaluations in accordance with standardised protocols and tools reflecting the state of the art; keep track of, document and report information about serious incidents; ensure an adequate level of cybersecurity protection; and document and report the known or estimated energy consumption of the model.

The negotiators still have a number of issues to work through, and the final deal is expected to be announced on Friday.

The EU AI Act  

This week is the informal deadline for wrapping up the EU AI Act negotiations. Getting a deal this week is crucial to prevent the legislation from falling apart and being delayed until after next year’s European elections.

Originally, the act was designed to impose requirements on AI companies depending on the level of risk posed by their applications, rather than the type of technology they are built on.

But after the rapid emergence of AI chatbots last autumn, members of the European Parliament pushed for extra rules for companies that produce foundation models, regardless of how those models are ultimately used.


This spooked many startups and industry groups, which warned that the act could hamper innovation in Europe, and prompted France and Germany to block the talks last month to protect their startups, including AI champions Mistral and Aleph Alpha.

Zosia Wanat

Zosia Wanat is a senior reporter at Sifted. She covers the CEE region and policy.

Cristina Gallardo

Cristina Gallardo is a senior reporter at Sifted based in Madrid and Barcelona. She covers Europe's tech sovereignty, deeptech and Iberia.

Tim Smith

Tim Smith is a senior reporter at Sifted. He covers deeptech and all things taboo, and produces Startup Europe — The Sifted Podcast.