Analysis

November 6, 2023

What do startups want from AI regulation?

Startups need clarity from regulators as they try to build products safely and responsibly, but what exactly do they want?


Tim Smith

4 min read

The Mistral AI founding team

Many will see the UK’s AI safety summit last week as a diplomatic success, but for founders building AI products the event raised more questions than it answered. 

“This kind of summit is not the type of event where policy gets decided,” said Connor Leahy, cofounder of AI safety startup Conjecture. “It’s the type of event where people get to know each other, where information is exchanged, where you learn and build connections.”

And despite the announcement that the UK and US will launch their own AI safety institutes to further probe the risks that advanced AI systems might pose, founders now want to see serious efforts to provide regulatory clarity for the industry.

Clear rules are better than no rules

Speaking to Sifted ahead of the summit, Eric Topham — cofounder of London-based startup Octaipipe, which develops data security solutions for AI products used on physical devices — said that he would welcome clearer rules in the UK.

“The [forthcoming] Cyber Resilience Act in Europe effectively penalises for data breaches on devices, so the IT industry is going to have to respond to that,” he explains. “The UK is far less clear about what you have to do — it's much harder to know what the standards are.”

While Topham thinks that having defined rules on data security would make it easier to sell to customers, that doesn’t mean he’s fully in favour of the EU’s approach to AI legislation.

Like many founders, he believes that a sector-specific approach to regulation is better than the EU’s more horizontal approach, which applies the same rules to a whole host of industries.

Alex Kendall — cofounder of London-based autonomous driving company Wayve — recently told the Sifted podcast that it doesn’t make sense to have one set of rules for very different sectors.

“I think having broad-based AI regulation would be shooting ourselves in the foot. The risks are very different if you're a doctor, an accountant or a driver,” he said.

Immediate and long-term

These kinds of regulatory areas — where AI is being used, or is about to be used, in the real world — are what the industry refers to as “narrow” use cases, and many believe they deserve closer attention.

The UK summit has been criticised for focusing largely on “catastrophic” risks from super-powerful AI systems of the future, rather than more immediate issues.

But Marc Warner — CEO of enterprise AI systems builder Faculty — says that it’s about time we start paying attention to long-term risks, while continuing the conversation on nearer-term threats.

“To use an analogy with cars: I care about seatbelts, and I also care about catalytic converters, because I care about the short-term risks of crashing and I also care about the long-term risks of global warming,” he says. “If you ask me: ‘Do you want either seatbelts or catalytic converters?’ I just say: ‘I want both,’ and it’s the same with AI.”

How to regulate frontier risk?

The question of how to approach regulation of the most powerful AI systems — those that are trained on vast amounts of data and computing power — is a contentious one in the AI sector.

Some, like Meta’s Yann LeCun, say we don’t need to worry about models like GPT-4 running out of control, arguing that they aren’t truly intelligent and are fundamentally just sophisticated autocomplete systems.

Conjecture’s Leahy, meanwhile, has argued that we should set a maximum amount of compute power for training new models, to avoid risks that could come along with creating systems that surpass human capabilities.

“If you build systems that are more capable than humans at everything — science, business, politics etc — and you don't control them, then the machines will control the future, not humans,” he tells Sifted.

Paris-based generative AI startup Mistral — which is competing with the likes of OpenAI to build large language models — is arguing for big tech companies to be subject to compulsory independent oversight of their models, meaning that public research institutions would be able to study them.

Currently, companies like OpenAI, Google DeepMind, Anthropic and Inflection have entered into voluntary agreements to allow their models to be tested by external parties before release, but Mistral cofounder Arthur Mensch says that legislation is needed to ensure that big tech players “aren’t regulating themselves.”

Mustafa Suleyman, cofounder of DeepMind and now CEO at Inflection, told Sifted that voluntary agreements are just a “first step” but that regulation would have to bear in mind the rights that companies have to intellectual property.

So, after a big week for the AI safety debate, founders still have a lot of questions about how lawmakers will approach the tools they are building, and the right rules to govern them. But for Faculty’s Warner, the fact that the conversation is happening at all is still a positive.

“Imagine, if we'd had conversations at a prime ministerial level 40 years ago about global warming, how transformationally different our planet could be right now?”

Tim Smith

Tim Smith is news editor at Sifted. He covers deeptech and AI, and produces Startup Europe — The Sifted Podcast. Follow him on X and LinkedIn.