AI firms should welcome regulation – that, at least, was the message from Mira Murati. “We need a ton more input in this system, and a lot more input that goes beyond the technologies,” OpenAI’s chief technology officer told TIME, “definitely [from] regulators and governments and everyone else.” Murati didn’t specify precisely what form such oversight should take, even though the release of GPT-4, OpenAI’s newest and most powerful model, was just a month away. Instead, the interview took a sharp left turn into the executive’s cultural preferences, beginning with Murati’s love for the Radiohead ditty ‘Paranoid Android’ – not the most uplifting song, to be sure, “but beautiful and thought-provoking.”
For most companies, regulation is a necessary evil. For OpenAI to openly call for watchdogs to rain down upon it like so many investigating angels, though, has the benefit of imbuing its work with a frisson of mysticism – the implication being that its developers are dabbling with something brilliant but unpredictable, and certainly beyond the conventional understanding of the public. They might be right. GPT-4 is more capable, agile and adept than its predecessor, GPT-3, and seems poised to disrupt everything from art and advertising to education and legal services. With that, of course, comes the danger of the model and others like it being used for more nefarious ends – writing code-perfect malware, for example, or assisting in the spread of vile and dangerous misinformation.
Yet regulators around the world have remained largely silent at the prospect of generative AI forever changing the shape of industry as we know it. Meanwhile, a chorus has arisen among campaigners calling for comprehensive legislative frameworks that, ideally, would put foundation models in a kind of regulatory box, tightly secured, where they can be monitored and their creators punished for any malefactions.
Brussels is listening to this particular song, says Philipp Hacker, professor of law and ethics at the European New School of Digital Studies, with a proposal by two MEPs to categorise foundation models like GPT-4 as ‘high-risk’ under the EU’s draft AI Act rapidly gaining traction. For Hacker, though, the focus on regulating the models themselves is misplaced. EU parliamentarians also seem to be unduly unsettled by the appearance of generative AI in the final stages of the law’s passage. As such, he argues, “we are starting to see, now, this kind of race to regulate something that the legislators weren’t really ready for.”
GPT-4 versus the EU and US
In large part, says Hacker, the problem that the EU has with ChatGPT, and will doubtless have with GPT-4, is definitional. Currently, the AI Act has a provision for what it calls ‘general-purpose AI systems’ (GPAIS), meaning models intended by the provider to perform, you guessed it, ‘generally applicable functions’ like pattern detection, question answering, and speech recognition.
Such models are also deemed ‘high-risk’, requiring their creators to comply with rigorous reporting requirements if they want to continue operating them in the EU. In February, two MEPs proposed that foundation models fall under this definition, which would require the likes of OpenAI, Google, Anthropic and others to report any instances where their systems are being misused and to take appropriate action to stop that from happening.
This is absurd on two levels, argues Hacker. On the one hand, while there are any number of theoretical risks that accompany the release of a foundation model, categorising a system like GPT-4 as ‘high-risk’ treats even relatively benign applications – say, generating a message for a child’s birthday card – as unusually dicey from a regulatory standpoint. On the other, such models are adapted by a veritable army of individual developers and companies, making it extremely difficult and expensive for any one creator to monitor when or how a single LLM is being misused. Categorising GPAIS as inherently high-risk also imposes onerous requirements on developers of even basic models.
“If I write a very simple linear classifier for image recognition, that isn’t even very good at distinguishing humans from rats, that now counts – as per that definition – as, potentially, a general purpose AI system,” says Hacker.
In the wake of consternation and not a little confusion from big tech firms and AI think tanks, new language has been proposed that widens the circle of those organisations responsible for reporting foundation model misuse to include corporate users that substantially modify the original system. Hacker welcomes the changes, but still disagrees with the EU’s broad approach to AI governance. Rather than fixating on regulating the models so closely, Hacker recommends overarching legislation promulgating more general principles for AI governance, a law that can serve as inspiration for new technology-neutral rules applied sector by sector. That might also be complemented by technical ‘safe harbours,’ where firms can freely experiment with new LLMs without fear of instant regulatory reprisal.
There are also existing statutes that could be amended or implemented in different ways to better accommodate generative AI, argues Hacker. “Certainly I think we have to amend the DSA [Digital Services Act],” he says. “Let’s also have a look at the GDPR and good, old-fashioned non-discrimination law. That’s going to do part of the job and cover some of the most important aspects.”
That already seems to be happening in the US, albeit by default. In the absence of an overarching federal legal framework for AI governance, most official responses to generative AI have been left to individual agencies. The Federal Trade Commission (FTC) has been particularly vocal about companies falsely advertising their own capabilities in this area, with one imaginative pronouncement from an attorney working in its advertising practices division seemingly comparing generative AI to the golem of Jewish folklore.
But while select federal agencies are thinking and talking about how best to accommodate GPT-4 and the cornucopia of generative AI services it’ll doubtless spawn, says Andrew Burt of specialist law firm BNH.AI, the likelihood of overarching legislative reform on the European model is low. “I would say the number one, most practical outcome – although I’m certainly not holding my breath for it – is some form of bipartisan privacy regulation at a national level,” says Burt, who anticipates that such a law would contain some provisions on algorithmic decision-making. Nothing else is likely to pass in this era of cohabitation between the Biden administration and the Republican-held House of Representatives.
That’s partly because the subject seemingly goes over the heads of many members of Congress, notwithstanding Speaker McCarthy’s promise to provide courses for House Intelligence Committee members on AI and lobbying efforts from the US Chamber of Commerce for some kind of regulatory framework. Voices within the Capitol supporting such measures are few, but vocal. One such voice is Rep. Ted Lieu (D-CA-36), who in January introduced a non-binding resolution, written by ChatGPT, calling on Congress to support the passage of a comprehensive framework ensuring AI remains safe, ethical and privacy-friendly. “We can harness and regulate AI to create a more utopian society,” wrote Lieu in a New York Times op-ed that same month, “or risk having an unchecked, unregulated AI push us toward a more dystopian future.”
Capacity problems in regulating AI
‘Unregulated’ might be a stretch – anti-discrimination and transparency laws do exist at the state level, complemented by a growing number of privately run AI watchdogs – but congressional inaction on AI governance has left Alex Engler continually frustrated in recent years. A recent trip to London, by contrast, left the AI expert and Brookings Institution Governance Studies fellow comparatively buoyed.
“I walked away with the impression that the UK has a pretty clear sense of what it wants to do, and is working somewhat meaningfully towards those goals with a series of relatively well-integrated policy documents,” says Engler, referring to consultations currently happening at the new Department for Science, Innovation and Technology about fine-tuning the current AI regulatory framework. But that comes with a catch – namely, that “they just don’t actually want to regulate very much”.
Boiled down, the UK’s approach is similar to that advocated by Hacker: establishing best practices governing the use of AI and then leaving it up to sectoral regulators to apply them as they see fit. That applies as much to self-driving cars as it does to the potential applications and harms that might arise from the widespread adoption of GPT-4 – though, says Engler, “I’m not sure generative AI really came up that much” during his trip.
That might be because Number 10 is waiting to hear back from an ARIA-led task force investigating the challenges associated with foundation models. It could also be that individual regulators don’t yet have the capacity to make informed assessments about how generative AI is impacting their sector, warns Henry Ajder, an expert in synthetic media. “Given the speed at which we are seeing developments in the space, it is impossible for well-resourced teams to be fully up to scratch with what is happening, let alone underfunded watchdogs,” he says. This was seemingly confirmed during an investigation by The Alan Turing Institute last July, which found that ‘there are significant readiness gaps in both the Regulation of AI and AI for Regulation’.
That realisation is also being confronted in Brussels. “I think many are now starting to realise that, actually, you have to build these dual teams, you have to actually start hiring computer scientists,” says Hacker of EU watchdogs. The same is true in the US to a certain extent, says Engler, though we would know more about the capacity of individual federal agencies if the Biden administration bothered to enforce a 2019 executive order mandating that all departments in the federal government produce a plan explaining how they would contend with AI-related challenges.
But for his part, the Brookings fellow isn’t yet convinced that regulators’ work will be horribly complicated by the arrival of generative AI. While serious harms have been identified, he says, proposals for dealing with them can be adapted from older conversations about algorithmic discrimination, cybersecurity best practices and platform governance – an issue especially pertinent in the UK, where malicious deepfakes are set to be criminalised in the latest iteration of the Online Safety Bill.
Consequently, when Engler doesn’t hear anything from policymakers on how they intend to regulate generative AI specifically, “I typically think that’s a sign of responsible policymaking.” It’s okay, in short, to laud GPT-4 as a second coming for AI while policymakers assess its implications over a much longer time span. “Generative AI is the new and shiny thing for a lot of people, and it’s sort of scary and interesting,” says Engler. “But it’s not obvious to me that we know what to do from a regulatory standpoint.”