This is how GPT-4 will be regulated

2023-03-23

AI firms should welcome regulation – that, at least, was the message from Mira Murati. “We need a ton more input in this system, and a lot more input that goes beyond the technologies,” OpenAI’s chief technology officer told TIME, “definitely [from] regulators and governments and everyone else.” Murati didn’t specify precisely what form such oversight should take, knowing that the release of GPT-4, its newest and most powerful model, was just a month away. Instead the interview took a sharp left into the executive’s cultural preferences, beginning with Murati’s love for the Radiohead ditty ‘Paranoid Android’ – not the most uplifting song, to be sure, “but beautiful and thought provoking.”

For most companies, regulation is a necessary evil. For OpenAI to openly call for watchdogs to rain down upon it like so many investigating angels, though, has the benefit of imbuing its work with a frisson of mysticism – the implication being that its developers are dabbling with something brilliant but unpredictable, and certainly beyond the conventional understanding of the public. They might be right. GPT-4 is more complicated, agile and adept than its predecessor, GPT-3, and seems poised to disrupt everything from art and advertising to education and legal services. With that, of course, comes the danger of the model and others like it being used for more nefarious ends – writing code-perfect malware, for example, or assisting in the spread of vile and dangerous misinformation.

Yet regulators around the world have remained largely silent at the prospect of generative AI forever changing the shape of industry as we know it. Meanwhile, a chorus has arisen among campaigners calling for comprehensive legislative frameworks that, ideally, put foundation models in a kind of regulatory box, tightly secured, where they can be monitored and their creators punished for any malefactions.

Brussels is listening to this particular song, says Philipp Hacker, professor of law and ethics at the European New School of Digital Studies, with a proposal by two MEPs to categorise foundation models like GPT-4 as ‘high-risk’ under the EU’s draft AI Act rapidly gaining traction. For Hacker, though, the focus on regulating the models themselves is misplaced. EU parliamentarians also seem to have been unduly unsettled by the appearance of generative AI in the final stages of the law’s passage. As such, he argues, “we are starting to see, now, this kind of race to regulate something that the legislators weren’t really ready for.”

Regulators in the US, UK and EU seem to have been caught off guard by the recent release of GPT-4, amid a general wave of public curiosity in generative AI. (Photo by Tada Images/Shutterstock)

GPT-4 versus EU and US

In large part, says Hacker, the problem that the EU has with ChatGPT, and will doubtless have with GPT-4, is definitional. Currently, the AI Act has a provision for what it calls ‘general-purpose AI systems’ (GPAIS), meaning models intended by the provider to perform, you guessed it, ‘generally applicable functions’ like pattern detection, question answering and speech recognition.

Such models are also deemed ‘high-risk’, requiring their creators to comply with rigorous reporting requirements if they want to continue operating them in the EU. In February, two MEPs proposed that foundation models fall under this definition, which would require the likes of OpenAI, Google, Anthropic and others to report any instances where their systems are being misused and take appropriate action to stop that from happening.

This is absurd on two levels, argues Hacker. On the one hand, while a foundation model’s release does carry a host of theoretical risks, categorising a system like GPT-4 as ‘high-risk’ renders even relatively benign applications – say, generating a message for a child’s birthday card – unusually dicey from a regulatory standpoint. On the other, such models are adapted by a veritable army of individual developers and companies, making it extremely difficult and expensive for any one creator to monitor when or how a single LLM is being misused. Categorising GPAIS as inherently high-risk also imposes onerous requirements on developers of even basic models.

“If I write a very simple linear classifier for image recognition, that isn’t even very good at distinguishing humans from rats, that now counts – as per that definition – as, potentially, a general purpose AI system,” says Hacker.
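To make the breadth of that definition concrete, here is a minimal sketch (not from the article) of the kind of “very simple linear classifier” Hacker describes, assuming Python with NumPy and scikit-learn. The toy data, labels and every name in it are illustrative only; the point is that a few lines of commodity tooling already perform ‘pattern detection’, one of the generally applicable functions named in the draft GPAIS definition.

```python
# A toy image classifier of the sort Hacker invokes: logistic regression
# over raw pixels. The data and labels are random stand-ins, so accuracy is
# meaningless -- which is the point: even this trivially simple model
# performs 'pattern detection' in the sense of the draft GPAIS definition.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
X = rng.random((100, 64))          # 100 fake 8x8 grayscale images, flattened
y = rng.integers(0, 2, size=100)   # binary labels, e.g. 'human' vs 'rat'

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X[:5]))          # crude pattern detection, nothing more
```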


In the wake of consternation and not a little confusion from big tech firms and AI think tanks, new language has been proposed that widens the circle of those organisations responsible for reporting foundation model misuse to include corporate users that substantially modify the original system. Hacker welcomes the changes, but still disagrees with the EU’s broad approach to AI governance. Rather than fixating on regulating the models so closely, Hacker recommends overarching legislation promulgating more general principles for AI governance, a law that can serve as inspiration for new technology-neutral rules applied sector by sector. That might also be complemented by technical ‘safe harbours,’ where firms can freely experiment with new LLMs without fear of instant regulatory reprisal.


There are also existing statutes that could be amended or implemented in different ways to better accommodate generative AI, argues Hacker. “Certainly I think we have to amend the DSA [Digital Services Act],” he says. “Let’s also have a look at the GDPR and good, old-fashioned non-discrimination law. That’s going to do part of the job and cover some of the most important aspects.”

That already seems to be happening in the US, albeit by default. Lacking an overarching federal legal framework for AI governance, most of the official responses to generative AI have been left to individual agencies. The Federal Trade Commission (FTC) has been particularly vocal about companies falsely advertising their own capabilities in this area, with one imaginative pronouncement from an attorney working in its advertising practices division seemingly comparing generative AI to the golem of Jewish folklore. 

But while select federal agencies are thinking and talking about how best to accommodate GPT-4 and the cornucopia of generative AI services it’ll doubtless spawn, says Andrew Burt of specialist law firm BNH.AI, the likelihood of overarching legislative reform on the European model is low. “I would say the number one, most practical outcome – although I’m certainly not holding my breath for it – is some form of bipartisan privacy regulation at a national level,” says Burt, which he anticipates would contain some provisions on algorithmic decision making. Nothing else is likely to pass in this era of cohabitation between the Biden administration and the Republican-held House of Representatives. 

That’s due, in part, to the fact that the subject seemingly goes over the heads of many congresspersons, notwithstanding Speaker McCarthy’s promise to provide courses for House Intelligence Committee members on AI and lobbying efforts from the US Chamber of Commerce for some kind of regulatory framework. Voices within the Capitol supporting such measures are few, but vocal. One such is Rep. Ted Lieu (D-CA-36), who introduced a non-binding resolution in January, written by ChatGPT, calling on Congress to support the passage of a comprehensive framework ensuring AI remains safe, ethical and privacy-friendly. “We can harness and regulate AI to create a more utopian society,” wrote Lieu in a New York Times op-ed that same month, “or risk having an unchecked, unregulated AI push us toward a more dystopian future.”

‘Man controlling trade’ – a statue outside the US Federal Trade Commission. A recent blog post from the FTC warning about the dangers of generative AI compared ChatGPT and its ilk to the folkloric golem. (Photo by Rosemarie Mosteller/Shutterstock)

Capacity problems in regulating AI

‘Unregulated’ might be a stretch – anti-discrimination and transparency laws do exist at the state level, complemented by a growing number of privately run AI watchdogs – but congressional inaction on AI governance has left Alex Engler continually frustrated in recent years. A recent trip to London, by contrast, left the AI expert and Brookings Institution Governance Studies fellow comparatively buoyed.

“I walked away with the impression that the UK has a pretty clear sense of what it wants to do, and is working somewhat meaningfully towards those goals with a series of relatively well-integrated policy documents,” says Engler, referring to consultations currently happening at the new Department for Science, Innovation and Technology about fine-tuning the current AI regulatory framework. But that comes with a catch – namely, that “they just don’t actually want to regulate very much”.

Boiled down, the UK’s approach is similar to that advocated by Hacker: establishing governing best practices for the use of AI and then leaving it up to sectoral regulators to apply them as they see fit. That applies as much to self-driving cars as it does to the potential applications and harms that might arise from the widespread adoption of GPT-4 – though, says Engler, “I’m not sure generative AI really came up that much” during his trip.

That might be because Number 10 is waiting to hear back from an ARIA-led task force investigating the challenges associated with foundation models. It could also be that individual regulators don’t yet have the capacity to make informed assessments about how generative AI is impacting their sector, warns Henry Ajder, an expert in synthetic media. “Given the speed at which we are seeing developments in the space, it is impossible for well-resourced teams to be fully up to scratch with what is happening, let alone underfunded watchdogs,” he says. This was seemingly confirmed during an investigation by The Alan Turing Institute last July, which found that ‘there are significant readiness gaps in both the Regulation of AI and AI for Regulation’.

That realisation is also being confronted in Brussels. “I think many are now starting to realise that, actually, you have to build these dual teams, you have to actually start hiring computer scientists,” says Hacker of EU watchdogs. The same is true in the US to a certain extent, says Engler, though we would know more about the capacity of individual federal agencies if the Biden administration bothered to enforce a 2019 executive order mandating all departments in the federal government produce a plan explaining how they would contend with AI-related challenges.

But for his part, the Brookings fellow isn’t convinced yet that regulators’ work will be horribly complicated by the arrival of generative AI. While serious harms have been identified, he says, proposals for dealing with them can be adapted from older conversations about algorithmic discrimination, cybersecurity best practices and platform governance – an issue especially pertinent in the UK, where malicious deepfakes are set to be criminalised in the latest iteration of the Online Safety Bill.

Consequently, when Engler doesn’t hear anything from policymakers on how they intend to regulate generative AI specifically, “I typically think that’s a sign of responsible policy making.” It’s okay, in short, to hail GPT-4 as a second coming for AI while policymakers assess its implications over a much longer time span. “Generative AI is the new and shiny thing for a lot of people, and it’s sort of scary and interesting,” says Engler. “But it’s not obvious to me that we know what to do from a regulatory standpoint.”

Read more: OpenAI’s ChatGPT is giving the rest of the world AI FOMO

Topics in this article: GPT-4, OpenAI
