OpenAI challenged to enter ChatGPT into new AI regulatory sandbox

2023-03-15

Microsoft-backed artificial intelligence start-up OpenAI has been urged to join a new regulatory sandbox to test the limits of, and potential guardrails for, its general AI models and tools such as ChatGPT. The challenge came in an open letter from ethical AI campaign group ForHumanity, which declared that “a regulatory sandbox provides the ideal process to share governance widely and fairly.”

OpenAI has been challenged to use a sandbox as a means of complying with the EU AI Act. (Photo by Ascannio/Shutterstock)

The concept of regulatory sandboxes is not new: they have been used around the world to test ideas in data security, energy systems and, most recently, fintech. Often run by regulators, they provide a way to trial a new service or product with some rules relaxed or added, in order to test boundaries and viability.

One example in the realm of AI is the sandbox set up by the UK’s Information Commissioner’s Office (ICO) to explore and experiment with products and services related to data protection. The ICO’s sandbox is aimed at helping companies develop innovative data protection solutions while managing privacy risks.

In the open letter, ForHumanity founder and executive director Ryan Carrier emphasised the importance of OpenAI engaging with a regulatory sandbox, citing its potential to allow governance to be shared fairly and broadly. The group believes this would provide a necessary means of testing the limits and potential guardrails of OpenAI’s general AI models and tools such as ChatGPT, as well as ensuring they comply with the upcoming EU AI Act.

The EU AI Act is a proposed piece of legislation aimed at regulating the use of artificial intelligence across the European Union. It seeks to promote the ethical and trustworthy development and use of AI while safeguarding safety, privacy and fundamental rights. The European Commission and the European Parliament are each working on their own version of the final draft. It is not yet clear how the act will handle general AI, although tech companies are lobbying for it to regulate based on final use rather than the model itself.

Published on LinkedIn, the open letter follows a research note from ChatGPT maker OpenAI on the future of artificial intelligence and the “path to AGI”, or artificial general intelligence, seen as the point at which AI can think like a human across a wide range of cognitive tasks.

AI is a transformative technology

OpenAI founder Sam Altman argues that AGI has the potential to transform many aspects of society and that “careful planning and collaboration is needed to ensure its safe and beneficial development”, including engagement with a broad community of stakeholders such as policymakers and civil society organisations.

Engaging with a third party to create a regulatory sandbox would fulfil this ambition, says Carrier, including through the adoption of certification systems to demonstrate that a tool is compliant with regulation. This would “enable OpenAI to build compliance capabilities for the requirements of the law,” he said.

Tech Monitor has approached OpenAI for comment but had not received a response at the time of writing.

In his research note on the “path to AGI”, Altman declares the need for “careful planning and collaboration” if advanced artificial intelligence technology is to be widely used and accepted by both the public and enterprise. “To maximise the benefits and minimise the risks of AGI, it is essential to consider its implications for society at large,” he added.

“With the exception of prohibited technologies, ForHumanity supports the beneficial and ethical use of all technology, and our work endeavours to support and enable OpenAI, and others, to maximise risk mitigation, for all stakeholders, through Independent Audit of AI Systems (IAAIS),” wrote Carrier.

Careful consideration of risk

“Engaging in Independent Audit of ChatGPT is a robust solution for navigating massive risks with the very tools that are likely a portion of the foundation of AGI you referred to in Planning for AGI and Beyond,” the ForHumanity letter says. “In the regulatory sandbox, together, we can test and prove compliance with the EU AI Act.”

As part of its proposal for a sandbox, ForHumanity suggests three key tools. The first is a comprehensive risk management framework that fully integrates leading standards with a range of diverse inputs and multi-stakeholder feedback. This includes human risk assessors responsible for identifying risky inputs and indicators during the design, development and deployment phases of an algorithm’s lifecycle, providing a “robust beginning to governance”.

This would then lead to the second tool: the establishment of an OpenAI ethics committee, trained in algorithm ethics and operating under a public code of ethics. This, says Carrier, is critical: “OpenAI attracts talented and expert data scientists and model developers to build its systems, but do you have a team of experts governing the ethics that are embedded in ChatGPT and other models?”

The final tool is one developed by ForHumanity known as Systemic Societal Impact Analysis (SSIA), designed to foster self-awareness about the societal impact of products and developments. This is a requirement of the EU Digital Services Act, with which OpenAI’s tools must comply, in addition to the upcoming AI Act and GDPR.

“These tools are examples of the comprehensive audit criteria that ForHumanity has established to provide independent auditors the ability to assure and certify compliance for High-Risk AI under the EU AI Act,” Carrier explained. “Working in a regulatory sandbox to test, research and build assured compliance with laws and regulation established by a democratic society (the EU), deploying rules established by ‘other organisations’ that operate globally to advance safety with aligned incentives towards good outcomes seems to agree exceptionally well with your stated goals.”
