Government backs UK AI regulatory sandbox

2023-03-23

A new regulatory sandbox for artificial intelligence models, tools and systems is to be introduced in the UK after the government backed proposals in yesterday’s Budget. The multi-regulator sandbox will “allow innovators and entrepreneurs to experiment with new products or services under enhanced regulatory supervision without the risk of fines or liability”.

Former UK chief scientific advisor Sir Patrick Vallance recommended setting up an AI sandbox in a review of innovation-related policy (Photo by Adrian Dennis-WPA Pool/Getty Images)

This isn’t a new concept: sandboxes are widely used in other areas of the economy, including finance, energy systems and data security, as a way to test the boundaries of both regulation and technology. Several regulators already run some form of sandbox, including the Information Commissioner’s Office (ICO), whose sandbox covers data security and innovation.

As part of the 2023 Spring Budget announcement, Chancellor Jeremy Hunt confirmed the government would support recommendations on AI regulation made by Sir Patrick Vallance in his “pro-innovation regulation for digital technologies” review.

It is hoped a sandbox will be developed and made operational across different regulators within six months, mirroring the UK approach to regulating AI, which puts the emphasis on individual regulators and use cases rather than taking a broad approach.

“Innovators can often face regulatory challenges in getting new, cutting-edge products to market,” the government wrote in its response to the report by Vallance. “This is particularly true when a technology’s path to market requires interaction with multiple regulators, or when regulatory guidance is emergent. Regulatory sandboxes provide a space for innovators to test their new ideas and work with regulators on the application of regulatory frameworks.”

AI regulatory sandbox: a multi-agency approach

The engagement will be through the Digital Regulation Cooperation Forum. Vallance wrote that effective AI regulation “requires a new approach from government and regulators” that is agile, expert-led and able to provide clear guidance quickly to industry. The argument for it being multi-regulator is to reduce inconsistencies in regulatory responses and create a more coherent approach.

He set out three core principles to guide the development of the sandbox: a “time-limited opportunity” for companies to test propositions on real consumers; a focus on areas where the “underpinning science or technology” is at a stage where a major breakthrough is feasible; and a way to solve a “societal challenge” where the UK could be a world leader.

While the exact structure and design of the sandbox won’t be known until the white paper is published, Vallance set out his vision: targeted signposting, both national and international, with clear eligibility criteria and application deadlines, plus accountability and transparency, with consideration of ethics, privacy and consumer protection. “A sandbox could initially focus on areas where regulatory uncertainty exists, such as generative AI, medical devices based on AI, and could link closely with the ICO sandbox on personal data applications,” he wrote.

The ICO wrote, in response to the government’s support, that it expects to have a critical role in helping innovators develop safe and trustworthy products. “But in a fast-moving area like AI, there is always more that can be done, and we welcome the focus this report will bring. We’ll continue prioritising our work in this area – including guidance we’re working on, including on personal data processing relating to AI as a service – and look forward to discussing the recommendations within the report with our DRCF partners and Government.”


Ryan Carrier, founder and CEO of AI ethics campaign group ForHumanity, told Tech Monitor: “AI regulatory sandboxes are critical tools for bridging a trust gap between regulators and the providers of AI tools to the marketplace.” He added that most AI providers are multiple steps away from being compliant with even basic forms of regulation.

“These… will allow groups like ForHumanity, which have already built and submitted certification schemes to the UK government, to prove the methods, procedures and documentation that compliance requires without risk of noncompliance at the outset. It is a great way to build towards robust compliance.”

Anita Schjøll Abildgaard, CEO and co-founder of AI platform Iris.ai, welcomed the news and said it could lead to a frenzy of innovation. But she warned against leading with the technology and then finding a use case for it. “It should be the other way around – using AI, mindfully applied, to solve real-world, well-understood problems,” she said. “Instead of getting caught up in the generative AI craze that will dominate 2023, businesses and large tech corporations should consider the AI technologies that will drive real value, rather than driving headlines. Bigger does not always equal better.”

Clear policy on intellectual property

Vallance also called for a “clear policy position” on the relationship between intellectual property law and generative AI to ensure innovators and investors can have confidence in the technology. This includes enabling the mining of available data, text and images, and utilising copyright and IP law to protect IP output. “In parallel, technological solutions for ensuring attribution and recognition, such as watermarking, should be encouraged, and could be linked to the development of new international standards in due course,” he wrote.

Intellectual property lawyer Cerys Wyn Davies of Pinsent Masons said there was a fine line between making development of AI easier and recognising intellectual property rights. “It is clear from the government’s endorsement of Sir Patrick Vallance’s recommendation that it is seeking to deliver certainty around the relationship between intellectual property law and generative AI,” she said.

“This is key both to encourage the development of AI and to encourage the other creative industries. Certainty that pleases everyone, however, is going to be difficult to achieve as has been highlighted by the backlash against proposals by the UK Intellectual Property Office to expand the scope of the text and data mining exception that exists in copyright law to help AI developers train their systems.”  

Ekaterina Almasque, general partner at deep tech venture capital company OpenOcean, told Tech Monitor that being able to access a high volume of high-quality data without transgressing IP law or individual privacy is essential when training AI models. “If the new AI sandbox results in changes that make it easier for AI start-ups to train their models and bring solutions to the enterprises that need them, then that will have a positive effect on the UK start-up scene in the long run,” she said.

“However, what we require are clear commitments. Start-ups need to be able to deliver their products to the market with speed in their early stages, and while steps to clear up regulatory uncertainty are welcome, they’re not concrete yet.”

Bola Rotibi, chief of enterprise research at CCS Insight, described the sandbox as a “welcome and, in and of itself, smart move” given how fast AI is advancing. She added: “The EU’s AI Act sees their use as enabling a more agile approach to innovation and regulation in the fast-moving tech sector. That the UK has made explicit provisions for supporting AI regulatory sandboxes in the Budget is recognition of the internationally competitive battleground that AI presents. But it is also acknowledgement of the constraints to innovation that a highly regulated UK market presents.”

This allows the UK to play on its regulatory maturity, offering “opportunities that could see the sandbox delivering more appropriately innovative and supportive regulations for AI systems and applications quicker. That said, the UK is not the first off the AI regulatory sandbox starting block, with Spain and the European Commission having been the first to put their pilot AI regulatory sandbox in the field in June 2022, reporting on its findings in the second half of 2023. An ironic outcome for those who believe the EU poses a drag on a nation’s ability to progress innovatively and take advantage of opportunities.”

Read more: This is how GPT-4 will be regulated

Topics in this article: AI, ICO, Regulation
