UK’s ‘collaborative’ approach to AI regulation may prove complex and burdensome

2022-07-21

The UK government today outlined its proposed approach to regulating artificial intelligence (AI). Unlike the EU, which is developing a new AI law, the UK would ask existing regulators to apply the principles of AI governance to their respective areas of focus.

A ‘collaborative’ system of AI regulation is the preferred approach among UK regulators, according to a recent study by the Alan Turing Institute, but could be complex to deliver and may overburden their resources, experts told Tech Monitor.

The national AI strategy will take a decentralised approach, putting control in the hands of existing regulators. (Photo courtesy of DCMS)

In a policy paper published today, the Department for Digital, Culture, Media and Sport outlined an approach to AI regulation that it describes as ‘context-specific’, ‘pro-innovation and risk-based’, ‘coherent’ and ‘proportionate and adaptable’.

Under the proposals, existing agencies such as Ofcom and the Competition and Markets Authority would be responsible for ensuring that any AI used by industry, academia or the public sector within their areas of interest is technically secure, functions as designed, is “explainable”, and considers fairness.

Rather than each individual use of AI being regulated and controlled, regulators would have to follow core principles around AI, the policy paper says. They would apply these principles to their respective sectors and build on them with specific guidelines and regulations. Some sectors, such as healthcare and finance, will have stricter rules, while others will take a more relaxed, voluntary approach.

These cross-sector principles include regulating AI based on its use and the impact it has on individuals, groups and businesses. Regulation must also be pro-innovation and risk-based, focusing on issues where there is clear evidence of genuine risk or missed opportunity. And regulation should be tailored to the distinct characteristics of AI, ensuring the overall rules are easy to understand and follow.

AI regulation in the UK: a collaborative approach

The government’s proposed approach stands in contrast to that of the EU, whose AI Act seeks to establish a new law governing the use of AI across the bloc. “The EU is adopting a risk-based approach,” says Adam Leon Smith, CTO of AI agency Dragonfly and the UK representative in the EU’s AI standards group. “It is specifically prohibiting certain types of AI, and requiring high-risk use cases to be subject to independent conformity assessment. 

“The UK is also following a context-specific and risk-based approach, but is not trying to define that approach in primary legislation, instead, it is leaving that to individual regulators.”

A more collaborative approach, in which regulators work together to define principles but apply them separately in their areas of focus, is the preferred approach among regulators, according to a recent study by AI think tank the Alan Turing Institute.

Regulators consulted in the study rejected the prospect of a single AI regulator, said Dr Cosmina Dorobantu, co-director of the public policy programme at the Alan Turing Institute. “Everybody shot that down because it would affect the independence of the regulators,” she explained.

The prospect of a purely voluntary system of AI regulation was also rejected. “AI is a very broad technology,” said Professor Helen Margetts, programme director for public policy at the institute. “Regulation has to be a collaborative effort.”

Nevertheless, the government’s proposed approach is likely to be a complex undertaking, given the number of regulatory agencies in the UK. “One of the more surprising things we learned during the study is that there is no list of regulators,” said Dr Dorobantu. “Nobody keeps a central database. There are over 100, ranging from some with thousands of employees to others with just one person.”

Under the proposed approach, all of these regulators will need to develop AI expertise, the pair explained, and how they should coordinate their activities where regulations overlap will need to be clarified.

The government’s proposed approach could also prove burdensome for the regulators, argues Leon Smith. “It is unclear if the ICO and Ofcom will be able to handle the increased workload,” he says. “This workload is particularly important given the frequency of change that AI systems undergo, but also the expected impact of the Online Safety Bill on Ofcom.”

The UK’s proposed approach includes a provision that would require all high-risk AI applications to be “explainable”, particularly with respect to bias and potential inaccuracies. This goes further than the EU’s AI Act, Leon Smith observes.

“The policy paper states that regulators may deem that high-risk decisions that cannot be explained should be prohibited entirely. The EU has not gone so far, merely indicating that information about the operation of the systems should be available to users.”

The government has invited interested parties to provide feedback on the policy paper, and said it will set out further details of the proposed regulatory framework in a forthcoming white paper.

Read more: MEPs are preparing to debate Europe’s AI Act. These are the most contentious issues.
