The EU wants to make it easier to sue over harms caused by AI

2022-10-06

People and companies that are harmed in any way by a drone, robot or software driven by artificial intelligence will be able to sue for compensation under new EU rules. While this offers extra protection to citizens, questions remain over where the buck stops in the supply chain.

Broken drones and faulty software that lead to personal harm fall under the new AI Liability Directive. (Photo by Chetty Thomas/Shutterstock)

The AI Liability Directive brings together a patchwork of national rules from across all 27 member countries and is designed to “ensure that victims benefit from the same standards of protection when harmed by AI products or services, as they would if harm was caused under any other circumstances”.

Victims will be able to sue the developers, providers and users of AI technology for compensation if they suffer harm to their life, property, health or privacy due to a fault or omission caused by AI. They will also be able to sue if they are discriminated against during a recruitment process that used AI. However, it is unclear where overall responsibility will lie under the current draft of the directive.

Guillaume Couneson, partner at law firm Linklaters, told Tech Monitor the directive “does not indicate against whom the victim of damages caused by an AI system should file its claim. It envisages that the defendant could be the provider or the user of the AI system.

“So let’s say a recruitment company uses an AI made by a third party to filter CVs and it automatically dismisses people from minority backgrounds. Would the developer or the recruitment company be at fault? The principle remains that the party which committed the fault is liable.”

The directive does not institute a no-fault liability regime, though Couneson suggests this may come in the future. Rather, it aims to help the victim of an AI-caused harm to provide evidence in court. “It does so via two mechanisms, namely an obligation for the defendant to disclose evidence in certain circumstances and the presumption of a causal link between the fault of the defendant and the (failed) output of the AI system.”

EU artificial intelligence laws: the presumption of causality

This is done via the introduction of a “presumption of causality”, under which victims only have to show that a failure to comply with certain requirements led to the harm, and then link that failure to the AI system.

The directive covers tangible and intangible unsafe products, including standalone or embedded software and the digital services needed to make a product work.

BSA, the software industry association, broadly supports efforts to harmonise AI rules across Europe, but warns of a need for greater clarity over responsibility if AI goes wrong, particularly over whether it should fall on a developer, deployer or operator.

“EU policymakers should clarify the allocation of responsibility along the AI value chain to make sure that responsibilities for compliance and liability are assigned to the entities best placed to mitigate harms and risks,” said Matteo Quattrocchi, BSA policy director for Europe.

“The goals of AI governance should be to promote innovation and enhance public trust. Earning trust will depend on ensuring that existing legal protections continue to apply to the use of artificial intelligence.”

Bart Willemsen, analyst at Gartner, told Tech Monitor the new amendment “puts victims of negative impact through AI-based decision making in a stronger position than when things are left in a ‘computer says no’ type of world, something we all very much should want to prevent”.

He said it also ties in with the new European Union AI Act, which is incredibly broad in scope: it addresses anyone putting an AI system on the market, anyone using such a system within the EU, and any company outside the EU producing systems that will be used or deployed in the EU.

How tech leaders should approach the new AI rules

The impact of AI can range from minimal to damaging if it is managed incorrectly, so the EU has updated its legislation to make it easier to take action when AI is mismanaged. The UK has similar proposals under its new AI Framework, which takes a ‘risk-based’ approach to regulation.

Willemsen says there are high-profile cases where AI and algorithms have had a dangerous impact on certain groups, citing the effects of Instagram and TikTok on the mental health of young teenagers, and the impact of data harvesting by companies such as Cambridge Analytica on the political agenda.

“The liability clauses are therefore in line with the prohibitions with which the AI Regulation begins,” he said. “The point of the AI Liability Directive here is to empower victims of things like the above, and of similar negative effects from AI usage, and to simplify the legal process.”

He explained that the “presumption of causality” is particularly important because it means laypeople will not have to go into deep technical detail about an AI model to prove it was involved in the harm. Just as important, he said, is the ability for victims to demand information from companies.

Companies must define organisational roles and assignments for managing AI trust, risk and security, Willemsen warned, and this will include privacy protection as part of preparing for the introduction of the new AI liability rules.

It is also important, he said, to “document the intentions of each AI model, including its function in the ecosystem of deployment, desired bias controls, and optimal business outcomes”.
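
In practice, such documentation often resembles a “model card” kept for each deployed system. The sketch below is purely illustrative, written in Python with hypothetical field names that are not drawn from the directive or from Gartner’s guidance:

from dataclasses import dataclass

@dataclass
class ModelIntentRecord:
    # Hypothetical record documenting the intent of one deployed AI model.
    model_name: str                   # identifier of the model
    function: str                     # its role in the deployment ecosystem
    desired_bias_controls: list[str]  # fairness checks applied before release
    optimal_business_outcome: str     # the outcome the model is optimised for
    accountable_owner: str            # organisational role answerable for the model

# Example: documenting the hypothetical CV-screening model discussed above.
cv_filter = ModelIntentRecord(
    model_name="cv-screening-v2",
    function="Pre-filters CVs ahead of human review in recruitment",
    desired_bias_controls=["demographic parity audit", "human review of all rejections"],
    optimal_business_outcome="Faster shortlisting without excluding protected groups",
    accountable_owner="Head of Talent Acquisition",
)

Keeping such records current would also make it easier to respond to the evidence-disclosure obligations the directive introduces.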

Finally, he warned that companies should avoid deploying AI models on individuals where this can be prevented, for example by using digital twin technology, since it is often enough to address a “persona rather than a person”, and should always hold their activities and technologies to “sufficient moral and societal standards”.

Overall, the new liability rules are designed to modernise and reinforce the existing rules already in place for manufacturers, while expanding them to cover automation and artificial intelligence.

Commissioner for Internal Market Thierry Breton said in a statement: “The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

Read more: UK government sets out AI regulation plans

Topics in this article: AI, EU
