
US government proposes guidelines for responsible AI use by military

2023-02-18

The US government has proposed a set of guidelines for the way artificial intelligence (AI) and automated systems should be used by the military, and says it hopes its allies will sign up to the proposals. The news comes as attempts to regulate the use of AI in Europe appear to have hit a stumbling block, with MEPs reportedly having failed to reach an agreement on the text of the bloc's upcoming AI Act.

AI is becoming an increasingly important weapon for governments. (Photo by Gorodenkoff/Shutterstock)

Unveiled at the Summit on Responsible AI in the Military Domain (REAIM 2023), taking place in The Hague, Netherlands, this week, the snappily titled Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy is the US’s attempt to “develop strong international norms of responsible behaviour” around the deployment of AI on the battlefield and across the defence industry more generally.

How the military can use AI responsibly

Artificial intelligence is an increasingly important part of defence strategies. In the US, the Department of Defense invested $874m in AI technology as part of its 2022 budget, while last year the UK Ministry of Defence unveiled its defence AI strategy, a three-pronged approach which will see it working closely with the private sector. The technology can be deployed as part of semi-autonomous weapons systems such as drones, and also to help military planning and logistics operations.

This increased focus, and the growing power of AI systems, means governments have a responsibility to ensure these systems are used ethically, within the boundaries of international law. In 2020, the US government convened the AI Partnership for Defense, an initiative involving 100 officials from 13 countries, looking at the responsible use of automated systems.

Today’s declaration, presented at the summit by Bonnie Jenkins, US undersecretary of state for arms control and international security, is being billed as a further attempt to gain commitments from governments about how AI technology will be used in the military.

Today, I announced the U.S. framework for a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The Declaration is a first step towards building international consensus on responsible State behavior in this area. https://t.co/hTgrJsZRND

— U/S of State for Arms Control & Int’l Security (@UnderSecT) February 16, 2023

“The aim of the declaration is to build international consensus around how militaries can responsibly incorporate AI and autonomy into their operations, and to help guide states’ development, deployment, and use of this technology for defence purposes to ensure it promotes respect for international law, security, and stability,” a spokesperson for the US Department of State said.

It consists of a series of non-legally binding guidelines describing “best practices for responsible use of AI in a defence context,” the spokesperson added. These include ensuring that military AI systems are auditable, have explicit and well-defined uses, are subject to rigorous testing and evaluation across their lifecycle, and that high-consequence applications undergo senior-level review and are capable of being deactivated if they demonstrate unintended behaviour.


“We believe that this Declaration can serve as a foundation for the international community on the principles and practices that are necessary to ensure the responsible military uses of AI and autonomy,” the spokesperson added.


EU AI Act hits the buffers?

Meanwhile in Brussels, development of the EU’s landmark AI act, which will regulate automated systems across the continent, appears to have hit a snag.

It had been hoped that the basic principles of the act, the text of which is expected to go before the European Parliament before the end of March, would be agreed at a meeting today. But after five hours of talks, no agreement had been reached, according to a report from Reuters, which cites four people familiar with the discussions.

The legislation is expected to take a “risk-based” approach to AI regulation, meaning systems which pose a high threat to the safety and privacy of citizens will face stringent controls, while more benign AI systems will be allowed to operate with few restrictions. There has been much speculation that generative AI chatbots like ChatGPT will be classed as high risk, meaning their use could be banned in Europe because of their ability to generate hate speech, fake news, and other dangerous material such as malware. EU commissioner Thierry Breton said last week the rules would include provisions for generative AI following the success of ChatGPT.

An EU source told Reuters that discussions are ongoing over the bill. “The file is long and complex, MEPs are working hard to reach an agreement on their mandate for negotiations,” they said. “However there is no deadline or calendar on the next steps.”

Once the text of the bill has been established, it must clear the European Parliament before going to EU member states, which can propose amendments to the legislation before it is made law.

Read more: Ministry of Defence ‘not up to the task’ of delivering digital strategy
