
UK ICO offers advice on generative AI as more European countries mull ChatGPT bans

2023-04-07

UK data watchdog the Information Commissioner’s Office (ICO) has warned businesses deploying and developing generative AI systems like ChatGPT to ensure that protecting customer information is central to their plans. The advice comes as more European countries consider whether to ban ChatGPT while its publisher OpenAI answers questions about how it collects and processes data.

The UK ICO has offered advice to businesses using generative AI systems like ChatGPT. (Photo by Giulio Benzin/Shutterstock)

In a blog post published on Monday, Stephen Almond, the ICO’s director of technology and innovation, set out a list of eight questions businesses should ask themselves before incorporating AI into workflows that involve customer data.

“It is important to take a step back and reflect on how personal data is being used by a technology that has made its own CEO ‘a bit scared’,” Almond wrote, referring to comments from OpenAI CEO Sam Altman about his own company’s systems.

He continued: “It doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers through poor use of generative AI. But while the technology is novel, the principles of data protection law remain the same – and there is a clear roadmap for organisations to innovate in a way that respects people’s privacy.”

Generative AI has enjoyed a boom in popularity since the launch of ChatGPT, OpenAI’s powerful natural-language chatbot which now runs on its recently released GPT-4 large language model (LLM). Microsoft has been incorporating the technology, which can answer questions with detailed and normally accurate prose, into its Office 365 suite, while other companies such as Google and Salesforce have been queuing up to launch their own AI-powered productivity tools based on LLMs.

How should businesses approach generative AI to safeguard data?

However, a backlash has already started against ChatGPT. On Friday Tech Monitor reported that Italy had blocked the chatbot from being used until OpenAI can guarantee that the way data on Italian citizens is collected and stored is compatible with the EU’s GDPR.

Italy’s data protection authority, Garante Privacy (GPDP), said OpenAI provides a “lack of information to users and all interested parties” over what data is collected, and lacks a legal basis to justify the collection and storage of the personal data used to train the algorithm and models that power ChatGPT.

Almond said that “organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach”.


The “data protection by design and default” approach is part of the UK GDPR, and requires businesses to “integrate or ‘bake in’ data protection into your processing activities and business practices, from the design stage right through the lifecycle”.


Almond added: “This isn’t optional – if you’re processing personal data, it’s the law. Data protection law still applies when the personal information that you’re processing comes from publicly accessible sources.”
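By way of illustration only, the sketch below shows one way a business might “bake in” data protection before customer text reaches a third-party generative AI service: detectable personal data is swapped for placeholders locally, and the mapping never leaves the organisation. The regex patterns and the redact_pii helper are assumptions made for this example; they are not part of the ICO guidance or any vendor’s API.

```python
# Illustrative sketch only: one way to keep personal data out of prompts sent
# to an external generative AI API. The patterns and helper names here are
# assumptions for illustration, not an ICO-mandated mechanism.
import re

# Simple patterns for two common kinds of personal data. A real deployment
# would need far more robust detection (names, addresses, account IDs, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def redact_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected personal data with placeholders, returning the
    redacted text plus a local mapping so the raw data never leaves the system."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping


if __name__ == "__main__":
    customer_message = "Hi, I'm reachable at jane.doe@example.com or +44 7700 900123."
    redacted, local_map = redact_pii(customer_message)
    print(redacted)   # placeholders only -- this is what would go to the model
    print(local_map)  # kept locally, never included in the prompt
```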

The blog goes on to list eight points organisations must consider should they wish to use generative AI or build their own model, covering transparency, unnecessary data processing and the impact of using AI in automated decision making.

It also encourages tech leaders using generative AI to consider their role as a data controller. “If you are developing generative AI using personal data, you have obligations as the data controller. If you are using or adapting models developed by others, you may be a controller, joint controller or a processor,” Almond says.

European countries consider ChatGPT bans

The ICO advice is in line with the UK’s general approach to regulating ChatGPT and other AI systems. Last week the government published a white paper setting out a light-touch, pro-innovation approach to AI, and said it had no plans to launch a dedicated regulator. But other European countries are considering whether to follow Italy’s lead and ban the chatbot.

France and Ireland’s privacy regulators have contacted GPDP to find out more about the basis for Italy’s ban, Reuters reported on Monday. “We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” a spokesperson for Ireland’s Data Protection Commissioner said.

Meanwhile, Germany’s data commissioner, Ulrich Kelber, told the Handelsblatt newspaper that his country could instigate a ban similar to Italy’s.

Potential privacy violations by generative AI are “just a tip of the iceberg of rapidly unfolding legal troubles,” according to Dr Ilia Kolochenko, founder of pen testing platform ImmuniWeb and a member of the Europol Data Protection Experts Network.

“After the pompous launch of ChatGPT last year, companies of all sizes, online libraries and even individuals – whose online content could, or had been, used without permission for training of generative AI – started updating terms of use of their websites to expressly prohibit collecting or using their online content for AI training,” Kolochenko said.

“Even individual software developers are now incorporating similar provisions to their software licenses when distributing their open-sourced tools, restricting tech giants from stealthily using their source code for generative AI training, without paying the authors a dime.”

He added: “Contrasted to contemporary privacy legislation that currently has no clear answer whether and to what extent generative AI infringes privacy laws, website terms of service and software licenses fall under the well-established body of contract law, having an abundance of case law in most countries.

“In jurisdictions where liquidated damages in contract are permitted and enforceable, violations of website’s terms of use may trigger harsh financial consequences in addition to injunctions and other legal remedies for breach of contract, which may eventually paralyse AI vendors.”

Read more: ChatGPT is giving the rest of the world AI FOMO
