AI Ethics Goes Beyond Data Privacy & Bias

2022-10-16

Illustration: © IoT For All

Artificial Intelligence (AI) and ethics are a strange combination to imagine together. Yet here we are, with cohorts of academic researchers and AI labs across the globe dedicated to the ethics of emerging technologies, or, more specifically, to the real and negative impact that AI technology can have if left unchecked and built in the absence of ethics.

'It is necessary to demonstrate a commitment to AI ethics and AI alignment that goes beyond lip service, truly building helpful and non-harmful AI tools and systems.' – Claire Carroll

What is AI Alignment?

What is known as responsible AI is still a nascent space. Ten years ago there was little or no voice representing this idea of responsibility in technology design and development. After several crises with far-reaching implications, the area of “AI ethics” is now growing, though still not as quickly as the technology itself.

There are also teams of academics and professionals working on something called AI alignment. While in the same ballpark, AI ethics and AI alignment are not strictly the same thing. AI alignment describes the effort to ensure that machine learning and advanced analytics tools and solutions are not appropriated in ways that lead to unintended consequences.

A reality that’s already here. 
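
To make the distinction concrete, here is a deliberately simplified, hypothetical sketch; the items, numbers, and field names are all invented for illustration. A system that optimizes a proxy objective (clicks) rather than the outcome its designers actually care about (user satisfaction) is, in miniature, a misaligned system:

```swift
// Hypothetical toy example: a recommender optimizes a proxy metric (clicks)
// and never sees the outcome its designers actually care about (satisfaction).
struct Item {
    let title: String
    let expectedClicks: Double        // proxy objective the system optimizes
    let expectedSatisfaction: Double  // true objective, invisible to the system
}

let catalog = [
    Item(title: "In-depth explainer", expectedClicks: 0.2, expectedSatisfaction: 0.9),
    Item(title: "Clickbait teaser",   expectedClicks: 0.8, expectedSatisfaction: 0.1),
]

// Maximizing the proxy picks the item that serves users worst.
let recommended = catalog.max { $0.expectedClicks < $1.expectedClicks }!
print("Recommended: \(recommended.title)")                           // Clickbait teaser
print("Satisfaction delivered: \(recommended.expectedSatisfaction)") // 0.1
```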

Data Privacy & Bias Issues

As the ideas of ethics and AI have moved into the mainstream, the concerns have mostly centered on data privacy and bias. Cambridge Analytica is the obvious example here, for its unprecedented breach of personal data. Undoubtedly, these are two critical areas that need relentless focus. However, relinquishing critical decision-making to algorithms, particularly within public services, could have serious consequences. Essentially, AI ethicists are working on understanding where AI systems discriminate against individuals or arrive at misleading decisions about them.
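
One simplified version of that work is auditing a model's decisions for group-level disparities. The sketch below is hypothetical, with invented data and type names, and shows only the arithmetic behind one common check, comparing approval rates across groups (demographic parity):

```swift
// Hypothetical audit sketch: compare a model's approval rates across groups.
struct Decision {
    let group: String   // a protected attribute, e.g. "A" or "B"
    let approved: Bool  // the model's decision for this person
}

func approvalRate(in decisions: [Decision], group: String) -> Double {
    let members = decisions.filter { $0.group == group }
    guard !members.isEmpty else { return 0 }
    return Double(members.filter { $0.approved }.count) / Double(members.count)
}

// Invented audit data; a real audit would use logged production decisions.
let audit: [Decision] = [
    Decision(group: "A", approved: true),  Decision(group: "A", approved: true),
    Decision(group: "A", approved: false), Decision(group: "B", approved: true),
    Decision(group: "B", approved: false), Decision(group: "B", approved: false),
]

// A large gap between groups flags possible disparate impact worth investigating.
let gap = abs(approvalRate(in: audit, group: "A") - approvalRate(in: audit, group: "B"))
print("Demographic parity gap: \(gap)")  // ≈ 0.33 here
```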

AI alignment professionals, meanwhile, are focusing on how societies are outsourcing more and more decision-making to AI systems, especially in how resources are allocated, from school places to social welfare. It should be noted that our policymakers and lawmakers do not understand these systems anywhere near well enough to legislate and regulate them.

AI Ethics as a PR Strategy?

While AI ethicists and alignment researchers don’t see eye to eye on everything, they both share a disdain for “ethics designed by PR committees.” In a recent tweet, Jack Clark, co-founder of Anthropic, an AI safety and research company, shed some light on how moral concerns around AI are treated in many cases:

“A surprisingly large fraction of AI policy work at large technology companies is about doing ‘follow the birdie’ with government – getting them to look in one direction, and away from another area of tech progress,” said Clark.

This is not encouraging.

Indeed, many AI ethicists and alignment professionals have described how tech companies treat the growing concerns around AI as a box-ticking exercise. The number one priority is to eliminate, as far as possible, the possibility of a PR disaster, rather than to build AI capabilities founded on principles of open collaboration, with instruments for challenging the system. This disillusionment is so widespread that in 2021, Pew Research Center published a survey entitled “Experts Doubt Ethical AI Design Will Be Broadly Adopted as the Norm Within the Next Decade.”

Our Limited Understanding of AI

Rather than being cynical about PR overall, though, we should recognize the role this business practice can, and should, play in AI ethics. There is a reason Big Tech is not being held accountable for AI tools that affect millions of people: there is limited public understanding of what is truly happening with AI.

That limited understanding comes from poor digital literacy. Irrespective of socio-economic background, the majority of people don’t have a mental model for understanding algorithms, or for where their data travels and resides. But when the citizenry gets behind an issue, big businesses sit up and take notice.

Think of Apple and its App Tracking Transparency (ATT) feature. Being able to afford Apple means being able to afford a more robust contract for data privacy. No one is claiming Apple did this exclusively out of the goodness of its heart; there is a strong brand and revenue strategy behind the move, and Apple was only able to make it because of its walled-garden operating system. But Apple has caught the public’s attention, and the public is increasingly interested in learning how to get better transparency about where their data is shared, who has access to it, and where it is stored.
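
For a sense of what the feature means in practice: under ATT, an iOS app that wants to track users across other companies' apps and websites must first present a system prompt, and anything short of an explicit yes counts as a no. The sketch below shows the core API call; the wrapping function and print statements are illustrative, not Apple's recommended integration:

```swift
import AppTrackingTransparency

// Under ATT (iOS 14.5+), tracking requires explicit, prompted consent.
// The app must also declare an NSUserTrackingUsageDescription in Info.plist.
func askForTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only now may the app read the advertising identifier (IDFA).
            print("Tracking allowed")
        case .denied, .restricted, .notDetermined:
            // The IDFA is zeroed out; the app must work without cross-app tracking.
            print("Tracking not allowed")
        @unknown default:
            print("Tracking not allowed")
        }
    }
}
```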

We are also seeing the European Union (EU) rigorously tackle AI ethics issues, including transparency, bias, and data protection. The EU introduced the Artificial Intelligence Act in April 2021; the act is currently going through an evaluation and rewrite process and is expected to take effect in early 2023, with a “broad impact on the use of AI and machine learning for citizens and companies around the world.”

According to a presentation on the law by the European Commission, the EU’s governing body, the act seeks to codify the EU’s regulatory and legal framework for how AI is used. This framework “requires AI to be legally, ethically and technically robust, while respecting democratic values, human rights and the rule of law.”

A Commitment to Ethics

Technology companies that truly want to build AI solutions that improve people’s lives will enhance their own sustainability by communicating transparently with all stakeholders. There is too much at stake for this to be relegated to a box-ticking exercise. It is necessary to demonstrate a commitment to AI ethics and AI alignment that goes beyond lip service, truly building helpful and non-harmful AI tools and systems.

  • Artificial Intelligence
  • Connectivity
  • Data Analytics
  • Privacy
  • Security
