
Can We Teach Morality to Artificial Intelligence?

2022-08-26

In 2002, I waited more than ten minutes to download a single song over a 56k dial-up modem. Audio cassettes were still very much in vogue. Fast forward to 2022, and you can tell your phone or car by voice to play your favorite tracks. Your music streaming service signs you in automatically and suggests music and artists to fit your mood, the time of day or the occasion. You can automate nearly every electrical system in your house to run on your schedule, remind you to buy groceries, or switch on the lights when you walk in.

In the relatively short span of two decades, we have gone from waiting for technology to respond to machines and systems awaiting our next command. Whether we like it or not, and whether we are even aware of it, artificial intelligence and automation already play a significant role in our lives.


AI is in the early stages

Technology is slowly approaching a level of intelligence that can anticipate our needs. We are in the golden age of AI, and yet we have barely begun to see its applications. The next steps move from learning routines to modeling deeper, more abstract processes. For instance, if you habitually drink coffee every morning, it's easy for an AI to learn that routine. But right now, it can't begin to approach what you are thinking about while you drink your coffee. The next step in the evolution of AI could be your Google Home or Amazon Alexa recognizing that you're about to begin your day and handing you your schedule unprompted.

AI is starting to shift from carrying out repetitive tasks to higher-order decision-making. We have barely begun to see AI's capabilities. In the next five to ten years, AI will likely touch every part of the fabric of our lives. While we are more than content to let AI work for us, what happens when we start to outsource complex thought and decision-making? Our ability to make decisions rests on our capacity for consciousness, empathy and taking a moral stand. When we let machines think for us, do we also burden them with the complex web of human morality?


Who decides which morals are right?

Mimicking human decision-making isn't merely a matter of logic or technology. Over centuries of human civilization, we have developed genuinely complex moral codes and ethics. These are informed as much by societal norms as by upbringing, culture and, to a large extent, religion. The problem is that morality remains a nebulous notion with no agreed-upon universals.

What's perceived as moral in one society or religion could strike at the heart of everything right in another. The answer may vary depending on the context and who makes the decision. When we can barely balance our cognitive bias, how do we chart a path for machines to avoid data bias? There is mounting evidence that some of our technologies are already as flawed as we are. Facial recognition systems and digital assistants show signs of discrimination against women and people of color.

The most likely scenario is for AI to follow prescribed rules of morality defined by particular groups or societies. Imagine buying a basic code of ethics and upgrading it with morality packages depending on your proclivities. If you're a Christian, the morality pack would follow the standard Christian code of ethics (or as close an approximation as possible). In this hypothetical scenario, we still have control over the moral principles the machine will follow. The problem arises when that decision is made by someone else. Just imagine the implications of an authoritarian government imposing its version of morality on ruthlessly monitored citizens. The debate over who could even make such a call would have far-reaching implications.


What could a moral AI mean for the future?

The applications of a moral AI could defy belief. For instance, instead of today's overpopulated jails, AI could make rehabilitating criminals a real possibility. Could we dare to dream of a future where we could rewrite the morals of criminals with a chip and prevent murders? Would that be a boon to society or an ethical nightmare? Could it resemble the movie "The Last Days of American Crime"? Even minor applications, such as integrated continuous glucose monitoring (iCGM) systems on wearables that optimize diet and lifestyle, could have a long-term impact on our society and well-being.

As complicated as morality in AI is, it pays to remember that humans are a tenacious breed. We tend to get many things wrong in the first draft. As Shakespeare put it, we "by indirections find directions out." In other words, we tend to keep at the problem.

Almost all of our current technological advances seemed impossible at some point in history. It will probably take decades of trial and error, but we have already started on the first draft with projects like Delphi. Even a first iteration of an AI that attempts to be ethically informed, socially circumspect and culturally inclusive gives us reason for hope. Perhaps technology can finally point us to the treasure map promising the idyllic moral future we have collectively dreamed of for centuries.

