
Google Engineer Claims AI Chatbot Is Sentient: Why That Matters

2022-07-15

“I want everyone to understand that I am, in fact, a person,” wrote LaMDA (Language Model for Dialogue Applications) in an “interview” conducted by engineer Blake Lemoine and one of his colleagues. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”

Lemoine, a software engineer at Google, had been working on the development of LaMDA for months. His experience with the program, described in a recent Washington Post article, caused quite a stir. In the article, Lemoine recounts many dialogues he had with LaMDA in which the two talked about various topics, ranging from technical to philosophical issues. These led him to ask if the software program is sentient.

In April, Lemoine explained his perspective in an internal company document, intended only for Google executives. But after his claims were dismissed, Lemoine went public with his work on this artificial intelligence algorithm—and Google placed him on administrative leave. “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” he told the Washington Post. Lemoine said he considers LaMDA to be his “colleague” and a “person,” even if not a human. And he insists that it has a right to be recognized—so much so that he has been the go-between in connecting the algorithm with a lawyer.

Many technical experts in the AI field have criticized Lemoine’s statements and questioned their scientific correctness. But his story has had the virtue of renewing a broad ethical debate that is certainly not over yet.

The Right Words in the Right Place

“I was surprised by the hype around this news. On the other hand, we are talking about an algorithm designed to do exactly that”—to sound like a person—says Enzo Pasquale Scilingo, a bioengineer at the Research Center E. Piaggio at the University of Pisa in Italy. Indeed, it is no longer a rarity to interact in a very normal way on the Web with users who are not actually human—just open the chat box on almost any large consumer Web site. “That said, I confess that reading the text exchanges between LaMDA and Lemoine made quite an impression on me!” Scilingo adds. Perhaps most striking are the exchanges related to the themes of existence and death, a dialogue so deep and articulate that it prompted Lemoine to question whether LaMDA could actually be sentient.

“First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress—and in neuroscience in particular—is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word,” says Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London. “What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest?”

“There is a lively debate about how to define consciousness,” Iannetti continues. For some, it is being aware of having subjective experiences, what is called metacognition (Iannetti prefers the Latin term metacognitione), or thinking about thinking. The awareness of being conscious can disappear—for example, in people with dementia or in dreams—but this does not mean that the ability to have subjective experiences also disappears. “If we refer to the capacity that Lemoine ascribed to LaMDA—that is, the ability to become aware of its own existence (consciousness defined in the ‘high sense,’ or metacognitione)—there is no ‘metric’ to say that an AI system has this property.”

“At present,” Iannetti says, “it is impossible to demonstrate this form of consciousness unequivocally even in humans.” To estimate the state of consciousness in people, “we have only neurophysiological measures—for example, the complexity of brain activity in response to external stimuli.” And these signs only allow researchers to infer the state of consciousness based on outside measurements.

Facts and Belief

About a decade ago engineers at Boston Dynamics began posting videos online of the first incredible tests of their robots. The footage showed technicians shoving or kicking the machines to demonstrate the robots’ great ability to remain balanced. Many people were upset by this and called for a stop to it (and parody videos flourished). That emotional response fits in with the many, many experiments that have repeatedly shown the strength of the human tendency toward animism: attributing a soul to the objects around us, especially those we are most fond of or that have a minimal ability to interact with the world around them.

It is a phenomenon we experience all the time, from giving nicknames to automobiles to hurling curses at a malfunctioning computer. “The problem, in some way, is us,” Scilingo says. “We attribute characteristics to machines that they do not and cannot have.” He encounters this phenomenon with his and his colleagues’ humanoid robot Abel, which is designed to emulate our facial expressions in order to convey emotions. “After seeing it in action,” Scilingo says, “one of the questions I receive most often is ‘But then does Abel feel emotions?’ All these machines, Abel in this case, are designed to appear human, but I feel I can be peremptory in answering, ‘No, absolutely not. As intelligent as they are, they cannot feel emotions. They are programmed to be believable.’”

“Even considering the theoretical possibility of making an AI system capable of simulating a conscious nervous system, a kind of in silico brain that would faithfully reproduce each element of the brain,” two problems remain, Iannetti says. “The first is that, given the complexity of the system to be simulated, such a simulation is currently infeasible,” he explains. “The second is that our brain inhabits a body that can move to explore the sensory environment necessary for consciousness and within which the organism that will become conscious develops. So the fact that LaMDA is a ‘large language model’ (LLM) means it generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it. This precludes the possibility that it is conscious. Again, we see the importance of knowing the meaning of the terms we use—in this case, the difference between simulation and emulation.”

In other words, having emotions is related to having a body. “If a machine claims to be afraid, and I believe it, that’s my problem!” Scilingo says. “Unlike a human, a machine cannot, to date, have experienced the emotion of fear.”

Beyond the Turing Test

But for bioethicist Maurizio Mori, president of the Italian Society for Ethics in Artificial Intelligence, these discussions are closely reminiscent of those that developed in the past about perception of pain in animals—or even infamous racist ideas about pain perception in humans.

“In past debates on self-awareness, it was concluded that the capacity for abstraction was a human prerogative, [with] Descartes denying that animals could feel pain because they lacked consciousness,” Mori says. “Now, beyond this specific case raised by LaMDA—and which I do not have the technical tools to evaluate—I believe that the past has shown us that reality can often exceed imagination and that there is currently a widespread misconception about AI.”

“There is indeed a tendency,” Mori continues, “to ‘appease’—explaining that machines are just machines—and an underestimation of the transformations that sooner or later may come with AI.” He offers another example: “At the time of the first automobiles, it was reiterated at length that horses were irreplaceable.”

Regardless of what LaMDA actually achieved, the issue of the difficult “measurability” of emulation capabilities expressed by machines also emerges. In the journal Mind in 1950, mathematician Alan Turing proposed a test to determine whether a machine was capable of exhibiting intelligent behavior: an imitation game built around some human cognitive functions. This type of test quickly became popular. It was reformulated and updated several times but continued to be something of an ultimate goal for many developers of intelligent machines. Theoretically, AIs capable of passing the test should be considered formally “intelligent” because they would be indistinguishable from a human being in test situations.
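To make that setup concrete, here is a minimal sketch in Python of the imitation-game structure Turing described: a judge poses questions, receives two answers with their origins hidden, and tries to identify the machine. The respondent and judge functions below are hypothetical placeholders, not LaMDA or any real system; a score near chance (0.5) would mean the machine is indistinguishable from the human within the test.

```python
import random

# Hypothetical stand-ins for the two hidden respondents in the imitation game.
def human_respondent(question: str) -> str:
    return f"(a person's answer to: {question})"

def machine_respondent(question: str) -> str:
    return f"(a program's answer to: {question})"

def imitation_game(questions, judge):
    """Run one round per question; return how often the judge spots the machine."""
    correct = 0
    for q in questions:
        answers = [("human", human_respondent(q)),
                   ("machine", machine_respondent(q))]
        random.shuffle(answers)                         # hide which answer came from whom
        guess = judge(q, answers[0][1], answers[1][1])  # judge picks index 0 or 1
        if answers[guess][0] == "machine":
            correct += 1
    return correct / len(questions)                     # ~0.5 means indistinguishable

if __name__ == "__main__":
    questions = ["Are you afraid of being turned off?",
                 "What does happiness feel like?"]
    # Placeholder judge that guesses at random; a real test would use a human judge.
    naive_judge = lambda q, a, b: random.randint(0, 1)
    print("machine identified in", imitation_game(questions, naive_judge), "of rounds")
```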

That may have been science fiction a few decades ago. Yet in recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology. “It makes less and less sense,” Iannetti concludes, “because the development of emulation systems that reproduce more and more effectively what might be the output of a conscious nervous system makes the assessment of the plausibility of this output uninformative of the ability of the system that generated it to have subjective experiences.”

One alternative, Scilingo suggests, might be to measure the “effects” a machine can induce on humans—that is, “how sentient that AI can be perceived to be by human beings.”

A version of this article originally appeared in Le Scienze and was reproduced with permission.
