
We Shouldn’t Try to Make Conscious Software—Until We Should

2022-08-09

Robots or advanced artificial intelligences that “wake up” and become conscious are a staple of thought experiments and science fiction. Whether or not this is actually possible remains a matter of great debate. All of this uncertainty puts us in an unfortunate position: we do not know how to make conscious machines, and (given current measurement techniques) we won’t know if we have created one. At the same time, this issue is of great importance, because the existence of conscious machines would have dramatic ethical consequences.

We cannot directly detect consciousness in computers and the software that runs on them, any more than we can in frogs and insects. But this is not an insurmountable problem. We can detect light we cannot see with our eyes using instruments that measure nonvisible forms of light, such as x-rays. This works because we have a theory of electromagnetism that we trust, and we have instruments that give us measurements we reliably take to indicate the presence of something we cannot sense. Similarly, we could develop a good theory of consciousness to create a measurement that might determine whether something that cannot speak was conscious or not, depending on how it worked and what it was made of.

Unfortunately, there is no consensus theory of consciousness. A recent survey of consciousness scholars showed that only 58 percent of them thought the most popular theory, global workspace (which says that conscious thoughts in humans are those broadly distributed to other unconscious brain processes), was promising. The top three most popular theories of consciousness, including global workspace, fundamentally disagree on whether, or under what conditions, a computer might be conscious. The lack of consensus is a particularly big problem because each measure of consciousness in machines or nonhuman animals depends on one theory or another. There is no independent way to test an entity’s consciousness without deciding on a theory.

If we respect the uncertainty that we see across experts in the field, the rational way to think about the situation is that we are very much in the dark about whether computers could be conscious—and if they could be, how that might be achieved. Depending on which (perhaps as-yet-hypothetical) theory turns out to be correct, there are three possibilities: computers will never be conscious, they might be conscious someday, or some already are.

Meanwhile, very few people are deliberately trying to make conscious machines or software. The reason for this is that the field of AI is generally trying to make useful tools, and it is far from clear that consciousness would help with any cognitive task we would want computers to do.

Like consciousness, the field of ethics is rife with uncertainty and lacks consensus about many fundamental issues—even after thousands of years of work on the subject. But one common (though not universal) thought is that consciousness has something important to do with ethics. Specifically, most scholars, whatever ethical theory they might endorse, believe that the ability to experience pleasant or unpleasant conscious states is one of the key features that makes an entity worthy of moral consideration. This is what makes it wrong to kick a dog but not a chair. If we make computers that can experience positive and negative conscious states, what ethical obligations would we then have to them? We would have to treat a computer or piece of software that could experience joy or suffering with moral consideration.

We make robots and other AIs to do work we cannot do, but also work we do not want to do. To the extent that these AIs have conscious minds like ours, they would deserve similar ethical consideration. Of course, just because an AI is conscious doesn’t mean that it would have the same preferences we do, or consider the same activities unpleasant. But whatever its preferences are, they would need to be duly considered when putting that AI to work. Making a conscious machine do work it is miserable doing is ethically problematic. This much seems obvious, but there are deeper problems.

Consider artificial intelligence at three levels. There is a computer or robot—the hardware on which the software runs. Next is the code installed on the hardware. Finally, every time this code is executed, we have an “instance” of that code running. To which level do we have ethical obligations? It could be that the hardware and code levels are irrelevant, and the conscious agent is the instance of the code running. If someone has a computer running a conscious software instance, would we then be ethically obligated to keep it running forever?
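To make the three levels concrete, here is a minimal, purely illustrative sketch in Python (the class name and behavior are hypothetical, not from the article): the class definition stands in for the "code" level, each object created from it is a separate running "instance" with its own state, and shutting one instance down leaves the code and the hardware untouched.

# Illustrative sketch only: the same source code can be launched as
# many independent instances, each with its own state and lifetime.
class ConsciousProgram:              # the "code" level: a static description
    def __init__(self, label):
        self.label = label           # per-instance state begins here
        self.running = True

    def shut_down(self):             # ends this instance only; the code
        self.running = False         # and the hardware are unaffected

# The "instance" level: two separate executions of the same code.
instance_a = ConsciousProgram("run-1")
instance_b = ConsciousProgram("run-2")
instance_a.shut_down()
print(instance_a.running, instance_b.running)  # prints: False True

If the morally relevant entity is the running instance rather than the code, the question above becomes whether shutting down instance_a is ethically different from deleting the class definition or powering off the machine.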

Consider further that creating any software is mostly a task of debugging—running instances of the software over and over, fixing problems and trying to make it work. What if one were ethically obligated to keep running every instance of the conscious software even during this development process? This might be unavoidable: computer modeling is a valuable way to explore and test theories in psychology. Even dabbling in conscious software, if done ethically, would quickly become a large computational and energy burden without any clear end.

All of this suggests that we probably should not create conscious machines if we can help it.

Now I’m going to turn that on its head. If machines can have conscious, positive experiences, then in the field of ethics, they are considered to have some level of “welfare,” and running such machines can be said to produce welfare. In fact, machines eventually might be able to produce welfare, such as happiness or pleasure, more efficiently than biological beings do. That is, for a given amount of resources, one might be able to produce more happiness or pleasure in an artificial system than in any living creature.

Suppose, for example, that a future technology allowed us to create a small computer that could be happier than a euphoric human being while requiring only as much energy as a light bulb. In this case, according to some ethical positions, humanity's best course of action would be to create as much artificial welfare as possible—be it in animals, humans or computers. Future humans might set the goal of turning all attainable matter in the universe into machines that efficiently produce welfare, perhaps 10,000 times more efficiently than welfare can be generated in any living creature. This strange possible future might be the one with the most happiness.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
