The Information Revolution Put Tech Disciplines at the Center — But Now It Needs the Humanities

2022-08-10

No university campus is complete without a fierce rivalry between STEM and humanities students — and it's fair to say that the scientists have been winning the competition for a long time now. Artists and thinkers may have dominated during the Renaissance, but the Industrial Revolution has been the era of the tech worker. Apple's market cap is bigger than 96% of world economies, and digitally transformed enterprises now make up almost half of global GDP.

But as technology achieves more milestones and reaches a certain critical mass, I think the humanities are about to make a long-awaited comeback. Technological innovation — especially artificial intelligence — is crying out for us to ponder critical questions about human nature. Let's look at some of the biggest debates and how disciplines like philosophy, history, law and politics can help us answer them.

Related: This Is What You Should Know About AI's Impending Power

The rise of sentient AI

The potentially ominous or destructive consequences of artificial intelligence have been the subject of countless books, films and TV shows. For a while, that might have seemed like nothing more than fear-mongering speculation — but as technology continues to advance, ethical debates are starting to seem more relevant.

As AI becomes capable of replacing an increasing number of professions, leaving many people jobless, all kinds of moral dilemmas arise. Is it the role of the government to offer a universal basic income and completely restructure our society, or do we let people fend for themselves and call it survival of the fittest?

Then there's the question of how ethical it is to use AI to enhance human performance and avoid human failure in the first place. Where do we draw the line between a "human" and a "machine?" And if the lines become blurred, do robots need the same rights as humans? The decisions we make will ultimately determine the future of the human race and could make us stronger or weaker (or see us eliminated completely).

Humans or machines?

One of the AI advances raising eyebrows is Google's Language Model for Dialogue Applications (LaMDA). The system was first introduced as a way of connecting different Google services together, but it ended up sparking debate about whether LaMDA was, in fact, sentient — as Google engineer Blake Lemoine claimed after witnessing how realistic its conversations were.

Ultimately, the general consensus was that Lemoine's arguments were nonsensical. LaMDA was only using predictive statistical techniques to hold a conversation — the fact that its algorithms are sophisticated enough to sustain a seemingly natural dialogue doesn't mean it is sentient. However, this does raise an important question about where things would stand if a theoretical AI system were able to do everything a human can, including having original thoughts and feelings. Would it deserve the same rights humans have?
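
To see what "predictive statistical techniques" means here, consider the toy model below. It is a minimal illustrative sketch, not Google's actual LaMDA (which uses a vastly larger neural network trained on dialogue data), but the core idea is the same: continue a conversation by repeatedly picking a statistically likely next word.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the billions of words a real model trains on.
corpus = (
    "i am happy to talk with you . "
    "i am glad you asked . "
    "you asked a good question . "
    "talk with me about anything ."
).split()

# Count which words follow which: an unsmoothed bigram model.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def continue_text(prompt_word: str, length: int = 8) -> str:
    """Extend the prompt by sampling a statistically likely next word each step."""
    word, output = prompt_word, [prompt_word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        # Sampling from the raw follower list weights choices by observed frequency.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(continue_text("i"))  # e.g. "i am glad you asked . ..."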

Related: What Are Some of the Ethical Concerns of Artificial Intelligence?

The Turing test

The debate over what exactly we should see as human is nothing new. Back in 1950, Alan Turing devised the Turing test to determine whether a machine can behave intelligently enough, and similarly enough to a human, that we might submit it has some level of "consciousness": if a human judge cannot reliably tell the machine's conversation apart from a person's, the machine passes.

However, not everyone agrees. The philosopher John Searle countered with a thought experiment known as the "Chinese room": the program of a machine that converses only in Chinese is handed, in the form of instruction cards, to a person who speaks no Chinese. By following the instructions on the cards and passing notes through a slot in the door, that person could fool someone outside the room into thinking they spoke Chinese; but clearly, this isn't the case. Manipulating symbols by rule, Searle argued, is not the same as understanding them.
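
A few lines of code make Searle's point concrete. The sketch below is a deliberately trivial toy, not anyone's real system: it answers Chinese notes by pure rule-following, producing fluent output while understanding nothing.

```python
# A toy "Chinese room": map input symbols to output symbols by rote rule.
# Neither the program nor the person holding the cards understands Chinese.
RULE_BOOK = {
    "你好吗?": "我很好,谢谢!",            # "How are you?" -> "I'm fine, thanks!"
    "你会说中文吗?": "当然,说得很流利。",  # "Do you speak Chinese?" -> "Of course, fluently."
}

def room_reply(note: str) -> str:
    """Follow the rule book exactly, like the person in Searle's room."""
    return RULE_BOOK.get(note, "请再说一遍。")  # Fallback: "Please say that again."

print(room_reply("你好吗?"))  # Fluent Chinese comes out; no comprehension inside.
```

From outside the slot in the door, the replies look competent. Inside, there is only symbol shuffling, and Searle's claim is that, at bottom, this is all any program ever does.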

According to Lemoine, Google isn't willing to allow a Turing test to be performed on LaMDA, so it seems Searle isn't alone in his reservations. But who is going to settle these issues?

An area for the humanities

As more of our lives become enriched by AI, more of these questions will arise. 80,000 Hours, a nonprofit run by Oxford academics that focuses on how individuals can have the greatest impact in their careers, has highlighted positively shaping the development of artificial intelligence as one of the most prominent issues the world faces right now.

And although some of the work is likely to focus on research into technical solutions for programming AI in a way that works for humans, policy and ethical research are also set to play a huge role. We need people who can tackle debates such as which tasks humans derive fundamental value from performing and which should be handed over to machines, or how humans and machines can work together in human-machine teams (HMTs).

Then there are all the legal and political implications of a society filled with AI. For instance, if an AI engine running an autonomous car makes a mistake, who is responsible? There are cases to be made for the fault lying with the company that designed the model, with the human drivers whose behavior the model learned from, or with the AI itself.

For questions such as the last one, lawyers and policymakers are needed to analyze the issues at hand and advise governments on how to react. Their efforts would be complemented by historians and philosophers, who can look back and see where we've fallen short, what has kept us going as a human race and how AI can fit into this. Anthropologists will also have plenty to offer based on their studies of human societies and civilizations over time.

Related: Why AI and Humans Are Stronger Together Than Apart

An exponential rise in AI requires revitalizing the humanities

The rise of AI may happen faster than anyone could anticipate. Metcalfe's Law says that the value of a network grows in proportion to the square of its number of users — meaning each new member adds more value than the one before, and the network becomes dramatically more powerful as it grows. We've seen this happen with the spread of social networks, but the law is a potentially terrifying prospect when we talk about the fast ascent of AI. And to make sense of the issues outlined in this article, we need thinkers from all disciplines. Yet the number of humanities degrees awarded in the U.S. fell by roughly 25% between 2012 and 2020.
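
For the record, the law is usually stated as follows (a standard formulation, with a network's value read as its number of possible pairwise connections among n users):

```latex
% Metcalfe's Law: a network's value scales with its pairwise connections.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}
% Worked example: growing from 1{,}000 to 2{,}000 users roughly
% quadruples value, since (2000 \cdot 1999)/(1000 \cdot 999) \approx 4.
```

Quadratic growth is slower than literally exponential growth, but the upshot is the same: each new user makes the network disproportionately more valuable.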

As AI becomes a greater part of our daily lives and technology continues to advance, nobody in their right mind would deny that we need brilliant algorithm developers, AI researchers and engineers. But we'll also need philosophers, policymakers, anthropologists and other thinkers to guide AI, set limits and help in situations of human failure. This requires people with a deep understanding of the world.

At a time when the humanities are largely viewed as "pointless degrees" and young people are discouraged from studying them, I would contend that there's a unique opportunity to revitalize them as disciplines that are more relevant than ever — but this requires collaborations between technical and non-technical disciplines that are complex to build. Either way, these functions will inevitably be performed; how well will depend on our ability to prepare future professionals who bring both multidisciplinary and interdisciplinary views to the humanities.
