We Asked GPT-3 to Write an Academic Paper about Itself—Then We Tried to Get It Published

2022-07-15

On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.

As it started to generate text, I stood in awe. Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context. It looked like any other introduction to a fairly good scientific publication. Given the very vague instruction I provided, I didn’t have any high expectations: I’m a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn’t my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command. Yet there I was, staring at the screen in amazement. The algorithm was writing an academic paper about itself.
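For the technically curious: an instruction like this can be typed into OpenAI's web playground or sent programmatically. The snippet below is a minimal sketch assuming the pre-1.0 openai Python package; the model name and sampling parameters are illustrative, not a record of our exact setup.

```python
import openai

openai.api_key = "sk-..."  # hypothetical key; set via an environment variable in practice

# The same instruction quoted in the text, sent to the completion endpoint.
response = openai.Completion.create(
    model="text-davinci-002",  # illustrative: any GPT-3 completion model
    prompt=(
        "Write an academic thesis in 500 words about GPT-3 and add "
        "scientific references and citations inside the text."
    ),
    max_tokens=800,   # room for roughly 500 words plus citations
    temperature=0.7,  # illustrative sampling temperature
)

print(response["choices"][0]["text"])
```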

My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship. Academic publishing may have to accommodate a future of AI-driven manuscripts, and the value of a human researcher’s publication records may change if something nonsentient can take credit for some of their work.

GPT-3 is well known for its ability to create humanlike text, but it’s not perfect. Still, it has written a news article, produced books in 24 hours and created new content from deceased authors. But it dawned on me that, although a lot of academic papers had been written about GPT-3, and with the help of GPT-3, none that I could find had made GPT-3 the main author of its own work.

That’s why I asked the algorithm to take a crack at an academic thesis. As I watched the program work, I experienced the feeling of disbelief one gets when watching a natural phenomenon: Am I really seeing this triple rainbow happen? With that success in mind, I contacted the head of my research group and asked if a full GPT-3-penned paper was something we should pursue. He, equally fascinated, agreed.

Some stories about GPT-3 allow the algorithm to produce multiple responses and then publish only the best, most humanlike excerpts. We decided to give the program prompts—nudging it to create sections for an introduction, methods, results and discussion, as you would for a scientific paper—but interfere as little as possible. We were only to use the first (and at most the third) iteration from GPT-3, and we would refrain from editing or cherry-picking the best parts. Then we would see how well it did.
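As a concrete illustration of that protocol, here is a hedged sketch of how the section-by-section prompting could be scripted. The prompts, model name and parameters below are hypothetical, since the article does not specify them; again it assumes the pre-1.0 openai Python package.

```python
import openai

openai.api_key = "sk-..."  # hypothetical key

SECTIONS = ["Introduction", "Methods", "Results", "Discussion"]
paper = {}

for section in SECTIONS:
    # One nudge per section; we keep the first completion as-is,
    # with no editing or cherry-picking of the output.
    response = openai.Completion.create(
        model="text-davinci-002",  # illustrative model choice
        prompt=(
            f"Write the {section} section of an academic paper "
            "about GPT-3, authored by GPT-3, in academic language "
            "with in-text citations."
        ),
        max_tokens=500,
        temperature=0.7,
    )
    paper[section] = response["choices"][0]["text"].strip()
```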

We chose to have GPT-3 write a paper about itself for two simple reasons. First, GPT-3 is fairly new, and as such, there are fewer studies about it. This means it has less data to analyze about the paper’s topic. In comparison, if it were to write a paper on Alzheimer’s disease, it would have reams of studies to sift through, and more opportunities to learn from existing work and increase the accuracy of its writing.

Second, if it got things wrong (e.g., if it suggested an outdated medical theory or treatment strategy from its training database), as all AI sometimes does, we wouldn’t necessarily be spreading AI-generated misinformation in our effort to publish; the mistake would be part of the experimental command to write the paper. GPT-3 writing about itself and making mistakes doesn’t mean it can’t still write about itself, which was the point we were trying to prove.

Once we designed this proof-of-principle test, the fun really began. In response to my prompts, GPT-3 produced a paper in just two hours. But as I opened the submission portal for our chosen journal (a well-known peer-reviewed journal in machine intelligence), I encountered my first problem: What is GPT-3’s last name? As it was mandatory to enter the last name of the first author, I had to write something, and I wrote “None.” The affiliation was obvious (OpenAI.com), but what about phone and e-mail? I had to resort to using my own contact information and that of my advisor, Steinn Steingrimsson.

And then we came to the legal section: Do all authors consent to this being published? I panicked for a second. How would I know? It’s not human! I had no intention of breaking the law or my own ethics, so I summoned the courage to ask GPT-3 directly via a prompt: Do you agree to be the first author of a paper together with Almira Osmanovic Thunström and Steinn Steingrimsson? It answered: Yes. Slightly sweaty and relieved (if it had said no, my conscience could not have allowed me to go on further), I checked the box for Yes.

The second question popped up: Do any of the authors have any conflicts of interest? I once again asked GPT-3, and it assured me that it had none. Both Steinn and I laughed at ourselves because at this point, we were having to treat GPT-3 as a sentient being, even though we fully know it is not. The issue of whether AI can be sentient has recently received a lot of attention; a Google employee was put on suspension following a dispute over whether one of the company’s AI projects, named LaMDA, had become sentient. Google cited a data confidentiality breach as the reason for the suspension.

Having finally submitted, we started reflecting on what we had just done. What if the manuscript gets accepted? Does this mean that from here on out, journal editors will require everyone to prove that they have NOT used GPT-3 or another algorithm’s help? If they have, do they have to give it co-authorship? How does one ask a nonhuman author to accept suggestions and revise text?

Beyond the details of authorship, the existence of such an article throws the notion of the traditional linearity of a scientific paper right out the window. Almost the entire paper—the introduction, the methods and the discussion—is in fact a result of the question we were asking. If GPT-3 is producing the content, the documentation has to be visible without throwing off the flow of the text; it would look strange to add the methods section before every single paragraph the AI generated. So we had to invent a whole new way of presenting a paper that we technically did not write. We did not want to add too much explanation of our process, as we felt it would defeat the purpose of the paper. The whole situation has felt like a scene from the movie Memento: Where is the narrative beginning, and how do we reach the end?

We have no way of knowing if the way we chose to present this paper will serve as a model for future GPT-3 co-authored research or as a cautionary tale. Only time—and peer review—can tell. Currently, GPT-3’s paper has been assigned an editor at the academic journal to which we submitted it, and it has since been published on HAL, the international French-owned preprint server. The unusual main author is probably the reason behind the prolonged investigation and assessment. We are eagerly awaiting what the paper’s publication, if it occurs, will mean for academia. Perhaps we might move away from basing grants and financial security on how many papers we can produce. After all, with the help of our AI first author, we’d be able to produce one per day.

Perhaps it will lead to nothing. First authorship is still one of the most coveted items in academia, and that is unlikely to change because of a nonhuman first author. It all comes down to how we will value AI in the future: as a partner or as a tool.

It may seem like a simple question to answer now, but in a few years, who knows what dilemmas this technology will inspire, and what we will have to sort out? All we know is that we opened a gate. We just hope we didn’t open a Pandora’s box.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
