
How to Tell If a Photo Is an AI-Generated Fake

2023-04-06

You may have seen photographs that suggest otherwise, but former president Donald Trump wasn’t arrested last week, and the pope didn’t wear a stylish, brilliant white puffer coat. These recent viral hits were the fruits of artificial intelligence systems that process a user’s textual prompt to create images. They demonstrate how these programs have become very good very quickly—and are now convincing enough to fool an unwitting observer.

So how can skeptical viewers spot images that may have been generated by an artificial intelligence system such as DALL-E, Midjourney or Stable Diffusion? Each AI image generator—and each image from any given generator—varies in how convincing it may be and in what telltale signs might give its algorithm away. For instance, AI systems have historically struggled to mimic human hands and have produced mangled appendages with too many digits. As the technology improves, however, systems such as Midjourney V5 seem to have cracked the problem—at least in some examples. Across the board, experts say that the best images from the best generators are difficult, if not impossible, to distinguish from real images.

“It’s pretty amazing, in terms of what AI image generators are able to do,” says S. Shyam Sundar, a researcher at Pennsylvania State University who studies the psychological impacts of media technologies. “There’s been a giant leap in the last year or so in terms of image-generation abilities.”

Some of the factors behind this leap in ability include the ever-increasing number of images available to train such AI systems, as well as advances in data processing infrastructure and interfaces that make the technology accessible to regular Internet users, Sundar notes. The result is that artificially generated images are everywhere and can be “next to impossible to detect,” he says.

One recent experiment highlighted how well AI is able to deceive. Sophie Nightingale, a psychologist at Lancaster University in England who focuses on digital technology, co-authored a study that tested whether online volunteers could distinguish between passportlike headshots created by an AI system called StyleGAN2 and real images. The results were disheartening, even back in late 2021, when the researchers ran the experiment. “On average, people were pretty much at chance performance,” Nightingale says. “Basically, we’re at the point where it’s so realistic that people can’t reliably perceive the difference between those synthetic faces and actual, real faces—faces of actual people who really exist.” Although humans provided some help to the AI (researchers sorted through the images generated by StyleGAN2 to select only the most realistic ones), Nightingale says that someone looking to use such a program for nefarious purposes would likely do the same.

In a second test, the researchers tried to help the test subjects improve their AI-detecting abilities. They marked each answer right or wrong after participants responded, and they also prepared participants in advance by having them read through advice for detecting artificially generated images. That advice highlighted areas where AI algorithms often stumble, creating mismatched earrings, for example, or blurring a person’s teeth together. Nightingale also notes that algorithms often struggle to create anything more sophisticated than a plain background. But even with these interventions, participants’ accuracy only increased by about 10 percent, she says—and the AI system that generated the images used in the trial has since been upgraded to a new and improved version.

Ironically, as image-generating technology continues to improve, humans’ best defense against being fooled by an AI system may be yet another AI system: one trained to detect artificial images. Experts say that as AI image generation progresses, algorithms are better equipped than humans to detect some of the tiny, pixel-scale fingerprints of robotic creation.
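
The article doesn’t specify which fingerprints detectors look for, but one approach reported in the GAN-detection research literature is frequency analysis: the upsampling layers in many generators leave periodic, pixel-scale patterns that stand out in an image’s Fourier spectrum. Below is a minimal sketch of that idea in Python, assuming NumPy and Pillow; the file name and the 90 percent radius cutoff are arbitrary placeholders.

```python
import numpy as np
from PIL import Image

# Load a grayscale version of the image (file name is a placeholder).
image = np.asarray(Image.open("photo.png").convert("L"), dtype=np.float64)

# 2-D Fourier transform, shifted so low frequencies sit at the center
# and the pixel-scale (high-frequency) content sits toward the edges.
spectrum = np.fft.fftshift(np.fft.fft2(image))
magnitude = np.log1p(np.abs(spectrum))

# Compare average energy in the outermost frequency band with the rest;
# unusual spikes or grid patterns out there are one artifact left by
# some generators' upsampling layers.
h, w = magnitude.shape
yy, xx = np.ogrid[:h, :w]
radius = np.hypot(yy - h / 2, xx - w / 2)
cutoff = 0.9 * min(h, w) / 2
ratio = magnitude[radius > cutoff].mean() / magnitude[radius <= cutoff].mean()
print(f"high/low frequency energy ratio: {ratio:.3f}")
```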

Creating these AI detective programs works the same way as any other machine learning task, says Yong Jae Lee, a computer scientist at the University of Wisconsin–Madison. “You collect a data set of real images, and you also collect a data set of AI-generated images,” Lee says. “Then you can train a machine-learning model to distinguish the two.”
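
Lee’s description maps onto a standard supervised-learning recipe. The sketch below shows what that might look like in PyTorch; the directory layout, the ResNet-18 backbone, and the single training pass are illustrative assumptions rather than details from his work.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# ImageFolder expects one subdirectory per class, e.g.
#   dataset/real/*.jpg  and  dataset/generated/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("dataset", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Binary classifier: a small pretrained backbone with a two-way output head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the labeled data, for brevity; real training would run
# multiple epochs with a held-out validation set.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```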

Still, these systems have significant shortcomings, Lee and other experts say. Most such algorithms are trained on images from a specific AI generator and are unable to identify fakes produced by different algorithms. (Lee says he and a research team are working on a way to avoid that problem by training the detector to instead recognize what makes an image real.) Most detectors also lack the user-friendly interfaces that have tempted so many people to try the generative AI systems.
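
The article doesn’t say how Lee’s detector would recognize “what makes an image real,” but one family of techniques that fits the description is one-class (anomaly) detection: model only real images, then flag anything that falls outside that distribution, regardless of which generator produced it. A toy sketch with scikit-learn, using made-up stand-in data and deliberately crude pixel statistics as features:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def features(images: np.ndarray) -> np.ndarray:
    # images: (n, h, w) grayscale arrays; reduce each to crude statistics.
    # A real system would use learned features, not mean and std.
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

rng = np.random.default_rng(0)
real_images = rng.normal(0.5, 0.1, size=(200, 64, 64))  # stand-in "real" set

# Fit on real images only; no generator-specific fakes are needed.
detector = IsolationForest(random_state=0).fit(features(real_images))

candidate = rng.normal(0.9, 0.4, size=(1, 64, 64))      # suspicious image
print(detector.predict(features(candidate)))             # -1 = outlier
```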

Moreover, AI detectors will always be scrambling to keep up with AI image generators, some of which incorporate similar detection algorithms but use them as a way to learn how to make their fake output less detectable. “The battle between AI systems that generate images and AI systems that detect the AI-generated images is going to be an arms race,” says Wael AbdAlmageed, a research associate professor of computer science at the University of Southern California. “I don’t see any side winning anytime soon.”
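
That feedback loop is explicit in one family of generators. A generative adversarial network (GAN) such as StyleGAN2, the system from Nightingale’s study, trains an image generator G directly against a built-in detector D (the “discriminator”); in the standard 2014 formulation by Goodfellow and colleagues, the two play the minimax game

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

As D gets better at flagging fakes, training pushes G toward images that D can no longer distinguish from real ones: the same arms race, compressed into a single system.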

AbdAlmageed says no approach will ever be able to catch every single artificially produced image—but that doesn’t mean we should give up. He suggests that social media platforms need to begin confronting AI-generated content on their sites because these companies are better positioned to implement detection algorithms than individual users are.

And users need to more skeptically evaluate visual information by asking whether it’s false, AI-generated or harmful before sharing. “We as human species sort of grow up thinking that seeing is believing,” AbdAlmageed says. “That’s not true anymore. Seeing is not believing anymore.”
