
The Deepfake Threat is Real. Here Are 3 Ways to Protect Your Business

2022-08-22

Technological improvements advance business and society significantly. However, progress also brings new risks that are difficult to manage. Artificial intelligence (AI) is at the forefront of emerging tech and is finding its way into more applications than ever.

From automating clerical tasks to identifying hidden business drivers, AI has immense business potential. However, malicious AI use can harm enterprises and lead to an extreme loss of credibility.

The FBI recently highlighted a rising trend, fueled by remote work adoption, in which malicious actors used deepfakes to pose as interviewees for jobs at American companies. These actors stole the identities of U.S. citizens with the intent of gaining access to company systems. The implications for corporate espionage and security are immense.

How can companies combat the rising use of deepfakes even as the technology powering them grows more powerful than ever? Here are a few ways to mitigate security risks.

Related: A Successful Cybersecurity Company Isn't About Fancy Technology

Verify authenticity

Going back to the basics often works best when combating advanced technology. Deepfakes are created by stealing a person's identifying information, such as their pictures and ID information, and using an AI engine to make their digital likeness. Often, malicious actors use existing video, audio and graphics to mimic their victim's mannerisms and speech.

A recent case highlighted the extremes to which malicious actors will take this technology. Several European political leaders believed they were conversing with the Mayor of Kyiv, Vitali Klitschko, only to be informed that they had interacted with a deepfake.

The office of the Mayor of Berlin eventually discovered the ploy when a phone call to the Ukrainian embassy revealed that Klitschko was engaged elsewhere. Companies would do well to study the lessons of this incident: identity verification and seemingly simple checks can reveal deepfake use.

Companies risk encountering deepfakes when interviewing candidates for open remote positions. Rolling back remote work norms is not practical if companies wish to hire top talent these days. However, asking candidates to display some form of official identification, recording video interviews, and requiring new employees to visit company premises at least once shortly after hiring will mitigate the risk of hiring a deepfake actor.

While none of these methods prevents deepfake risk on its own, deployed together they reduce the probability of a malicious actor gaining access to company secrets. Just as two-factor authentication blocks malicious access to systems, these analog methods create roadblocks to deepfake use.

Other analog methods include verifying an applicant's references, including each reference's picture and identity. For instance, send the applicant's picture to the reference and ask them to confirm that they know the person. Verify the reference's own credentials by engaging with them through official or business domains.
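The layered approach described above can be sketched as a simple scoring rule: no single check is decisive, but a candidate must pass several independent checks before being cleared. This is a minimal illustration of the idea, not a real verification system; the check names below are hypothetical.

```python
# Layering independent identity checks, in the spirit of two-factor
# authentication: require several checks to pass, not just one.
# All check names here are hypothetical illustrations.

CheckResult = dict[str, bool]  # check name -> did the candidate pass it?

def verification_score(checks: CheckResult) -> int:
    """Count how many independent checks the candidate passed."""
    return sum(checks.values())

def candidate_cleared(checks: CheckResult, required: int = 3) -> bool:
    """Require a minimum number of passed checks before granting access."""
    return verification_score(checks) >= required

checks = {
    "official_id_shown_on_video": True,
    "recorded_interview_reviewed": True,
    "reference_confirmed_photo": True,
    "on_site_visit_completed": False,
}
print(candidate_cleared(checks))  # 3 of 4 checks passed -> True
```

The point of the threshold is defense in depth: a deepfake actor who defeats one check still has to defeat the others.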

Fight fire with fire

Deepfake technology leverages deep learning (DL) algorithms to mimic a person's actions and mannerisms. The results can be uncanny: given just a few data points, AI can create moving images and seemingly realistic videos of us.

Analog methods can combat deepfakes, but they take time. One way to detect deepfakes quickly is to turn the technology against itself: if DL algorithms can create deepfakes, why not use them to detect deepfakes too?

In 2020, Maneesh Agrawala of Stanford University created a solution that allowed filmmakers to insert words into the sentences of subjects on camera. To the naked eye, nothing was amiss. Filmmakers rejoiced, since they would no longer have to reshoot scenes because of faulty audio or dialogue. However, the negative implications of this technology were immense.

Aware of this issue, Agrawala and his team countered their software with another AI-based tool that detects anomalies between lip movements and word pronunciations. Deepfakes that impose words onto a video in a subject's voice cannot perfectly match the subject's lip movements and facial expressions.

Agrawala's solution can also be deployed to detect face swaps and other standard deepfake techniques. As with all AI applications, much depends on the data the algorithm is fed. However, even this variable reveals a connection between deepfake technology and the solutions that fight it.
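The core intuition behind this kind of detector can be shown with a toy sketch: when the speech audio and the lip movement come from different sources, the two signals tend to disagree. Below, a per-frame "mouth openness" series is compared against the audio loudness envelope, and a clip is flagged when the correlation is weak. This is an illustrative simplification under invented numbers, not the actual Stanford tool.

```python
# Toy lip-sync mismatch check: flag a clip when lip movement
# barely tracks the audio loudness envelope.

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def looks_dubbed(mouth_openness, audio_loudness, threshold=0.5):
    """Flag a clip whose lip movement is poorly correlated with the audio."""
    return pearson(mouth_openness, audio_loudness) < threshold

# Genuine clip: the mouth opens when the audio gets loud.
genuine = looks_dubbed([0.1, 0.8, 0.9, 0.2, 0.7], [0.2, 0.9, 0.8, 0.1, 0.8])
# Tampered clip: inserted words play over a nearly still mouth.
tampered = looks_dubbed([0.1, 0.15, 0.1, 0.12, 0.1], [0.2, 0.9, 0.8, 0.1, 0.8])
print(genuine, tampered)  # -> False True
```

Real detectors work on learned features rather than raw loudness, but the principle is the same: look for signals that a genuine recording keeps in agreement and a manipulated one cannot.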

Deepfakes rely on synthetic data: datasets extrapolated from real-world occurrences to account for multiple situations. For instance, synthetic data algorithms can process data from a military battlefield incident and use it to generate many more plausible incidents, varying ground conditions, participant readiness, weaponry conditions, and so on, and feed the results into simulations.

Companies can use synthetic data of this kind to combat deepfake use cases. By extrapolating data from current uses, AI can predict and detect edge use cases and expand our understanding of how deepfakes are evolving.

Related: A Business Leader's Beginner Guide to Cybersecurity

Accelerate digital transformation and education

Despite the sophistication of the technology combating deepfakes, Agrawala warns that there is no long-term solution to deepfakes. On the surface, this is a distressing message. However, companies can combat deepfakes by accelerating their digital transformation and educating employees on best practices.

For instance, deepfake awareness helps employees analyze and question the information they receive. Material circulating with claims that seem outlandish or out of proportion can be called out instantly. Companies can develop processes to verify identities in remote work situations and, given the deepfake threat, ensure their employees follow them.

Once again, these methods cannot combat deepfake dangers by themselves. However, combined with the techniques above, they give companies a robust framework that minimizes deepfake threats.

Advanced tech calls for innovative solutions

The ultimate solution to deepfake threats lies in technological advancement. Ironically, the answer to deepfakes lies within the technology that powers them. The future will undoubtedly reveal new ways of combating this threat. Meanwhile, companies must remain aware of the risks deepfakes pose and work to mitigate them.

Related: The Importance of Training: Cybersecurity Awareness like a Human Firewall
