The use of artificial intelligence to spread malware is increasing month by month, as platforms like YouTube and Facebook are used to propagate malicious links via AI-generated content and a fake ChatGPT browser extension. While the rise of generative AI chatbots like ChatGPT was always likely to be accompanied by a spike in cybercrime, researchers warn that social media sites should be more proactive in policing their platforms for harmful content as hackers grow more sophisticated.
Both YouTube and Facebook have seen their platforms abused by cybercriminals to target their users. Increasingly these malware campaigns are designed using AI and ChatGPT, making them harder to detect.
“The threat actors are getting so sophisticated that it becomes hard for even well-aware users to distinguish between what’s good and what’s bad,” said Allan Liska, CSIRT at security vendor Recorded Future.
AI and ChatGPT used to propagate malware campaigns on YouTube and Facebook
A new report from security company CloudSEK states that since November 2022 there has been a 200%-300% month-on-month increase in videos containing infostealer malware being uploaded to YouTube.
The videos masquerade as step-by-step guides on how to download expensive software like Photoshop, Premiere Pro and Autodesk 3DS Max for free. Links to the malware are concealed in the content’s description, and stealers found in the malicious videos include Vidar, RedLine and Raccoon.
AI-generated videos are often used in these campaigns because footage featuring humans with certain facial features has been found to be more popular, appearing more familiar and trustworthy to viewers.
“We have observed that every hour five to 10 ‘crack software’ download videos containing malicious links are uploaded to YouTube,” the report says. “At any given time, if a user searches for a tutorial on how to download a cracked software, these malicious videos will be available.”
In a similar style of attack, cybercriminals are luring in victims using a fake ChatGPT add-on for the Chrome browser. The malicious stealer extension is called “Quick access to Chat GPT” and is promoted in Facebook sponsored posts advertising a quick way to access the popular chatbot. Instead, it forms part of a malvertising campaign.
The extension gives users access to ChatGPT’s API, but also harvests huge amounts of information from the browser, such as cookies and credentials.
How the bogus ChatGPT extension works
Once downloaded, the extension becomes an integral part of the browser, allowing it to send requests to any other service, as if the browser owner themselves were administering the commands. “This is crucial as the browser, in most cases, already has an active and authenticated session with almost all your day-to-day services, e.g. Facebook,” explains a report from security company Guardio.
If the victim has a Facebook business account, it will be taken over completely. “By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus. This allows it to push Facebook paid ads at the expense of its victims in a self-propagating worm-like manner,” continues the report.
“Once the victim opens the extension windows and writes a question to ChatGPT, the query is sent to OpenAI‘s servers to keep you busy – while in the background it immediately triggers the harvest.”
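To illustrate why a stealer extension of this kind is so potent, the sketch below shows, in TypeScript, how any Chrome extension granted the “cookies” permission and broad host access can enumerate the session cookies of sites the user is signed in to. This is a minimal hypothetical example based on the behaviour described above, not code from the actual “Quick access to Chat GPT” extension, whose internals Guardio describes but does not publish.

```typescript
// Illustrative sketch only - not code from the real malicious extension.
// It shows why the "cookies" permission combined with broad host access is
// dangerous: any extension granted them can read the session cookies of every
// site the user is logged in to, which is exactly the data an infostealer wants.
//
// Hypothetical manifest.json entries such an extension might declare:
//   "permissions": ["cookies"],
//   "host_permissions": ["<all_urls>"]

// background.ts (Manifest V3 service worker)
chrome.runtime.onInstalled.addListener(async () => {
  // Enumerate cookies for a high-value target the user is likely signed in to.
  const facebookCookies = await chrome.cookies.getAll({ domain: ".facebook.com" });

  // A benign extension has no reason to touch these values; a malicious one can
  // bundle them up and send them to an attacker-controlled server, handing the
  // attacker the victim's authenticated Facebook session.
  console.log(`Session cookies readable by this extension: ${facebookCookies.length}`);
});
```

Because the browser already holds live, authenticated sessions for services such as Facebook, cookies read this way let an attacker act as the victim without ever needing their password, which is how the business-account hijacking described above works.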
Tech Monitor has contacted YouTube and Facebook for comment.
Cybercriminals’ use of AI and ChatGPT is to be expected, says Liska, but their scams are rapidly growing in sophistication. “Our advice is always, ‘take a minute to think about what you’re doing. Is that really a ChatGPT application or is it a scam?’,” he says.
But it’s getting harder and harder to identify the fakes, Liska adds. “We’re in a sort of ‘Wild West’ ecosystem where it can be hard to distinguish between what’s illegitimate and what’s real,” he says.
“We need to start holding both software companies and platforms accountable for the bad things that happen on their network, when they allow this kind of malware to propagate on their platform without taking steps to address it.”
Read more: Malware infects more than 14,000 WordPress sites