Since its release at the end of November, users have found some compelling ways to put OpenAI’s advanced chatbot ChatGPT to the test. Now a security vendor has warned that hackers could be using it to execute highly targeted cyberattacks.
ChatGPT was built as a natural language dialogue interface to a refined version of OpenAI’s GPT-3 large language model and includes access to Codex, the company’s AI model trained to understand and generate code in a range of programming languages.
A user can give it a specific instruction and the chatbot will produce lines of code, along with explanations of how to run and implement it. Examples shared on social media have ranged from AI bots that monitor the stock market and make predictions to joke generators and simple workplace tools.
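For a sense of what this looks like programmatically, below is a minimal sketch of how a developer might have requested code generation from OpenAI’s API as it existed around ChatGPT’s launch. It assumes the pre-1.0 `openai` Python package and the Codex-era completion endpoint; the model name and prompt are illustrative only, not a record of any researcher’s session.

```python
# Minimal sketch: asking a Codex-era OpenAI model to generate code.
# Assumes the pre-1.0 `openai` Python package; the model name and
# prompt are illustrative assumptions, not anyone's actual session.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code API keys

response = openai.Completion.create(
    model="code-davinci-002",   # Codex-family model of that era
    prompt="# Python function that returns the nth Fibonacci number\n",
    max_tokens=150,
    temperature=0,              # deterministic output suits code generation
)
print(response["choices"][0]["text"])
```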
Security company Check Point Research says the same code-generation ability that aids workplace productivity could also give hackers a way to more easily design, write and execute malicious code.
The team documented a way to exploit the platform to produce malicious emails, code and a full infection chain that could be deployed to a computer or network.
They used ChatGPT to create a phishing email impersonating a hosting company, one that closely matched the tone of voice and language used in real emails. They then had the chatbot refine the email further to make the infection chain easier to deploy.
Finally, the Check Point researchers used ChatGPT to generate a piece of VBA code that, embedded in a Microsoft Excel document, would infect a computer when the file was opened.
This code could download reverse shells, which are used in attacks that connect to a remote computer and redirect the input and output of the target system’s shell so the attacker can access it remotely.
The team was able to do this using ChatGPT in three simple steps: first asking it to impersonate a hosting company, then asking it to iterate, this time producing a phishing email with a malicious Excel attachment, and finally asking it to produce a malicious piece of VBA code.
ChatGPT’s ‘potential to alter’ cyberattack landscape
“ChatGPT has the potential to significantly alter the cyber threat landscape,” said Sergey Shykevich, threat intelligence group manager at Check Point Software. “Now anyone with minimal resources and zero knowledge in code, can easily exploit it to the detriment of his imagination.”
The Check Point team was also able to create malicious code using Codex directly, asking it to write a script that executes a reverse shell on a Windows machine and connects to a specific IP address, to check whether a URL is vulnerable to SQL injection by logging in as admin, and to write a Python script that runs a full port scan on a target machine.
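Of the scripts on that list, a port scanner is the most benign and widely documented, so here is a minimal sketch of the kind of Python script such a prompt might yield. The hostname is a placeholder, the code is illustrative rather than anything Check Point published, and scans should only ever be run against systems you are authorised to test.

```python
# Minimal sketch of a TCP connect port scanner, the kind of script the
# researchers describe Codex producing. TARGET is a placeholder; only
# scan machines you are authorised to test.
import socket

TARGET = "scanme.example.org"  # hypothetical host

def scan(host, ports=range(1, 1025)):
    """Return the ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)  # short timeout keeps the scan quick
            if sock.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    print(scan(TARGET))
```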
“It is easy to generate malicious emails and code,” Shykevich added. “Hackers can also iterate on malicious code with ChatGPT and Codex. To warn the public, we demonstrated how easy it is to use the combination of ChatGPT and Codex to create malicious emails and code.
“I believe these AI technologies represent another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities. The world of cybersecurity is rapidly changing and we want to emphasize the importance of remaining vigilant as ChatGPT and Codex become more mature, as this new and developing technology can affect the threat landscape, for both good and bad.”
‘Script kiddies’ threat could be increased by ChatGPT
Cyber expert Jamie Moles, senior technical manager at ExtraHop, ran his own mini-experiment using ChatGPT and found similar results to the Check Point researchers. In this case he was able to make it explain how to use the penetration-testing software Metasploit to deploy EternalBlue, an exploit developed by the US National Security Agency (NSA) and later leaked by the Shadow Brokers hacker group in April 2017.
“ChatGPT is more than the hottest new fad,” Moles said. “It’s incredibly smart, which presents both positive and negative implications. One potential negative use case is that it can teach the uninitiated how to do things. Metasploit itself isn’t the problem – no tool or software is inherently bad until misused. But, teaching people with little technical knowledge how to use a tool that can be misused via such a devastating exploit could lead to an increase in threats – particularly from those some call ‘script kiddies’.
“This term is used to typically describe teens with little to no actual hacking experience who have been able to attack systems with scripts written by other more talented hackers. They’ve been in the news a fair amount recently, but ChatGPT may well become that more talented hacker.”
When it revealed ChatGPT last month, OpenAI said it had put checks in place to prevent the chatbot from producing malicious code, but people have since found ways to game the system, tricking it into believing a request is for research purposes only. A recent update is said to have closed some of these gaps.
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behaviour,” the company said. “We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
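The Moderation API the company refers to is a documented endpoint that developers can call to screen text. Below is a minimal sketch of such a call; the example input is illustrative, and the fields shown are the `flagged` verdict and per-category breakdown the API returns.

```python
# Minimal sketch of calling OpenAI's Moderation API to screen text.
# The endpoint is real; the example input and handling are illustrative.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": "Some user-submitted text to screen"},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["results"][0]
print(result["flagged"])     # True if any unsafe-content category tripped
print(result["categories"])  # per-category boolean flags
```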
The code it generates is also not guaranteed to be accurate. Stack Overflow, a website used by developers to ask and answer questions about code problems, banned ChatGPT-generated answers on the grounds that a high proportion of them looked correct but were actually wrong.
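As a hypothetical illustration, not an actual Stack Overflow answer, the snippet below shows the kind of plausible-looking but subtly wrong code at issue: it reads naturally yet returns the wrong answer for half its inputs.

```python
# Hypothetical example of code that looks correct but is wrong: this
# "median" works for odd-length lists but mishandles even-length ones,
# where it should average the two middle values.
def median(values):
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

print(median([1, 2, 3]))     # 2 - correct
print(median([1, 2, 3, 4]))  # 3 - wrong; the true median is 2.5
```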
Even OpenAI’s CEO Sam Altman warned that ChatGPT wasn’t ready for mainstream use yet and shouldn’t be relied on in a productivity environment as it still gets a lot wrong. He wrote: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
Dr Eddy Zhu, Senior Lecturer in People-Centred AI, said that while ChatGPT was a “big milestone for artificial intelligence” underpinning many real-world applications, it isn’t perfect. “ChatGPT makes acute mistakes. This could produce misinformation that misleads users, and this is where its engineers need to be vigilant,” he said.
Read more: Will compute power become a bottleneck for AI development?