ChatGPT goes bad: WormGPT is an AI tool ‘with no ethical boundaries’

WormGPT is a new AI tool with ‘no ethical boundaries’
(Image credit: Pexels)

As ChatGPT continues to rise in popularity, a new and much darker alternative AI tool has been designed specifically for criminal purposes. 

WormGPT is a malicious AI tool capable of generating highly realistic and convincing text used to create phishing emails, fake social media posts, and other forms of nefarious content.

It’s based on GPT-J, an open-source language model developed in 2021. As well as generating text, WormGPT can also format code, making it far easier for cybercriminals to use the tech to create homebrew malicious software. It puts things like viruses, trojans and large-scale phishing attacks within easy reach. 

But the scariest aspect of this technology is that WormGPT can retain chat memory, which means it can keep track of conversations and use that information to personalize its attacks. 

Access to the tool is currently being sold on the dark web to scammers and malware creators for just $67 a month, or $617 for a year. 

Cybersecurity firm SlashNext gained access to the tool through an online forum associated with cybercrime. In its testing, the firm described WormGPT as a “sophisticated AI model” but warned it has “no ethical boundaries or limitations.”

In a blog post, the researchers explained: “This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data.”

Why is WormGPT so scary?

A hacker typing on a computer

(Image credit: Shutterstock)

As more people become savvy to phishing emails, WormGPT can be used to mount more sophisticated attacks. In particular, it can facilitate Business Email Compromise (BEC), a form of phishing attack in which criminals attempt to trick senior account holders into either transferring funds or revealing sensitive information, which can then be used for further attacks. 

Other AI tools, including ChatGPT and Bard, have security measures in place to protect user data and stop criminals from misusing the technology.

In their research, SlashNext used WormGPT to generate an email intended to pressure an unsuspecting account manager into paying a fraudulent invoice. 

The results showed WormGPT produced emails that were not only cunning but also remarkably persuasive, showcasing the potential for large-scale attacks.

WormGPT can also be used to create phishing attacks by producing convincing text that encourages users to divulge sensitive information such as login details and financial data. This could lead to more cases of identity theft, greater financial losses, and threats to personal security. 

As technology, particularly AI, evolves, so does the potential for cybercriminals to use it for their own gain and cause significant harm.

Posting screenshots to the hacking forum that SlashNext used to obtain the tool, the developer of WormGPT described it as the "biggest enemy of the well-known ChatGPT,” claiming that it "lets you do all sorts of illegal stuff." 

A challenge for law enforcement

Malware

(Image credit: Shutterstock)

A recent report from Europol warned that large language models (LLMs) such as ChatGPT that can process, manipulate, and generate text could be used by criminals to commit fraud and spread disinformation. 

They wrote: “As technology progresses, and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse. 

“ChatGPT’s ability to draft highly authentic texts based on a user prompt makes it a handy tool for phishing purposes,

“Where many basic phishing scams were previously more easily detectable due to obvious grammatical and spelling mistakes, it is now possible to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”

They warned that LLMs mean that hackers can carry out attacks faster, more authentically, and on an increased scale. 

Rachael Penn
Contributor

Rachael is a freelance journalist based in South Wales who writes about lifestyle, travel, home and technology. She also reviews a variety of products for various publications including Tom’s Guide, CreativeBloq, IdealHome and Woman&Home. When she’s not writing and reviewing products she can be found walking her Sealyham and West Highland terrier dogs or catching up on some cringe-worthy reality tv. 
