Slopsquatting: The worrying AI hallucination bug that could be spreading malware
A new attack that piggybacks on AI coding mistakes could be putting malware on your computer

Software sabotage is rapidly becoming a potent new weapon in the cybercriminal arsenal, augmented by the rising popularity of AI coding.
Instead of inserting malware into conventional code, criminals are now using AI-hallucinated software packages and library names to fool unwary programmers.
It works like this: AI models, especially the smaller ones, regularly hallucinate (make up) the names of non-existent software packages and libraries while they’re being used for coding.
Malicious actors with coding skills study the hallucinated output from these AI models and then publish malware to public package registries under those exact names.
The next time an AI (or a developer following its suggestion) requests the fake package, malware is served instead of an error message. At this point, the damage is done, as the malware becomes an integrated part of the final code.
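To make that timeline concrete, here is a minimal, self-contained Python simulation. The dictionary stands in for a real registry such as PyPI, and the hallucinated package name "fastjson-helper" is invented purely for this example:

```python
# A minimal simulation of the slopsquatting timeline.
# The dictionary stands in for a real registry such as PyPI, and the
# hallucinated package name "fastjson-helper" is invented for this example.

package_index = {"requests": "legitimate HTTP library"}

def install(name: str) -> str:
    """Mimic a package manager fetching a named dependency."""
    if name not in package_index:
        raise LookupError(f"No matching distribution found for {name}")
    return package_index[name]

hallucinated = "fastjson-helper"

# Day 1: the AI hallucinates the name, and the install harmlessly fails.
try:
    install(hallucinated)
except LookupError as err:
    print("Before the attack:", err)

# Day 2: an attacker notices the recurring hallucination and registers
# the name with a malicious payload.
package_index[hallucinated] = "malicious payload"

# Day 3: the very same AI-generated install command now succeeds silently.
print("After the attack:", install(hallucinated))
```

The sting is in the final step: a command that failed harmlessly yesterday succeeds today, and nothing about it looks unusual to the developer.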
Why is slopsquatting so concerning?
A recent research report, which evaluated 16 popular large language models used for code generation, uncovered a staggering 205,474 unique examples of hallucinated package names.
These names are completely fictional, yet cybercriminals can register them as a way of inserting malware into Python and JavaScript software projects.
Perhaps unsurprisingly, the most common culprits for these package hallucinations are the smaller open-source models, which professionals and homebrew vibe-coders (those who code via AI prompts) run on their local computers rather than in the cloud.
CodeLlama, Mistral 7B, and OpenChat 7B were among the models that generated the most hallucinations. The worst offender, CodeLlama 7B, delivered a whopping 25% hallucination rate when generating code.
There is, of course, a long and storied history of inserting malware into everyday software products via what are known as supply chain attacks.
This latest iteration follows on from typosquatting, where misspellings of popular package names are used to fool coders into pulling in bad code.
Programmers working to a deadline can easily grab libraries, packages, and tools that have been deliberately misspelled and contain a malicious payload.
An evolving problem
One of the early examples was a misspelled package called ‘electorn’, a twist on Electron, the popular application framework.
These attacks work because a large percentage of modern application programming involves downloading ready-made components to use in the project.
These components, often known as dependencies, can be downloaded and installed with a single command, which makes it trivially easy for a cybercriminal to take advantage of a keyboard slip that requests the wrong name by mistake.
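As a rough sketch of how a checker can catch that kind of slip, the example below uses Python's standard-library difflib to flag names that sit suspiciously close to popular packages. The popularity list here is a tiny invented sample; a real tool would draw on the registry's actual download rankings:

```python
import difflib

# A tiny invented sample of popular package names; a real checker would
# use the registry's actual download rankings (thousands of entries).
POPULAR_PACKAGES = ["electron", "requests", "numpy", "pandas", "express"]

def possible_typosquats(name: str) -> list[str]:
    """Return popular packages that `name` closely resembles but does not match."""
    matches = difflib.get_close_matches(name, POPULAR_PACKAGES, n=3, cutoff=0.85)
    return [m for m in matches if m != name]

print(possible_typosquats("electorn"))   # ['electron'] -- suspiciously close
print(possible_typosquats("requests"))   # [] -- an exact, legitimate match
```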
Because the integrated malware is extremely subtle, it can go unnoticed in the final product or application.
The end result, however, is the same: unwary users triggering malware without knowing what’s under the hood of their application.
What has made the arrival of AI more problematic in this regard is that AI coding tools can and will automatically request dependencies as part of their coding process.
It may all sound a little random, because it is, but with the volume of coding now transitioning to AI, this type of opportunistic attack is likely to rise significantly.
Security researchers are now focusing their attention on trying to mitigate this kind of attack by improving the fine-tuning of models.
New package verification tools are also coming onto the market, which can catch this type of hallucination before it enters the public arena. In the meantime, the message is clear: coders beware.
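As a rough illustration of the kind of check such tools can perform, the sketch below queries PyPI's public JSON API to see whether a proposed package exists at all and, if it does, how long it has been registered. The 90-day threshold and the name "fastjson-helper" are assumptions invented for this example, not part of any shipping tool:

```python
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone

def vet_package(name: str, min_age_days: int = 90) -> str:
    """Classify a PyPI package name as missing, suspiciously new, or established."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return "not found -- possibly a hallucinated name"
        raise

    # Find the earliest upload time across every published release file.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        return "no releases -- treat with caution"

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        return f"only {age_days} days old -- could be a freshly squatted name"
    return f"established ({age_days} days on PyPI)"

print(vet_package("requests"))          # long-established, fine
print(vet_package("fastjson-helper"))   # invented name; should report not found
```

An existence check alone is not enough, since a squatted name resolves perfectly well; the registration age is what tends to give the game away.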
Nigel Powell is an author, columnist, and consultant with over 30 years of experience in the technology industry. He produced the weekly Don't Panic technology column in the Sunday Times newspaper for 16 years and is the author of the Sunday Times book of Computer Answers, published by Harper Collins. He has been a technology pundit on Sky Television's Global Village program and a regular contributor to BBC Radio Five's Men's Hour.
He has an Honours degree in law (LLB) and a Master's Degree in Business Administration (MBA), and his work has made him an expert in all things software, AI, security, privacy, mobile, and other tech innovations. Nigel currently lives in West London and enjoys spending time meditating and listening to music.