AI image generator’s data leak exposed thousands of prompts — and it’s a wake-up call for anyone using AI tools
Users may have a flawed perception of privacy when it comes to AI

When most people interact with AI, whether they’re typing a prompt or generating images, they assume a certain level of privacy. It feels like a conversation that stays between you and the AI. However, a recent report from Wired should make everyone think twice.
Security researcher Jeremiah Fowler discovered an unprotected database belonging to South Korean AI company GenNomis that contained more than 95,000 files, many of them explicit and some likely illegal.
The database revealed exactly what people had been generating with the company’s AI tool, and it was disturbing: non-consensual explicit imagery, deepfakes, and what appeared to be child sexual abuse material (CSAM).
GenNomis swiftly locked down the database once contacted, but the damage had already been done.
Assumptions about AI safety
This story is alarming for several reasons, but especially for what it reveals about AI safety and user assumptions. Many people use generative AI tools as if they’re personal assistants or private sketchbooks.
Some use them to brainstorm business ideas, write personal reflections, or even confess secrets, though that last one is among the top things you should never share with a chatbot.
But what if those prompts are stored? What if they’re accessible, not just to developers or internal teams, but potentially to hackers or researchers?
The GenNomis case isn’t an isolated incident. In fact, it highlights a much broader issue: our flawed perception of privacy regarding AI.
Many users still believe their conversations and creations with AI are private, when in reality, that data is often stored, reviewed, and in some cases, left vulnerable.
Major platforms like OpenAI’s ChatGPT and Google Gemini collect user inputs to train and improve their systems. In ChatGPT’s case, your conversations may be used for model training unless you opt out.
You can check or change this by going to Settings > Data Controls > Chat History & Training. With the setting turned off, new conversations aren’t saved to your chat history or used for training.
If you leave it on, your chats may be reviewed by OpenAI to improve model performance, though the company says this is done in a way that protects privacy, such as by removing personal identifiers.
In short, opting out is the only way to keep your ChatGPT conversations out of future training data.
Why opting out isn’t foolproof
As long as your input is being transmitted and stored through cloud infrastructure, there’s always a risk of exposure, whether through human error, system breach, or intentional misuse.
There are also real consequences to this. As seen in the GenNomis case, when AI data is not secured correctly, it doesn’t just represent a potential privacy violation — it can become a repository of harm.
From revenge porn and deepfakes to violent or illegal content, what users feed into these models can have ripple effects far beyond the screen.
Here are a few important things to keep in mind:
- AI prompts are not private by default. Unless you’re using a fully local or encrypted tool, assume what you write could be stored.
- Sensitive content should stay offline. Avoid sharing anything personal, confidential, or legally sensitive with AI tools.
- Your AI interactions can be part of future training data. Even if anonymized, your ideas or phrases might resurface in unexpected ways.
- Transparency varies. Not all AI companies disclose how long they keep your data or what they do with it.
- A breach doesn’t have to happen for harm to occur. Internal misuse or poor moderation standards can be just as dangerous.
Questions to ask yourself when using AI
This doesn’t mean you should stop using AI altogether. It means you should treat it like the powerful (and fallible) tool it is. Ask yourself:
- Would I want this information to become public?
- Is this something I’d be comfortable putting in an email or on social media?
- Could this data be misused if it fell into the wrong hands?
AI is incredibly helpful — until it’s horrifying. The GenNomis breach serves as a chilling reminder that behind every prompt is a record, and behind every AI engine is a company (or multiple companies) managing your data. Before you type, consider where your words might end up.
Ultimately, the safest approach is simple: if it’s something you’d never want exposed, don’t share it with AI.
More from Tom's Guide
- I tested ChatGPT and Perplexity with 7 prompts — one of them blew me away
- I told ChatGPT something that I still regret — here's 7 things you should never share
- Back to the Future was released 40 years ago — here's all the AI they predicted that we have (and what they missed)