Watch out, students! OpenAI is about to make it impossible for you to cheat using ChatGPT — here’s how
It’s almost foolproof, but OpenAI has concerns
Coming from a family of school teachers, I hear one concern again and again: students using ChatGPT to cheat on their homework. There are tools that supposedly detect AI-generated text, but their reliability is ropey.
And that’s why I’m sure they’ll welcome OpenAI’s quiet update to a blog post from back in May (spotted by TechCrunch), which reveals the company has developed a “text watermarking method” that is ready to go. However, concerns have stopped the team from releasing it.
Watermarking and metadata
So what are the methods OpenAI has been working on? There are several, but the company has detailed two:
- Watermarking: Adding a hidden text watermark has been “highly effective” at identifying AI-generated work, and has even proven robust against “localized tampering, such as paraphrasing.”
- Metadata: Rather than adding a watermark that people can try to work around, and to eliminate any chance of a false positive (more on that later), OpenAI is also looking into attaching metadata that is cryptographically signed.
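OpenAI hasn’t published the details of its metadata scheme, so purely as a rough illustration, here’s a minimal sketch in Python of what cryptographically signed provenance metadata could look like, using an HMAC over a text digest. The key, field names, and model label here are all hypothetical — the point is that a valid signature can’t produce a false positive, because only the signer’s key can create one.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held privately by the AI provider.
SECRET_KEY = b"provider-secret-key"

def sign_metadata(text: str, model: str) -> dict:
    """Attach signed provenance metadata to a piece of generated text."""
    payload = {"model": model, "sha256": hashlib.sha256(text.encode()).hexdigest()}
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return payload

def verify_metadata(text: str, meta: dict) -> bool:
    """True only if the signature is genuine AND the text is unchanged."""
    claimed = dict(meta)
    sig = claimed.pop("signature")
    serialized = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, serialized, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["sha256"] == hashlib.sha256(text.encode()).hexdigest())

essay = "An essay generated by a model."
meta = sign_metadata(essay, "gpt-4o")
print(verify_metadata(essay, meta))            # prints True: untouched text
print(verify_metadata(essay + " edit", meta))  # prints False: any edit breaks the check
```

The trade-off is visible in that last line: the check never falsely accuses anyone, but any edit at all (or simply stripping the metadata) defeats it.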
The other method OpenAI has explored is classifiers. You’ll see them regularly in machine learning: email apps use them to route messages into the spam folder or to surface important mail in the main inbox. A similar classifier could quietly flag essays as AI-generated.
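To make the classifier idea concrete, here’s a toy Naive Bayes text classifier in Python, using the spam-filtering scenario above. The training sentences and labels are made up, and real AI-text detectors are far more sophisticated, but the mechanism is the same: learn word statistics from labeled examples, then score new text against each label.

```python
import math
from collections import Counter

def train(docs_by_label):
    """Fit per-label word counts (bag-of-words Naive Bayes, toy data)."""
    counts = {label: Counter(w for doc in docs for w in doc.lower().split())
              for label, docs in docs_by_label.items()}
    totals = {label: sum(c.values()) for label, c in counts.items()}
    vocab = set().union(*counts.values())
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Return the label with the highest Laplace-smoothed log-likelihood."""
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train({
    "spam": ["win a free prize now", "free money claim your prize"],
    "ham": ["meeting agenda attached", "see you at the meeting tomorrow"],
})
print(classify("claim your free prize", *model))  # prints spam
```

An AI-text detector works on the same principle, just with far richer features than word counts — which is also why it can misfire on text that merely resembles its training examples.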
Can be problematic
These tools are basically ready to go, and OpenAI is sitting on them, according to a report from The Wall Street Journal. So what’s the holdup? Put simply, they’re not completely foolproof, and they could cause more harm than good.
I mentioned that watermarking holds up against localized tampering, but it doesn’t fare so well against “globalized tampering.” Cheeky methods like “using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character” will defeat the watermark.
Meanwhile, the other problem is a disproportionate impact on some groups. AI can be a “useful writing tool for non-native English speakers,” and in those situations you don’t want to stigmatize its use, which could end up restricting global access to these tools for education.

Jason brings a decade of tech and gaming journalism experience to his role as a Managing Editor of Computing at Tom's Guide. He has previously written for Laptop Mag, Tom's Hardware, Kotaku, Stuff and BBC Science Focus. In his spare time, you'll find Jason looking for good dogs to pet or thinking about eating pizza if he isn't already.