OpenAI to launch deepfake detector as realism of AI-generated content grows
And the need to detect fake content is growing with it
It's no secret that AI-generated content is changing how people interact with the digital world. However, it also poses its fair share of risks as people create images and videos for potentially malicious reasons. And now, OpenAI says it's planning to address that.
OpenAI on Tuesday (May 7) launched a new image-detection tool to help people identify whether an image was generated by its DALL-E 3 image-generation tool or created without AI's assistance. In addition, OpenAI will make the tool available to a limited number of testers on its platform so they can integrate the image-detection feature into their apps.
In a blog post, OpenAI said that its tool could identify images generated by DALL-E 3 in approximately 98 percent of cases, and returned false positives (flagging a real image as one created by AI) in only about 0.5 percent of cases.
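To put those rates in context, here's a quick back-of-the-envelope sketch in Python. The base rates below are assumptions for illustration only; OpenAI reported just the two percentages above. The takeaway: the rarer AI-generated images are in the pool being scanned, the more of the tool's flags will be false alarms.

```python
# Back-of-the-envelope math on OpenAI's reported figures.
# The base rates tested below are illustrative assumptions, not OpenAI numbers.
TRUE_POSITIVE_RATE = 0.98    # ~98% of DALL-E 3 images correctly flagged
FALSE_POSITIVE_RATE = 0.005  # ~0.5% of real images incorrectly flagged

def precision(base_rate: float) -> float:
    """Of all images the tool flags, what share are genuinely AI-made?"""
    flagged_ai = base_rate * TRUE_POSITIVE_RATE
    flagged_real = (1 - base_rate) * FALSE_POSITIVE_RATE
    return flagged_ai / (flagged_ai + flagged_real)

for base_rate in (0.10, 0.01):
    print(f"If {base_rate:.0%} of images are AI-made, "
          f"{precision(base_rate):.1%} of flags are correct")
# If 10% of images are AI-made, 95.6% of flags are correct
# If 1% of images are AI-made, 66.4% of flags are correct
```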
The new tool's announcement came alongside OpenAI's acknowledgment that it has been embedding metadata in the images and videos users create with its DALL-E 3 and Sora image- and video-generation tools, respectively. But as OpenAI acknowledged, with a little know-how, malicious actors can strip that metadata out, making its detection tool all the more necessary.
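To see why stripped metadata is such a weak link, consider this minimal sketch using the Pillow imaging library (the filename is hypothetical, and this is a generic illustration, not OpenAI's tooling). Copying just the pixels into a fresh image discards everything embedded in the original file:

```python
# Illustration: provenance metadata lives in the file container, not the pixels.
# Requires Pillow (pip install Pillow).
from PIL import Image

# "dalle_image.png" is a hypothetical filename used for illustration.
original = Image.open("dalle_image.png").convert("RGB")

# Build a brand-new image and copy over only the pixel values. EXIF, XMP,
# and C2PA-style manifests live in the original file's container, so none
# of that metadata survives the copy.
clean = Image.new("RGB", original.size)
clean.putdata(list(original.getdata()))
clean.save("no_metadata.png")  # written with no provenance information
```

A screenshot or a simple crop-and-reupload achieves much the same thing, which is why a classifier that looks at the pixels themselves is a useful backstop to the metadata.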
Deepfakes, or synthetically generated content designed to dupe viewers into believing it's real, have become an increasingly concerning problem on the Internet. With an ever-growing number of people turning to AI to create fake videos, images, and audio recordings, verifying the authenticity of online content has never been more important.
While OpenAI's new tool is a step in the right direction, it's by no means a panacea. It apparently works well, but it has only been trained on DALL-E 3-generated images. In other words, if bad actors create images with other AI image generators, there's no guarantee the OpenAI tool will work as well, if it works at all. It's also worth noting that while OpenAI touted its image-detection performance, deepfake videos designed to dupe viewers can be far more difficult to identify.
Still, at least OpenAI is doing something. In a world where bad actors are looking to fool users, companies like OpenAI need to find ways to protect users, or reality itself could fall prey to AI.
And in keeping with that, OpenAI also said on Tuesday that it has joined the steering committee of the Coalition for Content Provenance and Authenticity (C2PA), the group behind a widely used standard for certifying digital content.
OpenAI stopped short of saying exactly how it'll impact C2PA, but did say that it "looks forward to contributing to the development of the standard, and we regard it as an important aspect of our approach."
More from Tom's Guide
- OpenAI’s 'superintelligent' AI leap nearly caused the company to collapse — here’s why
- OpenAI is paying researchers to stop superintelligent AI from going rogue
- OpenAI is building next-generation AI GPT-5 — and CEO claims it could be superintelligent
Don Reisinger is CEO and founder of D2 Tech Agency. A communications strategist, consultant, and copywriter, Don has also written for many leading technology and business publications including CNET, Fortune Magazine, The New York Times, Forbes, Computerworld, Digital Trends, TechCrunch and Slashgear. He has also written for Tom's Guide for many years, contributing hundreds of articles on everything from phones to games to streaming and smart home.