Will digital watermarking save the world from fake news?
The race is on to end the abuse of AI-generated content
What do you do when AI fakery becomes more realistic than real life? Well, according to one recent MIT report, you label it with a watermark. But is that enough?
A growing number of governments and concerned agencies are driving a new push to identify fake media at the source. The potential for abuse is obvious: how can people tell whether something is real or fake in today’s media-obsessed world?
The problem is not a new one. Those old enough to remember Photoshop entering the market in 1990 will recall the shock and horror that greeted the first photographers who altered their work with the tool.
Of course, any product that can so easily remove wrinkles was never going to disappear, but for a time the retouching issue reached fever pitch across the world. And to a certain extent, it still exists. ‘Photoshopped’ is now an accepted verb everywhere.
AI elevates the problem to a new level of potential pain. Not just images, but audio, and soon video. And for the first time, it’s not just retouching here and there, but the creation of entirely non-existent people, places and events. Where does it all stop?
How do you solve a problem like AI?
Of course, the answer is that it doesn't stop, so we just have to deal with it. The good news is that the effort has already started. YouTube recently mandated that all AI-created videos be labeled as such: creators who upload a video must ‘disclose content that is meaningfully altered or synthetically generated when it seems realistic.’
TikTok has taken it one step further by implementing technology that will automatically label all AI content uploaded to the service, even when the creator has not identified it as such.
We all know the driving force behind these moves, especially in an election year. ‘Fake news’ has become a popular rallying cry, and the whole situation threatens to careen out of control without some sort of industry or legislative policing.
These early moves by the content platforms are obviously an attempt to deflect calls for legislation, but they may be too little, too late.
The problem will get worse before it is solved
“SynthID can now watermark AI-created audio. Its first deployment will be through Lyria, our most advanced AI music generation model to date.” (post on X, November 22, 2023)
The problem is that the new AI tools are becoming too popular, and the majority of the material is not created for sinister purposes. Advertising, fashion, product marketing, even news services are using AI to enhance media content, often in innovative and valuable ways. And each channel comes with its own potential for abuse.
The most cohesive move against fake content has come from the Coalition for Content Provenance and Authenticity (C2PA). This is a project launched by the Joint Development Foundation, a Washington-based non-profit that aims to tackle AI-based misinformation and manipulation. Its Content Credentials initiative includes major players like Adobe, X, OpenAI, Microsoft and the New York Times.
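The core idea behind Content Credentials is attaching cryptographically signed provenance metadata to a file, so any later tampering with either the file or its claims is detectable. Here is a deliberately simplified Python sketch of that idea; note that the key, function names and HMAC shared-secret scheme are illustrative stand-ins, since the real C2PA specification uses X.509 certificate-based signatures and a structured binary manifest, not this toy format.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in; C2PA uses certificate-based signing

def make_credential(media_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind provenance claims to the exact bytes of a media file."""
    manifest = {
        "creator": creator,
        "tool": tool,  # e.g. the AI model that generated the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_credential(media_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any edit breaks verification."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claims["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

image = b"...raw image bytes..."
cred = make_credential(image, creator="Newsroom", tool="GenAI model v1")
assert verify_credential(image, cred)             # untouched file checks out
assert not verify_credential(image + b"x", cred)  # any edit is detected
```

The design point this illustrates: provenance travels with the file as verifiable metadata, rather than being buried invisibly inside the pixels, which is what distinguishes the C2PA approach from watermarking proper.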
The move follows an Executive Order issued by President Biden late last year, which aimed to “protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.”
Google and Meta also have their own initiatives in place or arriving shortly, called SynthID and Stable Signature respectively. Whether these attempts will be enough remains to be seen.
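To give a flavor of what invisible watermarking means in practice, here is a toy Python sketch that hides a short tag in the least significant bits of raw pixel bytes. To be clear, this is not how SynthID or Stable Signature work; production watermarks embed statistical patterns designed to survive cropping, resizing and compression, which this naive scheme would not, and the function names below are invented for illustration.

```python
def embed_tag(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` into the lowest bit of successive pixel bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear the low bit, then set it
    return out

def extract_tag(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the low bits, in the same order."""
    bits = [b & 1 for b in pixels[: length * 8]]
    return bytes(
        sum(bit << i for i, bit in enumerate(bits[n : n + 8]))
        for n in range(0, len(bits), 8)
    )

# Usage: tag some stand-in 'pixel' data as AI-generated, then read it back.
image = bytearray(range(256)) * 4
marked = embed_tag(image, b"AI-GENERATED")
assert extract_tag(marked, 12) == b"AI-GENERATED"
```

The low-bit changes are visually imperceptible, which is the whole point; the trade-off is fragility, since even re-saving the image as a JPEG would destroy this naive mark, which is exactly the robustness problem the real systems are built to solve.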
A problem that will sort itself out
The fact is the best watermarking or legislative regulation in the world will not stop something from going viral if it’s addictive enough. And by that time, the provenance of the media is a minor part of the equation.
Who reads the retractions newspapers print when they admit they got something wrong in an earlier report? In the end, the one thing that might actually solve the issue is the public’s growing ‘spidey-sense’ for what is true and what is false.
In the same way that many people can spot an obviously Photoshopped image by its outrageous composition or improbable subject matter, it may eventually become possible to sense that something is off about a piece of AI-made media, no matter how well it’s done.
Or as someone clever once said, perhaps we should assume that all media content is AI faked, unless it’s been incontrovertibly identified as human-made.
Nigel Powell is an author, columnist, and consultant with over 30 years of experience in the technology industry. He produced the weekly Don't Panic technology column in the Sunday Times newspaper for 16 years and is the author of the Sunday Times book of Computer Answers, published by Harper Collins. He has been a technology pundit on Sky Television's Global Village program and a regular contributor to BBC Radio Five's Men's Hour.
He has an Honours degree in law (LLB) and a Master's Degree in Business Administration (MBA), and his work has made him an expert in all things software, AI, security, privacy, mobile, and other tech innovations. Nigel currently lives in West London and enjoys spending time meditating and listening to music.