5 tips for spotting AI-generated deep fakes — don't get fooled
Don't believe everything you see
Using a famous person's likeness or the voice of a well-recognized and respected authority on a topic to add legitimacy to a scam is nothing new, but AI makes this much harder to spot.
AI voice cloning tools have become so good that a few minutes of someone's voice is enough to make them appear to say whatever you like, simply by typing out the words. The most advanced models, which are not available to the public, can do this from just seconds of speech.
Scammers use these techniques to create commercials on Facebook and other platforms that make it look like a well-known expert is endorsing a product. While some are very obviously faked, others have led to people buying items that could prove harmful.
The best way to avoid falling victim to a deep fake scam is to consider the source and ask yourself whether that person is likely to be talking about the product at all. However, there are also a few tell-tale signs of AI to watch for.
British doctors the target of deep fakes
The British Medical Journal (BMJ) investigated the issue of deep fakes and found that a number of high-profile doctors were having their identities used in ways they would never approve of, and that getting rid of the fake videos is not an easy process.
The journal found that British TV doctors such as Hilary Jones and Rangan Chatterjee had their likenesses stolen to sell everything from hemp gummies to bouncy nutrition and eco juice, none of which they endorse.
“It’s down to the likes of Meta, the company that owns Facebook and Instagram, to stop this happening,” Dr Jones told the BMJ. “But they’ve got no interest in doing so while they’re making money.”
Studies have shown that up to half of people questioned couldn't tell a deep fake from a real video. With most people encountering them while casually scrolling Facebook, it's easy to get caught out by a clip that mirrors something you want to hear.
Tackling the deep fake problem
'If you look closely you can see it, but you really have to look.' AI expert Ryan Morrison explains what to look for to spot a deepfake video. pic.twitter.com/6TnRCP2uhb (July 18, 2024)
Social media companies say they remove the videos when they are reported, but on a Good Morning Britain segment I also appeared in, Dr Jones said they reappear soon after being taken down, if they're removed at all.
Meta says it is investigating the specific examples given by the BMJ, adding "We don’t permit content that intentionally deceives or seeks to defraud others, and we’re constantly working to improve detection and enforcement."
The company says anyone who sees deep fake content, or any content that might violate platform policies, should report it so it can be investigated. Dr Jones and others say that doesn't go far enough and videos often just reappear.
There are also efforts from companies like Adobe and Google to properly label AI-generated content, but this won't work for models running offline on local hardware, and it relies on industry-wide buy-in, even at the commercial model level. In many cases, labeling AI content is still down to the individual platform and entirely voluntary.
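One strand of that labeling effort is the C2PA Content Credentials standard backed by Adobe and Google. If you're curious whether an image you've downloaded carries such a label, here is a minimal sketch in Python. It is only a rough heuristic: it scans the file's raw bytes for the C2PA marker rather than properly parsing or verifying a manifest, the helper name has_c2pa_marker is my own, and the file name is a placeholder.

```python
# Rough heuristic: check whether a saved image appears to embed a
# C2PA (Content Credentials) manifest. This does NOT verify the manifest,
# and the absence of a label proves nothing about whether AI was used.
from pathlib import Path

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file's raw bytes contain the C2PA manifest label."""
    data = Path(path).read_bytes()
    return b"c2pa" in data

if __name__ == "__main__":
    # "suspect.jpg" is a placeholder for a file you've saved locally
    print(has_c2pa_marker("suspect.jpg"))
```

A proper check would use a dedicated verifier to validate the manifest's signature; this is just a quick way to see whether a label seems to be present at all.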
OpenAI has also developed a deep fake detector of its own that it plans to launch. Amusingly, it will use AI to detect evidence that AI has been used.
How to spot a deep fake
Spotting a deep fake video, or even an image, is relatively straightforward right now, but it's a growing problem and one that is going to get worse as the technology improves and access to it becomes cheaper.
For now, it's a case of being vigilant and watching for the signs.
1. The sound quality
Often with a deep fake, the sound quality will go one of three ways: it will be slightly jumpy because spliced-together clips have been used, it will be poor quality because the scammers are trying to disguise a low-grade AI clone, or it will be 'too good'.
The last of these is the most common. The voice has no reverb, sounding like it was recorded in a small studio rather than a room. For example, in one clip I saw, Dr Hilary Jones was sitting in a TV studio but the sound was more like that of a radio recording, with no sense of space in the vocal track.
2. Stilted vocals
Even the best currently available AI voice models, from companies like ElevenLabs, struggle slightly with emotion and inflection. A straight text-to-speech conversion, even with a very good cloned voice, will sound somewhat like someone reading a script rather than speaking naturally.
Professionals who regularly work in the public eye rarely sound like they're reading a script when speaking directly to the camera, so this is a red flag and a cue to look more closely.
It should be stated that ElevenLabs bans the cloning of voices without the express permission of the owner of that voice, and even then imposes restrictions on how voices can be used. Other platforms, including those available to run offline and on-device, don’t have those qualms.
3. How the lips move
Lip-syncing has improved significantly over the past few years, and some AI lip-sync tools are now nearly indistinguishable from the real thing. But they're not perfect: the movement is sometimes exaggerated, making the person look wildly enthusiastic about the topic, often with over-the-top head movements thrown in.
The other side of this, and the more common approach, is lip movement that doesn't sync at all. The scammer might use awkward microphone placement or rapid cuts to disguise it, but it's fairly easy to spot if you look closely at the mouth.
Also watch the mouth itself: if the scammers have used AI lip-syncing, there will often be a slight blur around the lips on certain words where the model failed to render them properly.
4. Low-quality or odd formatting
Often, if a celebrity or professional is genuinely promoting a product, it will be in a high-resolution video, linked to a recognizable account and in a video layout that matches the platform.
For example, if the video is on YouTube it is likely to be 16:9, the standard widescreen format, but if it is on TikTok or Instagram Reels it will be 9:16 (or a 16:9 clip boxed in the middle of a vertical frame).
Scam and spam-type videos are often square, crowded with imagery or text overlays, and relatively low quality, either because of the AI itself or because of how the clip has been edited.
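If you've saved a suspicious clip and want a quick, objective read on its format, here is a minimal sketch that prints the resolution and orientation using ffprobe, which ships with the FFmpeg toolkit. It assumes ffprobe is installed and on your PATH; the helper name clip_dimensions and the file name suspect_ad.mp4 are placeholders of my own.

```python
# Minimal sketch: print a saved clip's resolution and rough orientation
# using ffprobe (part of FFmpeg). Assumes ffprobe is on your PATH.
import subprocess

def clip_dimensions(path: str) -> tuple[int, int]:
    """Return (width, height) of the first video stream in the file."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=width,height", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    parts = out.split(",")
    return int(parts[0]), int(parts[1])

if __name__ == "__main__":
    w, h = clip_dimensions("suspect_ad.mp4")  # placeholder file name
    label = ("landscape (e.g. 16:9)" if w > h
             else "vertical (e.g. 9:16)" if h > w else "square")
    print(f"{w}x{h} -> {label}")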
5. Too good to be true
This is the most important tip of all, and it will remain useful even after AI catches up and makes the previous four signs harder to identify: if it's too good to be true, it probably is.
The other question to ask is whether it's something you'd expect from the person in the video. Has this person previously spoken in favor of what they're supposedly promoting? No? It's probably a scam. Has the person openly criticized that type of product? Yes? It's probably a scam.
Final thoughts
Treat every video with caution, especially one not shared by a known, reputable source, and don't trust it until you've run through the checks above. That includes searching Google for the celebrity's name alongside the product's name to see whether reputable sites carry the same quotes.
The same applies to any type of content you find online: if the source is questionable, double-check the content.
More from Tom's Guide
- OpenAI’s 'superintelligent' AI leap nearly caused the company to collapse — here’s why
- OpenAI is paying researchers to stop superintelligent AI from going rogue
- OpenAI is building next-generation AI GPT-5 — and CEO claims it could be superintelligent