AI chatbots aren't the problem — we are


When ChatGPT really started to take off, there were multiple reports about "hallucinations" and about Bing Chat's alter ego Sydney. And there was — and still is — real reason to be concerned, with truly disturbing behavior from the chatbot that included trying to convince a New York Times reporter to leave his wife and threatening a philosophy professor with the words, “I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you.”

Then there's the fact that the ChatGPT, Bing and Bard chatbots are often just plain wrong about very important topics like health, personal finance and history. There's a real danger of people getting hurt when AI gives inaccurate or bad advice. That's why there have been multiple calls for OpenAI to pause development in order to prevent further harm and keep the public safe.

But there's another sinister AI threat that has emerged: us. 

Fake AI interview with legend recovering from brain injury 


Last week we learned that the family of Formula One legend Michael Schumacher plans to take legal action against the German magazine Die Aktuelle, which ran a front-cover story promising an exclusive interview with the seven-time champion. 

As reported by ESPN, the magazine claimed, "No meagre, nebulous half-sentences from friends. But answers from him! By Michael Schumacher, 54!" Only the strapline teases that the interview could be AI-generated, calling it "deceptively real."

The article describes Schumacher's recovery from the devastating 2013 skiing accident that left him with a serious brain injury. Only at the end of the printed interview does the magazine admit that it had used the chatbot Character.ai. 

This is what happens when AI tools that seem innocuous are misused by those with ill intentions. 

The good news is that the editor in chief of the publication has been fired by parent company FUNKE as a result of this controversy. So someone over there has scruples, or maybe the company simply felt it needed to act because of the backlash. 

"This tasteless and misleading article should never have appeared. It in no way corresponds to the standards of journalism that we – and our readers – expect from a publisher like FUNKE,” said Bianca Pohlmann, managing director of FUNKE magazines. 

AI voice scam terrifies mother with fake kidnapping


An even scarier AI scam took place this week when a mother in Arizona received a phone call from fake kidnappers who reportedly cloned her daughter's voice. Jennifer DeStefano picked up the phone because her 15-year-old was out of town skiing, and what she heard was terrifying.

“I pick up the phone and I hear my daughter’s voice, and it says, ‘Mom!’ and she’s sobbing,” said DeStefano. “I said, ‘What happened?’ And she said, ‘Mom, I messed up,’ and she’s sobbing and crying.”

Then DeStefano heard a man's voice say, "Put your head back, lie down."

The mother ultimately confirmed that her daughter was safe after one person called 911 and another called DeStefano's husband, but until then she was convinced that the voice she heard was her daughter's.

“It was completely her voice. It was her inflection. It was the way she would have cried,” DeStefano said. “I never doubted for one second it was her. That’s the freaky part that really got me to my core.”

Who will protect AI from us?

The open letter from more than 1,000 tech leaders, researchers and others, which calls on all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months, asks some very important questions. 

"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?"

Citing “profound risks to society and humanity,” the letter goes on to say that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”

But at least for now, I'm more concerned about how AI can be hijacked by evildoers than I am about chatbots spreading misinformation. It's time to get just as serious about keeping AI and large language models safe from us as we are about keeping ourselves safe from them. 


Mark Spoonauer

Mark Spoonauer is the global editor in chief of Tom's Guide and has covered technology for over 20 years. In addition to overseeing the direction of Tom's Guide, Mark specializes in covering all things mobile, having reviewed dozens of smartphones and other gadgets. He has spoken at key industry events and appears regularly on TV to discuss the latest trends, including on Cheddar, Fox Business and other outlets. Mark was previously editor in chief of Laptop Mag, and his work has appeared in Wired, Popular Science and Inc. Follow him on Twitter at @mspoonauer.
