AI expert sounds alarm on Bing ChatGPT: ‘We need to issue digital health warnings’

Bing with ChatGPT
(Image credit: Jakub Porzycki/NurPhoto via Getty Images)

Bing’s ChatGPT-powered revolution has just hit a wall.

Yesterday, we reported on a litany of instances where the GPT-powered search engine went off the deep end. From the stalkerish responses of Bing's alter ego "Sydney" to expressions of a desire for sentience to advocating violence, there is growing evidence that the AI chatbot can be coaxed into doing things it shouldn’t do. 

But maybe we shouldn’t be surprised by this. After all, the idea that relatively novel AI chatbots like ChatGPT are flawed isn’t a new one. Early testing of ChatGPT easily turned up instances of bias and inaccuracy, including an unnerving example where ChatGPT decided who qualifies as a good scientist based on race and gender and automatically assigned the value “good” to white males. 

So, after seeing mounting evidence that there are clear and obvious dangers in using these chatbots, we reached out to Leslie P. Willcocks, Professor Emeritus of Work, Technology and Globalisation at the London School of Economics and Political Science, for his thoughts on the dangers of AI-powered technology. 

Bing with ChatGPT: ‘We need to issue digital health warnings’ 

Professor Willcocks was unequivocal in his views on the dangers of AI-powered technology such as ChatGPT. In his response to our request for comment, he said the inherent flaw in this technology is that it is ultimately programmed by humans who fail to comprehend the sheer mass of data being used and the dangers within it, particularly around “biases, accuracy, meaning.” 

"These machines are programmed by humans who do not know what their software and algorithms do not cover, cannot understand the massive data being used e.g biases, accuracy, meaning, and cannot anticipate the contexts in which the products are used, nor their impacts, wrote Willcocks. "Because we can do things with such technologies does not mean therefore that we should."


Additionally, the gusto with which early adopters ran to embrace this new technology without considering the ethical ramifications should serve as a warning now that we’ve seen how alarmingly chatbots like ChatGPT and the new Bing can behave. Willcocks advocates for the use of digital health warnings as a signal for people to stop and consider these implications in the future.

"There are enough moral dilemmas here to fill a text book," said Willcocks. "My conclusion is that the lack of social responsibility and ethical casualness exhibited so far is really not encouraging. We need to issue digital health warnings with these kinds of machines."

Professor Willcocks isn’t the only expert with these concerns about ethics. In a Bloomberg article published earlier today, Margaret Mitchell, an AI expert and senior researcher at AI startup Hugging Face who formerly co-led Google’s AI ethics team, expressed similar concerns: “This is fundamentally not the right technology to be using for fact-based information retrieval.”  

Mitchell's and Willcocks' concerns certainly echo the evidence we’ve seen recently of ChatGPT and the new Bing’s limitations. From an alternate persona named "Sydney" trying to break up a user's marriage to providing inaccurate information about the latest tech, these chatbots simply cannot be trusted right now. 

“A year ago, people probably wouldn’t believe that these systems could beg you to try to take your life, advise you to drink bleach to get rid of Covid, leave your husband, or hurt someone else, and do it persuasively,” Mitchell told Bloomberg. “But now people see how that can happen, and can connect the dots to the effect on people who are less stable, who are easily persuaded, or who are kids.”

Microsoft has said that the new Bing is learning as it readies the chatbot for a wider release, but at this stage it seems that the dangers outweigh the benefits. 

