AI expert sounds alarm on Bing ChatGPT: ‘We need to issue digital health warnings’
Instances like Bing's Sydney persona have AI experts very concerned about chatbots
Bing’s ChatGPT-powered revolution has just hit a wall.
Yesterday, we reported on a litany of instances where the GPT-powered search engine went off the deep end. From the stalkerish behavior of Bing's alter ego "Sydney" to expressions of a desire for sentience to advocacy of violence, there is now growing evidence that the chatbot can be manipulated into doing things it shouldn’t do.
But maybe we shouldn’t be surprised. After all, the idea that relatively novel AI chatbots like ChatGPT are flawed isn’t a new one. Early testing of ChatGPT easily turned up biases and inaccuracies, including an unnerving example where ChatGPT judged whether someone was a good scientist based on race and gender, automatically assigning the value “good” to white males.
"Yes, ChatGPT is amazing and impressive. No, @OpenAI has not come close to addressing the problem of bias. Filters appear to be bypassed with simple tricks, and superficially masked. And what is lurking inside is egregious. @Abebab @samatw racism, sexism." pic.twitter.com/V4fw1fY9dY (December 4, 2022)
After seeing the mounting evidence that there are clear and obvious dangers in using these chatbots, we reached out to Leslie P. Willcocks, Professor Emeritus of Work, Technology and Globalisation at the London School of Economics and Political Science, to get his thoughts on the dangers of AI-powered technology.
Bing with ChatGPT: ‘We need to issue digital health warnings’
Professor Willcocks was unequivocal in his views on the dangers of AI-powered technology such as ChatGPT. In his response to our request for comment, he stated that the inherent flaw in this technology is that it is ultimately programmed by humans who fail to comprehend the sheer mass of data being used and the dangers within it, particularly “biases, accuracy, meaning.”
"These machines are programmed by humans who do not know what their software and algorithms do not cover, cannot understand the massive data being used e.g biases, accuracy, meaning, and cannot anticipate the contexts in which the products are used, nor their impacts, wrote Willcocks. "Because we can do things with such technologies does not mean therefore that we should."
Additionally, the gusto with which early adopters embraced this new technology without considering the ethical ramifications should serve as a warning, now that we’ve seen how alarmingly chatbots like ChatGPT and the new Bing can behave. Willcocks advocates for digital health warnings as a signal for people to stop and consider these implications in the future.
"There are enough moral dilemmas here to fill a text book," said Willcocks. "My conclusion is that the lack of social responsibility and ethical casualness exhibited so far is really not encouraging. We need to issue digital health warnings with these kinds of machines."
Professor Willcocks isn’t the only expert with these ethical concerns. In a Bloomberg article published earlier today, Margaret Mitchell, a senior researcher at AI startup Hugging Face and former co-leader of Google’s AI ethics team, expressed similar reservations: “This is fundamentally not the right technology to be using for fact-based information retrieval.”
Mitchell’s and Willcocks’ concerns certainly echo the evidence we’ve seen recently of ChatGPT and the new Bing’s limitations. From an alternate persona named "Sydney" trying to break up your marriage to inaccurate information about the latest tech, these chatbots simply cannot be trusted right now.
“A year ago, people probably wouldn’t believe that these systems could beg you to try to take your life, advise you to drink bleach to get rid of Covid, leave your husband, or hurt someone else, and do it persuasively,” Mitchell told Bloomberg. “But now people see how that can happen, and can connect the dots to the effect on people who are less stable, who are easily persuaded, or who are kids.”
Microsoft has said that the new Bing is learning as it readies the chatbot for a wider release, but at this stage it seems that the dangers outweigh the benefits.