Elon Musk and other tech leaders call on the makers of ChatGPT and Bard to halt AI training: here's why

Just as the generative AI arms race is heating up, ChatGPT and Google Bard could be in trouble, as a new open letter calls for a halt to the development and training of new AI systems.

As reported by BleepingComputer, more than a thousand people, including tech visionaries like Elon Musk and Steve Wozniak, have co-signed an open letter published by the Future of Life Institute.

The letter suggests that OpenAI, Google and other companies working in this burgeoning field should pause the development and training of AI systems more powerful than GPT-4 for at least six months. During this time, AI development teams would have the opportunity to come together and agree on shared safety protocols, which external, independent experts would then use to audit AI systems.

Much like how social media turned the world on its head, the letter notes that “advanced AI could represent a profound change in the history of life on Earth.” For this reason, the Future of Life Institute and the letter’s co-signers want AI development to be “planned for and managed with commensurate care and resources.”

At the moment, this isn’t the case, as recent months have seen AI labs fiercely competing to “develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” The letter makes a good point here: we’ve already seen ChatGPT’s evil twin DAN go off the rails, along with other AI hallucinations.

It might already be too late

Another interesting tidbit from the letter is that modern AI systems are now directly competing with humans at general tasks, and this will likely only intensify as more advanced AI systems are developed.

We’ve yet to fully consider and ultimately decide on the existential and ethical questions posed by the rapid advancements in AI. At the same time, governments around the world may need to come together and put the necessary regulations in place to prevent AI from taking over while protecting humanity.

If OpenAI, Google and other companies working with AI don’t agree to a pause on the development of AI systems, the letter suggests that “governments should step in and institute a moratorium.” 

While the letter isn’t calling for all AI development to be halted, it does shine a light on how the recent competition in this growing space could lead to things getting out of hand. Nobody wants to live in a world where humans are controlled by AI, and hopefully the companies leading the charge will be willing to take a step back and reevaluate the rapid pace at which this new technology is progressing.

Anthony Spadafora
Managing Editor, Security and Home Office

Anthony Spadafora is the managing editor for security and home office furniture at Tom’s Guide, where he covers everything from data breaches to password managers and the best way to cover your whole home or business with Wi-Fi. He also reviews standing desks, office chairs and other home office accessories, and has a penchant for building desk setups. Before joining the team, Anthony wrote for ITProPortal while living in Korea and later for TechRadar Pro after moving back to the US. Based in Houston, Texas, when he’s not writing, Anthony can be found tinkering with PCs and game consoles, managing cables and upgrading his smart home.