OpenAI insiders warn of 'human extinction' risk from AI systems — urging better whistleblower protections
A stark warning
A group of current and former OpenAI and Google DeepMind employees have claimed that AI companies “possess substantial non-public information about the capabilities and limitations of their systems” which they cannot be relied on to share voluntarily.
The claim was made in a widely reported open letter in which the group highlighted what they described as “serious risks” posed by AI.
These risks include the further entrenchment of existing inequalities, manipulation and misinformation, and the loss of control of autonomous AI systems leading to possible “human extinction.” They lamented the lack of effective oversight and called for increased protections for whistleblowers.
The letter’s authors said they believe AI can bring unprecedented benefits to society and that the risks they highlighted can be reduced with the involvement of scientists, policymakers, and the general public. However, they said that AI companies have financial incentives to avoid effective oversight.
Ordinary whistleblower protections 'insufficient'
Claiming that AI companies know about the risk levels of different kinds of harm and the adequacy of their protective measures, the group of employees said the companies have only weak obligations to share this kind of information with governments “and none with civil society.” They added that broad confidentiality agreements block them from voicing their concerns publicly.
“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” they wrote.
They called on advanced AI companies not to retaliate against risk-related criticism and to create an anonymous system for employees to raise their concerns.
In May, Vox reported that former OpenAI employees are forbidden from criticizing their former employer for the rest of their lives. If they refuse to sign the agreement, they could lose all their vested equity earned during their time working for the company. OpenAI CEO Sam Altman later posted on X saying the standard exit paperwork would be changed.
“in regards to recent stuff about how openai handles equity: we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop. there was…” — Sam Altman (X post), May 18, 2024
In response to the open letter, a spokesperson for OpenAI told The New York Times the company is proud of its track record of providing the most capable and safest AI systems and that it believes in its scientific approach to addressing risk.
“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the spokesperson said.
A Google spokesperson declined to comment.
The letter was signed by 13 current and former employees. All the current OpenAI employees who signed did so anonymously.
The AI world is no stranger to such open letters. Most famously, an open letter published by the Future of Life Institute, signed by the likes of Elon Musk and Steve Wozniak, called for a six-month pause in AI development, a call that went ignored.
More from Tom's Guide
- Elon Musk says all jobs will be optional in the future as AI will take care of us — if we're lucky
- Doomsday Clock is 90 seconds to midnight as experts warn ‘AI among the biggest threats’ to humanity
- OpenAI is paying researchers to stop superintelligent AI from going rogue
Christoph Schwaiger is a journalist who mainly covers technology, science, and current affairs. His stories have appeared in Tom's Guide, New Scientist, Live Science, and other established publications. Always up for joining a good discussion, Christoph enjoys speaking at events or to other journalists and has appeared on LBC and Times Radio among other outlets. He believes in giving back to the community and has served on different consultative councils. He was also a National President for Junior Chamber International (JCI), a global organization founded in the USA. You can follow him on Twitter @cschwaigermt.