Meta acknowledges 'critical risk' AI systems that are too dangerous to develop — here's what that means

Facebook co-founder, Chairman and CEO Mark Zuckerberg arrives to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC. This is the second day of testimony before Congress by Zuckerberg, 33, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign.
(Image credit: Chip Somodevilla/Getty Images)

Meta's internal mantra, at least until 2014 (when it was still Facebook), was to "move fast and break things." Fast-forward over a decade and the blistering pace of AI development has seemingly got the company rethinking things a little bit.

A new policy document, spotted by TechCrunch, appears to show Meta taking a more cautious approach. The company has identified scenarios where "high risk" or "critical risk" AI systems are deemed too dangerous to release to the public in their present state.

These kinds of systems would typically include any AI that could help with cybersecurity or biological warfare attacks. In the policy document, Meta specifically references AI that could help to create a "catastrophic outcome [that] cannot be mitigated in [a] proposed deployment context."

The company states: "This Frontier AI Framework describes how Meta works to build advanced AI, including by evaluating and mitigating risks and establishing thresholds for catastrophic risks."

So, what will Meta — which has done pioneering work in the open source AI space with Llama — do if it determines a system poses this kind of threat? In the first, "high risk" case, the company says it will limit access internally and won't release the system until it has put mitigations in place to "reduce risk to moderate levels".

If things get more serious, straying into the "critical risk" territory, Meta says it will stop development altogether and put security measures in place to stop exfiltration into the wider AI market: "Access is strictly limited to a small number of experts, alongside security protections to prevent hacking or exfiltration insofar as is technically feasible and commercially practicable."

Open source safety

Meta AI logo on a phone

(Image credit: Shutterstock)

Meta's decision to publicise this new framework on AI development is likely a response to the surge of open source AI tools currently sweeping the industry. Chinese platform DeepSeek has hit the world of AI like a sledgehammer in the last couple of weeks and has (seemingly) very few safeguards in place.

Like DeepSeek, Meta's own Llama 3.2 model can be used by others to build AI tools that benefit from the vast library of data from billions of Facebook and Instagram users it was trained on.

Meta says it will also revise and update its framework as necessary as AI continues to evolve.

"We expect to update our Framework as our collective understanding of how to measure and mitigate potential catastrophic risk from frontier AI develops, including related to state actors," Meta's document states.

"This might involve adding, removing, or updating catastrophic outcomes or threat scenarios, or changing the ways in which we prepare models to be evaluated."

Jeff Parsons
UK Editor In Chief

Jeff is UK Editor-in-Chief for Tom’s Guide looking after the day-to-day output of the site’s British contingent. Rising early and heading straight for the coffee machine, Jeff loves nothing more than dialling into the zeitgeist of the day’s tech news.

A tech journalist for over a decade, he’s travelled the world testing any gadget he can get his hands on. Jeff has a keen interest in fitness and wearables as well as the latest tablets and laptops. A lapsed gamer, he fondly remembers the days when problems were solved by taking out the cartridge and blowing away the dust.