OpenAI co-founder starts new company to build ‘safe superintelligence’ — here’s what that means
Here's what Ilya is doing next
One of OpenAI’s co-founders, who also served as its chief scientist until last month, has started a new company with the sole aim of building ‘safe superintelligence.’
Ilya Sutskever is one of the most important figures in the world of generative AI, including in the development of the models that led to ChatGPT.
In recent years his focus has been on superalignment, specifically trying to ensure superintelligent AI does our bidding, not its own. He was one of the board members who voted to fire Sam Altman late last year, before stepping down from the board himself when Altman returned.
That is the work he hopes to continue with his new company, SSI Inc., the first AI lab to skip artificial general intelligence (AGI) and go straight for the sci-fi-inspired super brain. “Our team, investors, and business model are all aligned to achieve SSI,” the company wrote on X.
The founders are Sutskever; Daniel Gross, a former Apple AI lead turned investor in AI products; and Daniel Levy, a former OpenAI optimization lead and expert in AI privacy.
What is superintelligence?
“Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence…” (Safe Superintelligence Inc. on X, June 19, 2024)
Artificial superintelligence (ASI) is AI with intelligence beyond human levels. “At the most fundamental level, this superintelligent AI has cutting-edge cognitive functions and highly developed thinking skills more advanced than any human,” according to IBM.
Unlike AGI, which is generally defined as AI that is as intelligent as or more intelligent than humans, ASI would need to be significantly more intelligent in every area, including reasoning and cognition.
There is no strict definition of superintelligence, and each company working on advanced AI interprets it differently. There is also disagreement over how long it will take to reach this level of technology, with some experts predicting decades.
One hallmark of superintelligence would be an AI capable of improving its own intelligence and capabilities, widening the gap between humans and AI even further.
How do you ensure superintelligence is safe?
The problem with creating an AI model more intelligent than humanity is that it would be difficult to keep under control or to stop from outsmarting us. If it isn’t properly aligned with human values and interests, it could even opt to destroy humanity.
To solve this, every company working on advanced AI is also developing alignment techniques. These approaches vary, from systems that run on top of the AI model to safeguards trained alongside it. The latter is the SSI Inc approach.
SSI says that focusing exclusively on superintelligence will allow the lab to ensure it is developed alongside alignment and safety. “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” they wrote on X.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the company added. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”