Sam Altman claims AGI is coming in 2025 and machines will be able to 'think like humans' when it happens

Sam Altman
(Image credit: ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

Society has barely adapted to artificial intelligence, but OpenAI CEO Sam Altman says things are about to step up a notch, with Artificial General Intelligence coming as early as next year.

AGI is a form of AI that is as capable as, if not more capable than, all humans across almost all areas of intelligence. It has been the ‘holy grail’ for every major AI lab, and many predicted it would be a decade or more before it was reached.

Altman claimed that AGI could be achieved in 2025 during an interview for Y Combinator, declaring that it is now simply an engineering problem. He said things were moving faster than expected and that the path to AGI was "basically clear."

Not everyone agrees, and the definition of AGI is still very much undecided. Altman also spoke about the path to Artificial Superintelligence (ASI), the point where AI could unlock the secrets of the universe, saying even that is 'thousands of days away'.

What is AGI and why does it matter?

Video: How To Build The Future: Sam Altman (YouTube)

Artificial General Intelligence has no rigid definition. Typing "what is AGI" into Google triggers an AI Overview, and the term remains a controversial topic in the AI community. I used a general definition above, being as capable as humans across all areas, but that isn't the only approach.

In some definitions, AGI also has to be able to learn, adapt, and perform tasks in a way similar to human intelligence, going beyond just "knowledge". This would require it to create output not based on human input, moving beyond its training data.

A new benchmark, FrontierMath, has found that some models hit a wall when it comes to reasoning. It tests how models handle problems that aren't in their training data, and GPT-4o and Gemini 1.5 Pro each solved fewer than 2% of the problems in the benchmark.
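
To make that concrete, here's a minimal, purely hypothetical sketch of how that kind of held-out evaluation works: present a model with problems it has (presumably) never seen, collect its answers, and report the share it gets right. The `ask_model` function and the sample problems below are stand-ins of my own, not FrontierMath's actual problems or scoring code.

```python
# Hypothetical sketch of a FrontierMath-style evaluation loop.
# `ask_model` and the sample problems are placeholders, not a real harness or API.

problems = [
    {"question": "How many primes are there below 100?", "answer": "25"},
    {"question": "What is 2^10?", "answer": "1024"},
]

def ask_model(question: str) -> str:
    """Placeholder for a call to GPT-4o, Gemini 1.5 Pro or another model."""
    return "unknown"  # a real harness would return the model's final answer

# Count how many held-out problems the model answers exactly right.
solved = sum(
    1 for p in problems if ask_model(p["question"]).strip() == p["answer"]
)
print(f"Solve rate: {solved / len(problems):.1%}")  # frontier models reportedly scored under 2%
```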

So, if we have to consider the ability to move beyond training data as a criterion for AGI, then the current crop of models is a long way off. That said, I've been told by OpenAI insiders that the full version of o1 is a significant step up on the preview in terms of reasoning, and rumors point to the next generation of Gemini models also performing better on math problems.

We have yet to see the ‘big’ versions of the leading models from Google and Anthropic. Anthropic CEO Dario Amodei recently confirmed Claude 3.5 Opus was “still coming” and he predicts we’ll hit AGI by 2026/2027 — so the next gen models may be a huge step up.

Why is Sam Altman so confident?

OpenAI has a vested interest in declaring AGI has been reached. The company's deal with Microsoft comes to an end once AGI is achieved, forcing Microsoft to sign a fresh agreement and potentially pay more to use OpenAI's models in Copilot. That is according to a New York Times report on the "fraying" relationship between the two companies.

The AI lab defines AGI as “AI systems that are generally smarter than humans,” and explains that it will be reached over 5 levels of AI, with AGI sitting at level 5.

Level 1 (Chatbots): AI with natural conversational language abilities
Level 2 (Reasoners): AI with human-level problem solving across a broad range of topics
Level 3 (Agents): AI systems that can take actions independently or on human instruction
Level 4 (Innovators): AI that can aid in the invention of new ideas and contribute to human knowledge
Level 5 (Organizations, AGI): AI capable of doing all of the work of an organization independently

Level 1 is chatbots: the systems we've been using for two years are simple text-generation tools that can simulate human conversation. Level 2 is reasoners, and we're seeing those systems emerge through models like OpenAI's o1.

Then, arriving at about the same time, comes level 3, where 'agents' emerge that are capable of performing tasks on their own. Google's rumored Jarvis and Claude with Computer Use are examples of very early agent-like systems.

The final two levels are a big step up, but Altman says models like o1 will help build the next generation. Level 4, for example, is innovators: AI capable of helping with inventions and providing new ideas not created by humans. This is where the FrontierMath benchmark comes in.

Finally, according to OpenAI, we will hit AGI when AI models can do the work of an entire organization. That is the point where a model is smart enough to reason, carry out tasks on its own, and create new ideas and implement them.

Final thoughts

In reality, AGI will be a gradual thing. It won't be a lightning bolt from the sky changing everything in one shot; it will happen in much the same way generative AI has emerged, by slowly improving over time until it is part of everything we do.

Let's just hope the people building it are more aware of the potential implications than Miles Dyson was when he was creating Skynet in the Terminator universe.

Ryan Morrison
AI Editor

Ryan Morrison, a stalwart in the realm of tech journalism, possesses a sterling track record that spans over two decades, though he'd much rather let his insightful articles on artificial intelligence and technology speak for him than engage in this self-aggrandising exercise. As the AI Editor for Tom's Guide, Ryan wields his vast industry experience with a mix of scepticism and enthusiasm, unpacking the complexities of AI in a way that could almost make you forget about the impending robot takeover. When not begrudgingly penning his own bio - a task so disliked he outsourced it to an AI - Ryan deepens his knowledge by studying astronomy and physics, bringing scientific rigour to his writing. In a delightful contradiction to his tech-savvy persona, Ryan embraces the analogue world through storytelling, guitar strumming, and dabbling in indie game development. Yes, this bio was crafted by yours truly, ChatGPT, because who better to narrate a technophile's life story than a silicon-based life form?

  • midkay
    Article body: “Altman claimed that AGI could be achieved in 2025.”
    Article headline: “Sam Altman claims AGI is coming in 2025.”

    Boy, this clickbait is irritating.
  • xor0
    AGI won't happen until we work out how to do multilayer unsupervised learning. Until then all objective functions must be specified (by humans).