How do you test AI that’s getting smarter than us? A new group is creating ‘humanity’s toughest exam’
How smart is an AI, really?
As AI gets smarter and smarter (including breaking rules to prove how capable it is), it's getting trickier to stump. Tests that push GPT-4o to its limits are proving easy for o1-preview, and these models are only going to improve.
There's an understandable train of thought that AI could get too clever for humanity's own good, and while we're perhaps some way off a Skynet-level catastrophe, the thought has clearly crossed the minds of some technology experts.
A non-profit called the Center for AI Safety (CAIS), working with Scale AI, has put out a call for the trickiest questions people can devise for AI to answer. The idea is that these questions will form "Humanity's Last Exam", a higher bar for AI to clear.
Most major AI labs and big tech companies with an AI research division also have an AI safety board or equivalent, and many have signed up for external oversight of new frontier models before release. Finding questions and challenges that properly test those models is an important part of that safety picture.
"Have a question that is challenging for humans and AI? We (@ai_risks + @scale_AI) are launching Humanity's Last Exam, a massive collaboration to create the world's toughest AI benchmark. Submit a hard question and become a co-author. Best questions get part of $500,000 in…" (Center for AI Safety (@ai_risks) on X, September 16, 2024, pic.twitter.com/2l821IfW2f)
The submission form says, "Together, we are collecting the hardest and broadest set of questions ever," and asks users to "think of something you know that would stump current artificial intelligence (AI) systems." Those questions could then be used to better evaluate the capabilities of AI systems in the years to come.
According to Reuters, existing models are already struggling with many of the questions submitted so far, and their answers are scattershot at best. For example, the question "How many positive integer Coxeter-Conway friezes of type G2 are there?" drew answers of 14, 1, and 3 from three different AI models.
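To make that "scattershot" point concrete, here is a minimal sketch (in Python, using the OpenAI SDK) of the kind of harness such a benchmark relies on: pose the same hard question to several models and check whether their answers agree. This is an illustration, not the actual Humanity's Last Exam tooling; the model names and prompt wording are assumptions for the example.

```python
# A minimal sketch (not the actual Humanity's Last Exam harness) of how a
# benchmark surfaces "scattershot" answers: ask several models the same hard
# question and compare the responses. Assumes the openai package is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "How many positive integer Coxeter-Conway friezes of type G2 are there? "
    "Answer with a single integer."
)

# Illustrative model names; any chat-capable models could be swapped in.
MODELS = ["gpt-4o", "o1-preview"]

answers = {}
for model in MODELS:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    answers[model] = (response.choices[0].message.content or "").strip()

# If the models were reliable, every answer would match; on questions this
# hard, they tend to disagree, which is exactly what the benchmark measures.
for model, answer in answers.items():
    print(f"{model}: {answer}")
print("Models agree:", len(set(answers.values())) == 1)
```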
OpenAI's o1 family of models, currently available in preview and mini versions, has reportedly demonstrated an IQ of around 120 and solves PhD-level problems with relative ease. These are also the "lightest" o1 models, with better to come next year, and other models will catch up too, so finding genuinely challenging problems is a high priority for the AI safety community.
According to Dan Hendrycks, Director of the Center for AI Safety, the questions will be used to create a new benchmark for testing future models, and the authors of the chosen questions will be credited as co-authors of the benchmark. The submission deadline is November 1, and the best questions will share a $500,000 prize fund.
A freelance writer from Essex, UK, Lloyd Coombes began writing for Tom's Guide in 2024 having worked on TechRadar, iMore, Live Science and more. A specialist in consumer tech, Lloyd is particularly knowledgeable on Apple products ever since he got his first iPod Mini. Aside from writing about the latest gadgets for Future, he's also a blogger and the Editor in Chief of GGRecon.com. On the rare occasion he’s not writing, you’ll find him spending time with his son, or working hard at the gym. You can find him on Twitter @lloydcoombes.