No, ChatGPT did not pass the Turing test — but here’s when it could
ChatGPT could be indistinguishable from a human with GPT-5
ChatGPT has been impressive since its launch in late 2022. However, its tone often struggles to resemble a human’s — we even recommend teaching it how to write in your style in our ChatGPT tips guide. But that could be changing soon.
According to Siqi Chen, CEO of the fintech company Runway Financial, OpenAI could launch the next version of its GPT large language model in December 2023 — shocking given that GPT-4 only came out this March. But what’s even more surprising is that this new model could make ChatGPT's responses indistinguishable from those of a real human.
"i have been told that gpt5 is scheduled to complete training this december and that openai expects it to achieve agi. which means we will all hotly debate as to whether it actually achieves agi. which means it will." (Siqi Chen, March 27, 2023)
This is because, according to Chen’s tweet, GPT-5 is tipped to achieve artificial general intelligence (AGI). Currently, ChatGPT and other chatbots like the new Bing with ChatGPT and Google Bard are “weak AI” or “narrow AI,” terms for AI that is designed to handle only narrow, specific tasks and cannot experience consciousness or sentience. But if ChatGPT becomes an AGI, it would no longer be a narrow AI and could even pass the famed Turing test.
Strong AI vs Weak AI: Which does ChatGPT fall under?
At the moment, all AI are weak or narrow AI. Even when Bing goes off the deep end and proclaims its love for a New York Times reporter, it’s still a narrow AI. It’s just experiencing a hallucination that causes it to pull from the information it was trained on and create an alarming alter ego. It’s not experiencing sentience.
Similarly, ChatGPT is a weak or narrow AI. Even though there is a ton you can do with ChatGPT, technically it is still an AI designed to handle specific, individual tasks rather than to reason generally, and it is therefore still a weak AI.
The Turing test and why it matters for ChatGPT — and you
So then the question becomes, how could GPT-5 make the leap to a “strong AI” or an AGI? The answer is a hotly debated topic, but a lot of people would point to the Turing test.
The Turing test was devised by Alan Turing in 1950 to determine whether a machine can exhibit intelligent behavior indistinguishable from a human’s. In this test, there are three participants: a human, a machine and a judge (who is also human). The judge evaluates text-only conversations with the human and the machine and tries to determine which participant is which. If the judge cannot reliably tell the machine from the human, the machine is considered to have passed the test. Turing argued that such a machine could reasonably be said to think, which is why passing the test is often treated as a marker of AGI.
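To make that three-party setup concrete, here is a minimal sketch in Python. The names here (human_reply, machine_reply, judge_guess, run_imitation_game) are placeholders invented for illustration — a real evaluation would put an actual person and a live model behind them — but the structure (blind, text-only exchanges followed by the judge's verdict) is the test as Turing described it.

```python
import random

# Placeholder participants -- in a real test these would be a person and a live model.
def human_reply(prompt: str) -> str:
    return f"Honestly, I'd have to think about '{prompt}' for a minute."

def machine_reply(prompt: str) -> str:
    return f"That's an interesting question about '{prompt}'. Here is my view..."

def judge_guess(transcripts: dict) -> str:
    # A real judge reads both transcripts and names the machine.
    # This stand-in guesses at random, which is the 50/50 baseline
    # a truly "passing" machine would force the judge toward.
    return random.choice(list(transcripts.keys()))

def run_imitation_game(questions):
    # Hide the participants behind anonymous labels so the judge sees text only.
    participants = {"A": human_reply, "B": machine_reply}
    transcripts = {label: [] for label in participants}

    for question in questions:
        for label, respond in participants.items():
            transcripts[label].append((question, respond(question)))

    guess = judge_guess(transcripts)  # which label the judge thinks is the machine
    passed = participants[guess] is not machine_reply  # machine "passes" if the judge picks wrong
    return guess, passed

guess, passed = run_imitation_game(["What did you have for breakfast?", "Tell me a joke."])
print(f"Judge thinks {guess} is the machine; machine passed: {passed}")
```

In practice the test is run across many judges and many sessions; a machine is only said to pass if, on the whole, judges can do no better than chance at picking it out.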
Given ChatGPT’s chatbot design, it would be a perfect candidate for the Turing test. ChatGPT is conversational by nature, and if you cannot determine whether the text you are reading was created by a GPT-5-powered AI or a human, then that is a game-changer — for better or worse.
On the one hand, this would greatly improve what people can do with the AI chatbot, making it easier to create engaging, realistic written content. On the other hand, the slide into a post-truth era would go into overdrive, and it would become reasonable to question whether anything you read was written by a human or by an AI.
We are already seeing these concerns play out. Midjourney, the popular AI art generator, has already halted free trials over concerns that people were abusing the platform to generate deepfake images that could be mistaken for real photos and go viral.
If this has you concerned, you’re not alone. Elon Musk and other tech leaders recently signed an open letter calling on AI labs to pause training of their most powerful models, citing concerns that AI will continue to advance at a breakneck pace without proper safety protocols in place. Italy has also temporarily banned the service, meaning a ChatGPT VPN is necessary to access it in the country.
While the open letter smacks slightly of hypocrisy — where were these concerns when AI experts were calling for digital health warnings? — the worries it raises still have merit.
Time will tell whether we hit AGI by the end of the year, as Chen predicts OpenAI will with GPT-5, or whether these concerns lead to guardrails that keep machines from gaining sentience a la The Terminator. Either way, AI is not going anywhere anytime soon, and it will likely only get smarter, even if it never achieves AGI or passes the Turing test.