OpenAI GPT-4o is coming — top 5 new features you need to know
The new GPT-4o model is free to all ChatGPT users
The makers of ChatGPT showed off some new upgrades during this week's OpenAI Spring Update.
From the more human-like, natural-sounding voice to Google Lens-esque vision capabilities, a lot of impressive features were revealed in a surprisingly fast series of live demos.
There's a lot happening this week, including the debut of the new iPad Pro 2024 and iPad Air 2024, so you may have missed some of what OpenAI announced. Read on for the 5 biggest updates to ChatGPT.
GPT-4o
There's a new model in town, and OpenAI calls it GPT-4o. This isn't GPT-5, but it is a significant update to OpenAI's existing model.
During the OpenAI Spring Update, CTO Mira Murati said that GPT-4o can reason across voice, text and vision. This "omnimodel" (the "o" stands for omni) is supposed to be much faster and more efficient than the current GPT-4 model.
Based on some of the live demos, the system certainly seemed fast, especially in the conversational voice mode, but more on that below.
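If you want to poke at the new model yourself, GPT-4o is also exposed through OpenAI's developer API. Here's a minimal sketch using the official openai Python package; the prompt is just illustrative, and you'd need your own API key set in the OPENAI_API_KEY environment variable.

```python
# Minimal example: asking GPT-4o a question through OpenAI's chat API.
# Requires the official SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the new omnimodel
    messages=[
        {"role": "user", "content": "In one sentence, what makes GPT-4o different from GPT-4?"},
    ],
)

print(response.choices[0].message.content)
```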
Free to all
GPT-4o isn't locked behind the $20-a-month ChatGPT Plus subscription. In fact, OpenAI is making GPT-4o available to all users, free accounts included.
Beyond the native tools and updates that GPT-4o brings to the table, the change opens up other tools to free users. These include custom GPTs and access to the GPT Store, which offers chatbots built by other users.
Free users also get access to advanced data analysis tools, vision (or image analysis) and Memory, which lets ChatGPT remember previous conversations.
You might be wondering what paid users get now. According to OpenAI, paid users will continue to get up to 5x the message limits of free users.
Conversational speech
The most intriguing part of OpenAI’s live demos involved vocal conversation with ChatGPT.
The new voice assistant is capable of real-time conversational speech, which includes the ability to interrupt the assistant, ask it to change its tone and have it react to your emotions.
During the live demos, OpenAI presenters asked the voice assistant to make up a bedtime story. Throughout the demo they interrupted it and had it demonstrate that it could sound not just natural but dramatic and emotional. They also had the voice sound robotic, sing and tell the story with more intensity.
It was all quite impressive.
Live translation tool
Many of the voice assistant capabilities on display were impressive, but the live translation tool really seemed to take things up a notch.
During the demos, Murati spoke to the voice assistant in Italian while Mark Chen asked it to translate English to Italian and Italian to English. It seemed to work pretty well, and it could be a real boon to travelers.
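The demo ran over voice, which isn't part of the public API yet, but you can approximate the same trick over text. Here's a rough sketch using the same chat API as above; the system prompt is our own wording, not OpenAI's.

```python
# Rough text-only approximation of the live translation demo.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        # Instruct the model to act as a two-way interpreter,
        # much like Mark Chen did (by voice) in the demo.
        {"role": "system", "content": (
            "You are a translator. When you see English, repeat it in "
            "Italian. When you see Italian, repeat it in English."
        )},
        {"role": "user", "content": "Ciao, come stai?"},
    ],
)

print(response.choices[0].message.content)  # e.g. "Hi, how are you?"
```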
Tutoring
ChatGPT is getting a new native vision capability, similar to Google Lens. Essentially, the capability allows ChatGPT to “see” using the camera on your phone.
The demo team showed ChatGPT an equation and asked it to help solve the problem. The AI voice assistant walked through the math step by step without giving away the answer.
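The demo used a live phone camera, which the public API doesn't replicate, but GPT-4o does accept images through the same chat API. A minimal sketch, assuming you have a photo of the equation at a URL (the URL below is a placeholder):

```python
# Sending an image to GPT-4o for tutoring-style help.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A message's content can mix text and images.
            "content": [
                {"type": "text", "text": (
                    "Walk me through solving this equation step by step, "
                    "but don't give me the final answer."
                )},
                {"type": "image_url", "image_url": {
                    "url": "https://example.com/equation.jpg"  # placeholder
                }},
            ],
        },
    ],
)

print(response.choices[0].message.content)
```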
It also appeared able to see changes as they were made.
Combined with the new desktop app, vision also seems to extend to your desktop. In one demo, ChatGPT viewed code on screen, analyzed it, described what the code was supposed to do and flagged potential issues.
Could ChatGPT with the GPT-4o model be the perfect tutor?
There were more updates and tools mentioned during the OpenAI Spring Update, like the new ChatGPT desktop app. There were also demos of face detection and emotion perception.
It should be noted that, as of this writing, not every feature appears to be available yet. The features are rolling out gradually over the coming weeks, but we don't know when specific ones will arrive. The voice assistant, for example, does not appear to be live yet; when we tested it, we still got the old version.
The new model will need some hands-on testing, and we're already starting to see what it can do on our end. Check back with us as we put GPT-4o through its paces.
Scott Younker is the West Coast Reporter at Tom’s Guide. He covers all the latest tech news. He’s been involved in tech since 2011 at various outlets and is on an ongoing hunt to build the easiest-to-use home media system. When not writing about the latest devices, you are more than welcome to discuss board games or disc golf with him.