ChatGPT will be ‘much less lazy’ in future, claims OpenAI CEO Sam Altman
The chatbot wasn't responding fully
ChatGPT will be “much less lazy” in the future, OpenAI CEO Sam Altman declared on X. His proclamation comes after reports last year that its underlying model, GPT-4, had started refusing to respond to some queries, or responding less fully than it could.
The artificial intelligence lab revealed in December that there had been a deterioration in the large language model's performance. It had effectively become "lazy" after an update.
After updates designed to fix the problem didn’t immediately lead to improvements, Altman told his followers on X that GPT-4 “had a slow start on its new year's resolutions,” but added that it “should now be much less lazy now!”
Why was GPT-4 being lazy?
"gpt-4 had a slow start on its new year's resolutions but should now be much less lazy now!" (Sam Altman on X, February 4, 2024)
AI, particularly large language models like ChatGPT, is especially good at automating dull and repetitive tasks. We use them to answer questions from the mundane to the complex and rely on the fact that they will — mostly — give us a useful response.
Last year, reports started to circulate that ChatGPT had become lazy, in that it wasn’t responding as fully as it once would: giving code snippets instead of fully formed functions, or explaining how to write a poem rather than simply writing the poem.
Some of this was likely in response to updates to the underlying AI model designed to combat misuse and add guardrails against illegal use cases. There were also efforts from OpenAI to reduce the cost of running the expensive model.
What has OpenAI done to fix the problem?
Exactly how OpenAI tackled the laziness problem isn’t clear, but it has released updates to the underlying model since the holiday season. These include new versions of the Turbo models designed to speed up responses without sacrificing quality.
OpenAI says the new GPT-4 Turbo can complete tasks like code generation more thoroughly, while a new GPT-3.5 Turbo reduces the overall cost of completing tasks or responding to queries.
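For developers, picking up the updated models is usually just a matter of changing the model name in an API request. Here’s a minimal sketch using the official openai Python library; the model identifiers shown ("gpt-4-turbo-preview" and "gpt-3.5-turbo") are the publicly documented names at the time of writing and may change in later updates:

```python
# Minimal sketch: calling an updated Turbo model via the official openai
# Python library (v1.x). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # swap in "gpt-3.5-turbo" for cheaper queries
    messages=[
        {"role": "user", "content": "Write a complete Python function that reverses a string."}
    ],
)

# Print the model's full reply
print(response.choices[0].message.content)
```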
There was some speculation that the AI models were “winding down” for the holiday season, which is why they became slower and less responsive. OpenAI denied this, and the models didn’t immediately bounce back to normal in January.
Altman’s quip that ChatGPT had a “slow start on its new year's resolutions” was a fun way of saying the updates were still filtering through. After all, AI is simply responding to our input, working from pre-trained data, and isn’t sentient enough to make its own choices — yet!