Llama 4 will be Meta's next-generation AI model — here's what to expect
Agentic AI is on the horizon

One of the largest open-source large language models is poised to roll out its most impressive update yet.
Meta’s upcoming Llama 4, with a release date likely later this year, is widely expected to feature reasoning capabilities and allow for AI agents to use a web browser and other tools.
Meta released Llama 3.3 70B in December, and it came with an impressive decrease in cost alongside a boost in performance and capability. Llama users can expect another major upgrade in the fourth iteration.
What’s new with Llama 4?
Expect Llama 4 to be brimming with power. Meta CEO Mark Zuckerberg said on a second-quarter earnings call that training Llama 4 will require about 10 times more compute than training Llama 3 did.
He added, “It’s hard to predict how this will trend multiple generations out into the future. But at this point, I’d rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new inference projects.”
In April 2024, Meta released Llama 3 with 8 billion parameters, and an upgrade later that year scaled the model up to 405 billion parameters.
Zuckerberg added that Llama 4 will "have agentic capabilities, so it's going to be novel and it's going to unlock a lot of new use cases."
This open-source LLM could effectively mimic an engineer rather than merely responding to prompts: agentic AI can carry out multi-step tasks on its own.
Clara Shih, Meta's head of business AI, has told media that the company expects more businesses to use AI agents to automate complex tasks.
“We already have these trusted relationships with 200 million small businesses around the world. Very soon, each of those businesses are going to have these AIs that represent them and help automate redundant tasks, help speak in their voice, help them find more customers and provide almost like a concierge service to every single one of their customers, 24/7.”
But Zuckerberg cautions against expecting autonomous agents right away, saying they won't become a reality until perhaps 2026.
He said, “I don't think you're going to see this year an AI engineer that is extremely widely deployed, changing all of development. I think this is going to be the year where that really starts to become possible and lays the groundwork for a much more dramatic change in 2026 and beyond.”
Zuckerberg also pointed to the economic benefits of leveraging Llama. “As Llama becomes more used, it's more likely, for example, that silicon providers and others — other APIs and developer platforms — will optimize their work more for that and basically drive down the costs of using it and drive improvements that we can, in some cases, use too,” he added.
Building a bigger boat
Meta's AI tool, which is integrated into Facebook and the company's other apps, has proven widely popular, averaging around 700 million users per month. But that high usage comes at a price.
Infrastructure investment is essential for any AI giant seeking to compete globally. Meta announced it will build a new 2-gigawatt AI data center that will give the company the capacity to train future AI models. Meta reportedly plans to spend as much as $65 billion this year expanding its AI infrastructure.
Llama 4 outlook
Scaling AI models to act autonomously and pursue goal-oriented behavior will be a vital evolution beyond today's tools.
If Llama 4 ships with coding and problem-solving abilities, it will raise the competitive bar, and Alphabet, OpenAI and others will likely race to bring similar agentic features to their own systems.
Proactive AI is the future Meta would like to see, and it's investing billions to make that vision a reality.
More from Tom's Guide
- I didn't think I'd have any use for ChatGPT Deep Research — 7 ways it's improved my daily life
- Google Assistant is losing features to make way for Gemini — here's what's just been axed
- I tested Gemini vs. Mistral with 5 prompts to crown a winner












David Silverberg is a freelance journalist who covers AI and digital technology for BBC News, Fast Company, MIT Technology Review, The Toronto Star, The Globe & Mail, Princeton Alumni Weekly, and many more. For 15 years, he was editor-in-chief of online news outlet Digital Journal, and for two years he led the editorial team at B2B News Network. David is also a writing coach assisting both creative and non-fiction writers. Find out more at DavidSilverberg.ca

















