Apple's new OpenELM teases the future of AI on the iPhone
New small language models from Apple
Apple has released a new family of language models, made them fully open source and offered them up on popular AI platform Hugging Face for other developers to play with and adapt.
The iPhone maker has become very active in the open source artificial intelligence space over the past few months and with its latest release the company hopes to help shape the direction of apps built using these on-device language models.
OpenELM is a family of models designed to work well on edge devices such as smartphones and laptops. This is important for Apple, as running AI locally is more secure and better for privacy.
There is no indication of whether these models will form part of Apple’s plans for on-device AI in iOS 18 or upgrades to Siri, but they do show the direction the company is going with its AI.
What is Apple OpenELM?
The name is short for Open Source Efficient Language Models, and they are instruct models designed to be retrained, adapted and integrated into other projects by third-party developers and researchers.
These new models have been designed to be more accurate and efficient. Initially, Apple's focus is on supporting the research community, as OpenELM can be used for investigating model biases, risks and levels of trustworthiness.
There are four models in the family, pre-trained using Apple's open source CoreNet library. All are small language models, with the largest at 3 billion parameters. This is a similar size to Microsoft's new Phi-3 small language model.
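Because the checkpoints are published on Hugging Face, developers can in principle pull one down and experiment with a few lines of Python. The sketch below is a minimal, hypothetical example using the standard transformers loading API; the repository name, tokenizer pairing and trust settings are assumptions to verify against the official model card.

```python
# Hypothetical sketch: loading one of the smaller OpenELM checkpoints from
# Hugging Face with the transformers library. The repo name below is an
# assumption -- check the model card for the exact identifier and tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"  # assumed name for one of the smaller variants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short continuation to confirm the model loads and runs.
inputs = tokenizer("Edge devices can run language models because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```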
What makes OpenELM different?
The big differentiator is achieving performance similar to other open source language models while training on a much smaller dataset. This makes it well suited to niche use cases and research.
Apple researchers wrote in a paper on the new models: “With a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy” compared to other similar sized models.
With the release of the new models Apple also offered code to use the MLX library. This is the toolkit Apple uses to run AI models like Stable Diffusion on its own chipsets.
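For running a model locally on Apple silicon, the community mlx-lm helper package (built on top of MLX) offers a compact workflow. The sketch below is a hypothetical example; the converted checkpoint identifier is an assumption, so substitute whichever MLX-converted model is actually available.

```python
# Minimal, hypothetical sketch of local inference on Apple silicon using the
# mlx-lm package, which wraps Apple's MLX framework. The checkpoint name is
# an assumption -- replace it with a real MLX-converted model repository.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/OpenELM-3B-Instruct")  # assumed repo name

response = generate(
    model,
    tokenizer,
    prompt="Explain in one sentence why on-device AI helps privacy.",
    max_tokens=60,
)
print(response)
```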
Being able to deploy models to edge devices running Apple's own chips could also be a game changer for wearable tech. We could see future Apple AR glasses using an onboard AI to offer information on the wearer's surroundings even when offline.
What does this mean for the future of the iPhone?
OpenELM is primarily a research project, a way for data scientists and people investigating the safety and accuracy of AI models to run code more efficiently.
However, it further shows Apple's commitment to creating AI models able to run on devices like iPhones, iPads and MacBooks efficiently without compromising capability.
One reason Siri has always been seen as lagging behind other legacy AI assistants like Alexa and Google Assistant is that Apple ran much of its functionality on device, meaning it couldn't draw on as much computing power for complex tasks.
A lot of Apple's recent work on AI, including research into improving the efficiency of memory usage, running models on the Neural Engine and new language models that work from a single prompt, has been aimed at this goal, and OpenELM is no different. It could even lead to a framework developers could use for AI in apps.