Apple announces Visual Intelligence — its take on Google Lens

Apple Intelligence camera control
(Image credit: Apple)

Apple is bringing a new AI-powered feature to the iPhone 16 that will let users turn the camera into a glorified visual search engine. This is similar to Google Lens on Android and is powered by Apple Intelligence, but it also integrates with other apps and services running on the phone.

Visual Intelligence is essentially AI vision, where a language model can analyze and understand images. This is something Claude, Gemini and ChatGPT can also do well.

With its deep integration into the iPhone 16, including access to the new Camera Control button, Apple's approach is likely to be much more user-friendly. One example given during the Glowtime event was using it to add an event from a poster to your calendar.

What is Visual Intelligence?

Apple Camera Control button

(Image credit: Apple)

Apple Visual Intelligence was one of the standout announcements for me during Glowtime. Vision AI is likely to be the most user-friendly AI feature as it lets the AI see the world around us.

Some vision AI features have been on the iPhone for some time, including the ability to copy text from an image or identify an animal in a photo, but Visual Intelligence brings those capabilities to the real world via the camera.
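For anyone curious what that looks like in practice, the copy-text capability is exposed to developers through Apple's Vision framework. Below is a minimal sketch of on-device text recognition using the public VNRecognizeTextRequest API, assuming a UIImage as input; Apple hasn't published how Visual Intelligence works internally, so treat this as an illustration of the existing building block rather than the feature itself.

```swift
import UIKit
import Vision

// Minimal sketch: recognize text in an image on device with Apple's
// public Vision framework. Illustrative only; not Visual Intelligence's
// actual pipeline.
func recognizeText(in image: UIImage, completion: @escaping ([String]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        // Keep the top candidate for each detected line of text.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        completion(lines)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```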

Using a combination of on-device and cloud-based models (the latter through Apple's Private Cloud Compute), Apple Intelligence can analyze what the camera is seeing in near real time and provide feedback.

How it handles the image depends on what you want to do. For example, it could add an event to the calendar if it identifies one in the image, or it could simply tell you a dog's breed. Alternatively, if you see a product you want to buy, you could have Apple Intelligence send the image to Google search.
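To make the calendar example concrete, here is a rough sketch of how recognized poster text could become a calendar entry using two long-standing public APIs, NSDataDetector for finding a date and EventKit for saving the event. The function name, title parameter and one-hour duration are assumptions for illustration; Apple hasn't said this is how Visual Intelligence does it.

```swift
import EventKit
import Foundation

// Rough sketch: find a date in text pulled from a poster and add a
// calendar event. Uses public APIs (NSDataDetector, EventKit); this is an
// illustration, not Apple's Visual Intelligence implementation.
func addEvent(fromPosterText text: String, title: String) {
    // Detect the first date-like string in the recognized text.
    let detector = try? NSDataDetector(types: NSTextCheckingResult.CheckingType.date.rawValue)
    let range = NSRange(text.startIndex..., in: text)
    guard let match = detector?.firstMatch(in: text, options: [], range: range),
          let startDate = match.date else { return }

    let store = EKEventStore()
    store.requestAccess(to: .event) { granted, _ in
        guard granted else { return }
        let event = EKEvent(eventStore: store)
        event.title = title
        event.startDate = startDate
        event.endDate = startDate.addingTimeInterval(60 * 60) // assume one hour
        event.calendar = store.defaultCalendarForNewEvents
        try? store.save(event, span: .thisEvent)
    }
}
```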

How secure is Visual Intelligence?

Apple Intelligence Private Cloud Compute

(Image credit: Apple)

Apple says it doesn't store any images captured as part of an Apple Intelligence search, and that any images sent to the cloud for deeper analysis are removed.

Much of the data gathered by Apple Intelligence features, including Visual Intelligence, is processed on device, particularly on the iPhone 16 with its powerful new A18 processor. Where data does go to the cloud, Apple says it goes to great lengths to protect the information.

This is largely driven by its Private Cloud Compute, a new cloud system built on Apple Silicon and a custom version of the iPhone operating system. As well as ensuring nothing is accessible to anyone beyond the user, the architecture is also open to audit by third parties.

If a user opts in to send data to a third party such as Google for search or OpenAI's ChatGPT for deeper analysis, it won't have the same security guarantees, but Apple says that will always be optional, with nothing sent without express permission.

What are the use cases for Apple Visual Intelligence?

Apple Intelligence camera control

(Image credit: Apple)

Apple Visual Intelligence gives the AI a view of the world outside your phone. It can be used to take a photo of a bag of groceries and have the AI generate a recipe, or of an empty fridge and have it generate a shopping list.

Outside of food, it could be used for live translation of signs, to flag potentially risky ingredients for someone with food allergies, or to identify a location from a simple photo.

If you take a photo of a dog, you can go into Photos and Apple Intelligence will tell you the breed. Now you won't even have to take the photo, as simply holding the camera up to the dog will give you that information. This will also work with spiders or any other animal.
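As a rough illustration of how that kind of identification is already possible on device, Vision's VNClassifyImageRequest returns labeled categories for a photo. The sketch below assumes a UIImage input and an arbitrary confidence cutoff; Apple hasn't confirmed that Visual Intelligence uses this API.

```swift
import UIKit
import Vision

// Minimal sketch: on-device image classification with Vision, the sort of
// capability that can label a photo of an animal. Illustrative only.
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage else { completion(nil); return }

    let request = VNClassifyImageRequest { request, _ in
        let observations = request.results as? [VNClassificationObservation] ?? []
        // Return the first label above an arbitrary confidence threshold.
        let best = observations.first { $0.confidence > 0.3 }
        completion(best?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```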

There are as many use cases as there are things to look at. It could be used to get the history of a building, find a review of a book or even get a link to buy that bike. It is an impressive and logical feature to build into the iPhone 16.
