iPhone 11's Deep Fusion Camera: Is This All There Is?
Deep Fusion impresses — when you notice it
Apple's newest contribution to the smartphone computational-photography arms race came wrapped in a fuzzy sweater, just in time for autumn. Deep Fusion is a method that, by all accounts, generates remarkably detailed photos on the iPhone 11, iPhone 11 Pro and iPhone 11 Pro Max. How? By fusing multiple 12-megapixel camera exposures into a single image, with every pixel of that image given the once-over by intensive machine-learning algorithms.
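Apple hasn't published the details of how Deep Fusion actually works, so the best I can offer is a toy illustration of the general idea behind multi-exposure fusion: take several noisy frames of the same scene and blend them per pixel, favoring whichever frame has the most local detail. The `local_detail` and `fuse_exposures` functions below are hypothetical stand-ins of my own invention, a rough Python sketch rather than anything resembling Apple's pipeline.

```python
# A minimal, hypothetical sketch of multi-exposure fusion -- NOT Apple's
# actual Deep Fusion algorithm. It just shows the general idea: combine
# several noisy frames of the same scene, weighting each pixel by a crude
# local-detail score so the sharpest frame "wins" where texture matters.
import numpy as np

def local_detail(frame, eps=1e-6):
    """Crude per-pixel detail score: magnitude of the image gradient."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return np.sqrt(gx**2 + gy**2) + eps

def fuse_exposures(frames):
    """Weighted per-pixel blend of several aligned exposures."""
    weights = np.stack([local_detail(f) for f in frames])
    weights /= weights.sum(axis=0, keepdims=True)        # normalize per pixel
    stack = np.stack([f.astype(np.float64) for f in frames])
    return (weights * stack).sum(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.random((12, 12))                          # stand-in "scene"
    # Nine noisy captures of the same scene, loosely echoing the multi-frame
    # bursts described in coverage of Deep Fusion.
    frames = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(9)]
    fused = fuse_exposures(frames)
    print("avg error, single frame:", np.abs(frames[0] - scene).mean())
    print("avg error, fused image: ", np.abs(fused - scene).mean())
```

Run it and the fused result lands measurably closer to the "true" scene than any single noisy frame, which is the whole point of stacking exposures, however simplistic this version is.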
Like so many camera phone innovations of the last few years, it's a remarkable combination of sensor technology, optics and software that's transforming how we take pictures.
And yet ... Deep Fusion and its sweater also say a lot about the perplexing era the smartphone industry currently finds itself in. What does it say about a feature that creates appreciably better photographs, but only in details that are hard to notice unless you zoom in all the way and carefully toggle back and forth between samples? Why is it that the best picture Apple could use to show off Deep Fusion was an awkward shot of a dude in a sweater? And what does it mean that Deep Fusion received a clever name and several slides in Apple's biggest media event, but is invisible to users of the iPhone's camera app?
What I'm saying is, this sweater raises a lot of questions.
The most important phone feature
I'm not sure anyone realized it the first time a camera was added to the back of a phone, but today it's obvious: Our world has changed in many important ways because everyone carries a camera in their pocket everywhere they go. Nobody ever lost money betting that the next generation of a smartphone would include an upgraded camera — because the camera is arguably the single most important feature on any phone.
But phones are thin slabs, devices that are absolutely the wrong shape to host high-quality camera optics. Adding camera bumps or pop-out modules only goes so far toward tackling that problem; they're never going to be able to collect light or zoom in or focus the way long lenses in dedicated cameras can. So phone makers have taken to using their great advantage over even sophisticated digital cameras — namely, that they're full of cutting-edge chips, sensors and software. If you can't use optics to make the background of a shot look pleasantly out of focus, just fake it with machine learning, depth mapping and a blur filter. That's what Google and Apple do.
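Again, this is a drastic simplification of what Google and Apple actually ship, but the core trick of simulated bokeh can be sketched in a few lines: get a per-pixel depth map (here I simply assume one exists), then blend the original image with a blurred copy, weighting the blur by how far each pixel sits from the subject. The `fake_bokeh` and `box_blur` functions below are hypothetical toys of my own, not any vendor's real Portrait Mode.

```python
# A toy sketch of "fake bokeh": given an image and a (hypothetical)
# per-pixel depth map, keep near pixels sharp and blend far pixels with a
# blurred copy of the image. Real portrait modes use learned depth
# estimation and far more sophisticated blur; this only shows the idea.
import numpy as np

def box_blur(img, radius=3):
    """Naive separable box blur applied along each image axis."""
    out = img.astype(np.float64)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, out)
    return out

def fake_bokeh(img, depth, focus_depth=0.2):
    """Blend sharp and blurred copies based on distance from the subject."""
    blurred = box_blur(img, radius=4)
    # 0 = in focus, 1 = fully blurred, ramping up with depth difference
    mask = np.clip(np.abs(depth - focus_depth) * 2.0, 0.0, 1.0)
    return (1 - mask) * img + mask * blurred

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((32, 32))                                   # fake image
    depth = np.linspace(0, 1, 32)[None, :].repeat(32, axis=0)    # fake depth ramp
    result = fake_bokeh(img, depth)
    print(result.shape)
```

The interesting design choice in the real thing isn't the blur at all, it's where the depth map comes from — dual cameras, dedicated sensors or pure machine learning — which is exactly the part a sketch like this hand-waves away.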
That brings us to Deep Fusion, which does seem to be a remarkable application of high technology to make the output from the iPhone 11's camera better. If you can't rival a DSLR for clarity of image, then take a hundred megapixels' worth of image data and throw multiple machine-learning processor cores at the problem. By giving Deep Fusion all that time on stage last month, what Apple's really doing is sending the message that the company is focused on making your pictures look better, and if you care about that, you should buy an iPhone. Ironically, the details don't matter — what matters is that Apple is using its technological prowess to make your photos better.
Apple and its competitors know that smartphone buyers really do care about the camera more than anything else. And so they will keep finding ways to make pictures snapped on wafer-thin smartphones look better and better — at least, until those buyers stop caring.
What the users see (or want to see)
Which brings me to a funny twist about Deep Fusion as a clever bit of smartphone marketing, namely that it's an invisible feature. How can an iPhone 11 buyer get hyped about Deep Fusion when it arrives with the iOS 13.2 update, given that it's not actually something you know you're using? The iPhone's Camera app is rife with little icons and stickers to indicate when a photo is being automatically cropped, when you're grabbing Live Photo video to go with the still, and the like. And yet, at least in the iOS 13.2 beta, there's no hint that you're using Deep Fusion.
This isn't unusual; last year Apple made a big deal about Smart HDR for stills and extended dynamic range for videos, and neither of those features is labeled, either. I could argue that it's commendable that Apple doesn't junk up the Camera app interface with little stickers showing how many awesome different features are being used every time I take a shot. But it's also a little weird, right? The iPhone 11's new Night Mode gets a whole sticker, though that's because it requires user interaction; these other features just happen silently.
I think it might be better if Apple at least offered the option to display whether an image was taken with Deep Fusion or not. (The company used to do this with HDR images before Smart HDR became an always-on feature.) But it's an interesting philosophical discussion: Can a feature be so important that it gets a marketing name, but not important enough to be advertised in the interface?
It gets even more interesting when you consider that some of these features are so subtle that you have to really pay attention, and make A/B comparisons, to see the difference. I don't want to advocate for the placebo effect, but even if users can't see the difference in their images at a glance, wouldn't it be nice to reassure them that, yes, this picture was fashioned out of multiple bracketed exposures by Deep Fusion on Apple's advanced machine-learning processors? (For what it's worth, testing by Tom's Guide spotted noticeable if subtle improvements in Deep Fusion photos. In the shot below, for example, the exposure on the Deep Fusion shot is brighter and patterns on the sweater — yes, more sweater photos — look a bit crisper.)
Up next: More sweaters
I don't see photography disappearing as an object of smartphone consumer desire anytime soon. Perhaps advances in screen technology (foldable phones, shatterproof glass) could become a distraction for a while, but in the end, we're all part of a culture that has fallen in love with the ability to never let a memorable life event go undocumented. Along with the ability to look up where you've seen that random character actor in that TV show before, it's one of the greatest things about having a smartphone.
That's why Google, Samsung and Apple will continue to offer us new ways to improve our smartphone photography. More processor cores, more machine learning, more sensors and — most importantly — more cameras. The iPhone 11 Pro may have three cameras, but it feels like we're rapidly heading for a world where the backs of our phones are just covered in lenses, all gathering light that gets stitched together to create the perfect shot we didn't even know we wanted to take.
Apple is also rumored to be working on a much more accurate depth map for 2020, which will be good for augmented reality and also for Portrait Mode.
The only way this will stop is if consumers decide that their smartphone cameras are now good enough and can't conceive of how they could be better. I don't think we're there yet, mostly because of how hard a problem it is to generate good photos out of skinny smartphone bodies. But it will happen. And the day that smartphone buyers stop caring about photography is the day smartphone makers will consider it a solved problem and move on to the next feature category that will drive phone upgrades.
Until then, get ready for more HDR portraits of dogs, more selfies taken in absolute darkness — and most definitely more sweaters.
Jason Snell was lead editor of Macworld for more than a decade and still contributes a weekly column there. He's currently running the Six Colors blog, which covers all of Apple's doings, and he's the creative force behind The Incomparable, a weekly pop culture podcast and network of related shows.