Phone-makers combine algorithms with optics for imaging magic
When Google introduced its Night Sight capability with the Pixel 3 phones, we called it one of the few “wow” features to appear in smartphones in a while (see Black Magic). Over the past few years, smartphones have been advancing along a largely linear path, with displays, processors and batteries getting incrementally better with each iteration. But there has been one exception: the imaging skills of smartphones have improved dramatically, thanks to the power of computational photography.
Anyone who has had an opportunity to use, or simply to see the results produced by, Google’s Pixel products, or more recently Apple’s Night Mode, knows that this is something special: a sort of superpower in the dark of night. It’s another reminder of just how far smartphone cameras have come, and where they’re heading.
In a sea of smartphone sameness, manufacturers have turned to imaging as a main selling point for their devices. This includes Apple, which is increasingly using sophisticated artificial-intelligence software and dedicated chips to leapfrog the competition in smartphone imaging. The iPhone 11 features more lenses than previous iPhone models, echoing the approach of rival flagship devices: it is equipped with two rear cameras, while the iPhone 11 Pro and iPhone 11 Pro Max come with an advanced 12-megapixel triple-lens set-up. Apple is also leaning on new software called Deep Fusion, which works in tandem with the hardware to process and refine photos pixel by pixel for crystal-clear detail.
Deep Fusion shoots eight images automatically before users press the shutter button, and when they do, it captures one more and takes the best features from the nine different images. The system uses machine learning to combine these disparate shots in a second, then serves up a composite with sharper details. I’ve certainly been impressed with the results I’ve achieved on the iPhone 11 Pro Max over the past couple of weeks, and people who’ve seen my pictures have commented on their quality.
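The multi-frame merge described above rests on a simple statistical fact: averaging N independently noisy exposures of the same scene cuts random sensor noise by roughly a factor of the square root of N. Below is a minimal sketch in Python with NumPy, using simulated Gaussian sensor noise; it is only an illustration of that principle, not Apple’s actual Deep Fusion pipeline, which additionally uses machine learning to pick the best detail per pixel.

```python
import numpy as np

# Illustrative only: a burst of noisy frames of the same static scene,
# merged by per-pixel averaging. Not Apple's actual Deep Fusion pipeline.

rng = np.random.default_rng(42)
scene = rng.uniform(0.0, 1.0, size=(64, 64))  # stand-in for the true image

def capture(scene, noise_sigma=0.1):
    """Simulate one sensor frame: the true scene plus Gaussian read noise."""
    return scene + rng.normal(0.0, noise_sigma, size=scene.shape)

frames = [capture(scene) for _ in range(9)]  # a nine-frame burst

fused = np.mean(frames, axis=0)  # merge: per-pixel average across frames

single_err = np.abs(frames[0] - scene).mean()  # error of one frame
fused_err = np.abs(fused - scene).mean()       # error of the merged frame
# Averaging nine frames should cut the noise by roughly a factor of three.
assert fused_err < single_err
```

Real phone pipelines are far more involved: the frames must first be aligned to compensate for hand shake, and moving subjects have to be detected and excluded from the average to avoid ghosting.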
Apple’s Night Mode, which is similar to what rivals Google, Huawei and Samsung have done for very low-light photography, uses artificial intelligence to examine a burst of photos. Night Mode turns on automatically when needed, boosts light, reduces noise and allows users to take seemingly impossible shots in near-total darkness. It’s another form of black magic.
Two decades ago, photography started moving from chemicals to digits. It’s now in the process of making another evolutionary jump, with artificial intelligence-based algorithms working together with optics to create imaging magic. It’s a trend that we expect to accelerate.