18 October 2017

The Verge:
Google attempts to do the same thing with a single lens that other cameras do with two: detect depth data and blur the background. Most phones do this by combining computer recognition with a little bit of depth data — and Google is no different in that regard.
What is different is that Google is much better at computer recognition, and it’s gathering depth data from the dual-pixel system, where the half-pixels are literally less than a micron apart on a single image sensor. Google’s proficiency at machine learning means portrait images from the Pixel 2 do a better job of cropping around hair than either the iPhone 8 or the Note 8.
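The technique the quote describes, keeping the in-focus subject sharp and blurring pixels whose depth falls outside the focal plane, can be sketched in a few lines. This is a toy illustration, not Google's or Apple's pipeline: the depth map is assumed to already exist (from dual pixels or a second camera), and `portrait_blur`, its thresholds, and the naive box blur are all hypothetical names chosen here.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    # Naive box blur: average each pixel over a (2r+1)^2 window,
    # using edge padding so borders stay the same size.
    if radius == 0:
        return img.astype(float)
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def portrait_blur(img, depth, focus_depth, depth_tolerance=0.1, radius=3):
    # Pixels whose depth is near focus_depth stay sharp; the rest
    # fade towards the blurred copy via a soft per-pixel mask.
    blurred = box_blur(img, radius)
    sharpness = np.clip(
        1.0 - np.abs(depth - focus_depth) / depth_tolerance, 0.0, 1.0)
    return sharpness * img + (1.0 - sharpness) * blurred
```

The hard part in a real phone is not this blend but producing a clean depth map and subject mask in the first place, which is exactly where Google's machine learning and the sub-micron dual-pixel disparity come in.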
The Pixel 2 cameras are very impressive. The sample photos are sharp, and the automatic HDR+ effects make most of the images look hyper-real: probably not the most accurate depiction of the real-life scene, but they look good.
It’s also fascinating to see others achieve Portrait mode features with a single lens. As everything in technology trends towards miniaturisation, the Apple approach of a dual camera system will eventually become obsolete. One day. As it stands, though, the second camera enables a feature that no single-lens phone offers: optical zoom.
The 2x zoom of the telephoto camera is a huge feature. In fact, when the iPhone 7 Plus first launched, the only function the dual cameras served was higher-quality zooming. The depth effect Portrait camera didn’t ship until a month after the phone was released.
The ability to zoom without digital cropping is a big deal. It justifies having two ugly holes poking out of the back of the phone, rather than one. It doesn’t matter that Apple can ‘only’ achieve Portrait mode by using two lenses, at least until Google (or another prominent phone manufacturer) can do optical zoom with a single lens.