Apple vs. Google: Two Visions for the Smartphone Camera


Apple and Google are famous tech antagonists. Even without the late Steve Jobs’ “thermonuclear” patent war, the two giants would still be locked in a titanic struggle across a number of arenas.

In the past several weeks, the two combatants have jousted over one of technology's central battlefields: the smartphone. Both companies announced new flagships: Apple's iPhone 8 Plus and iPhone X, and Google's Pixel 2. In doing so, they've revealed an interesting divergence in how the two tech giants are approaching this critical front.

Both smartphone launches focused heavily on photography, with Apple and Google executives touting the improved imaging capabilities of their updated products.

Apple has taken, for lack of a better word, a more traditional approach to smartphone photography. The company is leveraging immense amounts of hardware and software engineering in service of a rather mundane (if laudable) goal: creating higher-quality images.

Consider Portrait Mode, originally introduced in the iPhone 7 Plus. By using two cameras instead of one, the iPhone 7 Plus (and 8 Plus and X) can better capture depth and color information about a scene. This information is then fed into advanced processors where object recognition software emphasizes subjects in the foreground by artificially blurring the background.
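To get a feel for what that last step involves, here is a minimal sketch of depth-based background blur, assuming a normalized depth map is already available. This is not Apple's pipeline, which relies on dedicated silicon and learned segmentation; the function name, threshold and blur strength are all invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, threshold=0.5, sigma=8):
    """Composite a sharp foreground over a blurred background.

    image:     H x W x 3 float array in [0, 1]
    depth:     H x W float array, normalized so nearer pixels are larger
    threshold: depth value separating foreground from background (assumed)
    sigma:     blur strength for the background (assumed)
    """
    # Blur each color channel of the whole frame.
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1
    )
    # Soft foreground mask from the depth map; a real pipeline would
    # refine this with segmentation, not a single global threshold.
    mask = np.clip((depth - threshold) * 10 + 0.5, 0.0, 1.0)[..., None]
    # Keep the foreground sharp and fall back to the blurred background.
    return mask * image + (1 - mask) * blurred
```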

In the new iPhone 8 Plus and iPhone X, Apple is expanding on Portrait Mode with Portrait Lighting. Now the software won't just artificially blur the background; it will also introduce lighting effects that mimic the look of professional portrait lighting. Apple made other photographic tweaks as well, such as increasing the pixel size on the iPhone's sensor and enhancing its noise-reduction algorithms to improve low-light performance. The end result is a pair of iPhones that take higher-quality images than their predecessors.
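In the same hedged spirit, the sketch below shows one crude way to "relight" a photo from a depth map: derive surface normals from depth gradients, shade them against a virtual directional light, and blend the result back in. Apple's actual effect draws on far more sophisticated models; the light direction and strength here are arbitrary stand-ins.

```python
import numpy as np

def relight(image, depth, light_dir=(0.3, -0.5, 0.8), strength=0.6):
    """Apply a crude directional 'studio light' using depth-derived normals.

    image: H x W x 3 float array in [0, 1]
    depth: H x W float array (larger = nearer); light_dir and strength
    are illustrative knobs, not anything Apple has documented.
    """
    # Surface normals from depth gradients: n = (-dz/dx, -dz/dy, 1), normalized.
    dzdy, dzdx = np.gradient(depth)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    # Lambertian shading term for a virtual directional light.
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    shade = np.clip(normals @ light, 0.0, 1.0)[..., None]
    # Blend the shading into the original exposure.
    return np.clip(image * (1 - strength + strength * shade), 0.0, 1.0)
```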

Google, too, has pursued similar quality ends, albeit with a single camera instead of a pair. Its Pixel 2 offers improved dynamic range, a portrait mode and several other quality enhancements that landed it at the top of the independent test lab DxO Mark's smartphone image-quality ratings.

But Google isn’t simply focused on maximizing image quality. They’re also maximizing image utility.

With the launch of the Pixel 2, Google is offering a beta preview of a new app called Lens. Lens leverages Google’s progress in machine learning and image recognition to detect objects in a photo and provide information about them. As Frederic Lardinois writes, if you snap an image of a flower, Google Lens will tell you the type of flower you’ve just photographed alongside information about it. Objects like movie posters, artwork, landmarks and restaurants can also be recognized.
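The recognition half of that is the sort of thing an off-the-shelf image classifier can approximate. The sketch below uses a pretrained ImageNet model from torchvision as a stand-in for Lens's proprietary, far broader recognition stack; the filename is hypothetical.

```python
import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

# A pretrained ImageNet classifier as a stand-in for Lens's models.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("flower.jpg")  # hypothetical input photo
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = logits.softmax(dim=1)[0]
top = probs.argmax().item()

# Print the best-guess label and its confidence.
print(weights.meta["categories"][top], float(probs[top]))
```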

Lens will do more than just provide rich information about an image’s contents. If you photograph text, the Lens app will let you highlight and copy the text to extract it from an image. If you photograph a wireless router’s SSID and password, your phone will automatically connect to its network.
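Neither trick requires magic: the text half is classic optical character recognition. As a rough, unofficial approximation (Google's OCR stack is its own; the filename and regex below are made up), an open-source engine like Tesseract can perform the same extraction:

```python
import re

from PIL import Image
import pytesseract  # Python wrapper; requires the Tesseract engine installed

# Extract raw text from a photo, as Lens does for copy/paste.
text = pytesseract.image_to_string(Image.open("router_label.jpg"))
print(text)

# A toy parse for the Wi-Fi case: pull an SSID and password out of the
# recognized text (real labels vary wildly; this pattern is illustrative).
match = re.search(r"SSID[:\s]+(\S+).*?(?:Password|Key)[:\s]+(\S+)", text,
                  re.IGNORECASE | re.DOTALL)
if match:
    ssid, password = match.groups()
    print(f"Would offer to join {ssid!r} with the recognized password.")
```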

With Lens, Google is reaffirming an important truth about smartphone photography: it's not simply about preserving moments in time. It's about gathering and communicating information. Open your camera roll and take a look at its contents. I did recently, and it was an interesting exercise.

My camera roll contains about what you’d expect from a digital photo album: photos of my kids, my cat, my vacations. But there were also photos of a spice rack in the supermarket—an image I texted to my wife because I didn’t understand the difference between dill seed and dill weed (embarrassing, I know). There was also a picture of a fifth grade math assignment—my daughter forgot hers and we needed another parent to send it to us (at least it was blank). Then there was an image of a beer can—a friend wanted to know what delicious beer I had recently served him at my house (it’s called Dr. Citra).

Aside from an incriminating record of my ignorance and vices, this journey through my camera roll reminded me that I use my smartphone camera to communicate the mundane about as much as I use it to memorialize the monumental.

Google appears to understand that. The Pixel 2 is very much a two-pronged pitch: it promises to improve the quality of the memories you want to preserve, while also pulling information from the photos you take simply to gather or communicate something whose importance may be fleeting. Because it sits on vast amounts of data, Google is uniquely positioned to exploit this kind of visual communication. Eventually, it should be able to automatically separate those images, so that users who want to mine their camera roll for fond memories aren't subjected to spice racks and homework assignments.
