Will the Edge Computing Revolution Capture Cameras?

Smartphones have done more than cannibalize digital camera sales. They have, in many ways, one-upped them in the innovation department, at least when it comes to the growing trend of computational photography.

Of course, what smartphone makers tout as innovative advancements in imaging are usually just ways of overcoming the inherent disadvantages of using small lenses and image sensors. Indeed, the phrase “computational photography” underscores the point that it has fallen to software to accomplish what smartphone hardware cannot.

Still, the gains have been impressive. Google has leveraged computation to improve the dynamic range of images taken with its Pixel 2, while Huawei has built its own AI chip (a so-called Neural Processing Unit, or NPU) to enable scene recognition and longer battery life on its Mate 10 Pro. In their attempts to transcend the physical limitations of their hardware, smartphone makers and chip suppliers have mined a rich vein of computational advancements—and those advancements won’t stay confined to smartphones forever.

One reason smartphones have been able to harness AI algorithms is improvements in so-called edge computing, in which some data is processed on local hardware rather than in the cloud. AI algorithms typically require amounts of processing power available only on cloud networks, but improvements in chip design and the use of dedicated NPUs allow phones to handle some of that computational workload themselves, bringing AI-powered photography features into reach.
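
To make the idea concrete, here is a minimal sketch of what on-device inference looks like, using TensorFlow Lite as a stand-in for the vendor-specific NPU runtimes phones actually ship. The model file name is a placeholder; the point is that the classification happens entirely on the device, with no round trip to the cloud.

```python
# Minimal sketch of on-device ("edge") inference. TensorFlow Lite stands in
# for the proprietary NPU runtimes phone vendors ship; the model file name
# is a placeholder.
import numpy as np
import tensorflow as tf

# Load a small, quantized image-classification model that fits on the device.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> np.ndarray:
    """Run one frame (already resized to the model's input shape and dtype)
    through the local model; no network round trip required."""
    interpreter.set_tensor(input_info["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    return interpreter.get_tensor(output_info["index"])[0]
```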

There’s loads of potential for this technology in a traditional camera. For one thing, photo enthusiasts take tons of photos, and managing them after the fact is a mounting challenge. An AI object recognition system working in the camera could serve up metadata tags to help classify images as they’re recorded. Such a system could alleviate a lot of the grunt work of sorting photos and assigning tags to them manually so they can be retrieved later. Images could arrive out of your camera pre-tagged, or even pre-sorted by tag, rather than waiting to be sorted after the fact, as is typically the case today.
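
As a rough illustration of that tagging step, the sketch below takes whatever labels an in-camera recognizer might produce and writes them into an XMP sidecar, the same kind of keyword file that tools like Lightroom and digiKam already read. The labels dictionary, the confidence threshold, and the file names are assumptions for the example, not any real camera API.

```python
# Sketch of in-camera tagging: keep the recognizer's confident labels and
# save them as XMP keywords in a sidecar file next to the image.
from pathlib import Path

XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject><rdf:Bag>{items}</rdf:Bag></dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_tags(image_path: str, labels: dict[str, float], threshold: float = 0.6) -> Path:
    """Write labels above the confidence threshold as keywords in an .xmp sidecar."""
    keep = [name for name, score in labels.items() if score >= threshold]
    items = "".join(f"<rdf:li>{name}</rdf:li>" for name in keep)
    sidecar = Path(image_path).with_suffix(".xmp")
    sidecar.write_text(XMP_TEMPLATE.format(items=items))
    return sidecar

# Example: write_tags("DSC_0042.NEF", {"dog": 0.91, "beach": 0.77, "tripod": 0.12})
```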

Object recognition could also be paired with facial recognition, a close relative of the face detection already standard in camera autofocus systems, to key in on particular people in a frame and then tag them by name. Taking a page from Google, AI-powered bracketing could improve a camera’s dynamic range by sampling a wide range of light and dark pixels and keeping the best of each for the final image.
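
To show the bracketing half of that idea, here is a short sketch that fuses under-, normally, and over-exposed frames of the same scene into one image using OpenCV’s Mertens exposure fusion. It stands in for whatever proprietary pipeline a camera maker would actually build, and the file names are placeholders.

```python
# Sketch of exposure bracketing: blend several exposures so both highlights
# and shadows survive. File names are placeholders.
import cv2
import numpy as np

# Under-, normally, and over-exposed frames of the same scene.
frames = [cv2.imread(p) for p in ("bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness, then blends; no exposure times or tone mapping needed.
fused = cv2.createMergeMertens().process(frames)

# The result is a float image in roughly [0, 1]; scale back to 8-bit to save.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```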

It will be some time yet before such AI-powered features are implemented directly in camera hardware. To start, they will likely emerge in the form of accessories like the forthcoming Arsenal, a tiny dongle that connects to the camera and performs scene recognition and camera-setting optimization on its own CPU and GPU. But as camera processors improve, as they inevitably do, these AI capabilities will eventually make their way to traditional cameras.

Yet even when these capabilities migrate to traditional cameras, camera makers will still be playing catch-up with their smartphone peers. Unfortunately, given the slowing pace of camera introductions and the lengthening product replacement cycle, it will be hard for camera makers to actually take the lead in delivering innovative, computationally driven features. Hard, but not impossible. There’s plenty of color science and image processing knowledge at Canon, Nikon, Sony, et al. that could be unleashed on these challenges.