Fans of any good spy drama know the scene well. Our hero is out in the streets, frantically chasing down some goons. Back at headquarters, an army of analysts sit at their consoles monitoring security camera feeds and offering real-time guidance to our hero as she navigates the cluttered crossways.
In the dramatic telling, it’s human eyes and ears that take in the signals from security cameras to keep our hero on track. In 2017, it’s artificially intelligent software.
Chipmaker NVIDIA announced a new platform this month called Metropolis that not only ingests security camera video, but also uses machine learning to analyze its contents in real time to detect incidents, manage traffic and “optimize resources.” According to NVIDIA, there should be 1 billion surveillance cameras active around the world by 2020: a firehose of data that no conceivable collection of civil servants could adequately monitor. Armed with Metropolis, urban planners could monitor traffic flows, and first responders could be alerted to crimes and track fleeing suspects with greater speed and precision.
Metropolis is actually a constellation of NVIDIA technologies, including tools already offered to streaming video services for video categorization. Some of the intelligent surveillance technology lives on the “edge,” that is, on devices deployed in the field; other Metropolis components rely on data centers and cloud processing. NVIDIA has an obvious interest in pushing machine learning deep into as many sectors as possible, given that deep learning algorithms run best on Graphics Processing Units (GPUs), an NVIDIA specialty. But given the sheer volume of data that connected cameras generate, video surveillance is especially fertile ground.
While Metropolis is geared toward municipalities, similar AI-driven technologies are trickling down into the consumer home security camera market. The Kickstarter-funded Flare security system, for instance, incorporates machine learning to enable its on-board microphone to distinguish between normal household sounds and those that could signal danger. Its camera can also distinguish people from pets and friends from strangers. Google’s Nest has taken a similar approach with its Nest Cam Outdoor.
Late last year, Qualcomm threw its hat into the ring with the Snapdragon 625, a chip that could perform “on-camera deep learning” and only begin streaming video when triggered by a qualifying event (not simply the usual motion triggers that are already widespread in the industry). Qualcomm said that devices using the chip would be “conscious cameras.” That’s obviously a stretch, but it does serve as a useful reminder that smart devices powered by AI are indeed becoming smart enough to extract meaningful and actionable information from the world around them.
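The distinction Qualcomm is drawing, between dumb motion triggers and event-based gating, can be sketched in a few lines. The sketch below is purely illustrative: Qualcomm has not published its on-camera pipeline, so the event labels, confidence threshold, and function names here are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical labels an on-device model might emit; the real chip's
# label set and thresholds are not public.
QUALIFYING_EVENTS = {"person_detected", "glass_break", "package_left"}

@dataclass
class Frame:
    """Stand-in for one analyzed video frame."""
    label: str         # classifier output for this frame
    confidence: float  # classifier confidence in [0, 1]

def should_stream(frame: Frame, threshold: float = 0.8) -> bool:
    """Gate streaming on a qualifying event rather than raw motion.

    A motion trigger fires on any pixel change; an event gate fires
    only when the on-device model reports a label of interest with
    sufficient confidence.
    """
    return frame.label in QUALIFYING_EVENTS and frame.confidence >= threshold

# A swaying tree would trip a motion detector, but not the event gate.
print(should_stream(Frame("foliage_motion", 0.99)))   # False
print(should_stream(Frame("person_detected", 0.91)))  # True
```

The point of doing this gating on the camera itself, rather than in the cloud, is that uninteresting footage never has to leave the device, saving bandwidth and narrowing what gets uploaded.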