Each year, the fine folks at IFA bring in journalists from around the globe (including yours truly) for a preview of the annual IFA confab in Berlin (1-6 September) and to take in a series of presentations from prognosticators and product makers. Nearly every trending tech topic is discussed.
Among all the industry forecasts and product demos, different takes on three overarching technology issues piqued my interest:
- Growing pervasiveness (and relative lack of intelligence) of AI
- Privacy and security issues surrounding smart/connected products
- The rise of eSports
AI: More Than Just Voice
An AI panel started by recognizing that voice command devices and apps such as Amazon Echo, Google Home, Cortana, Bixby and, of course, Siri represent merely the most obvious AI exposure for consumers. It is the machine learning behind the voices that is driving AI—technology mostly beyond the view and ken of most consumers.
For instance, the future success of autonomous cars relies primarily on their ability to artificially learn about driving the way we do (and, apparently, from “Grand Theft Auto”); Roomba’s latest floor-cleaning robots can “learn” a room’s layout to optimize their cleaning routines; late last year, Google Translate got a machine-learning overhaul; and many voice menu systems, including a growing number of emergency 911 systems, now use “situational awareness” and machine learning to measure the relative importance of a report and to provide an appropriate response.
My question is: How do you actually define “AI”? Regardless of the technologies involved, many devices supposedly imbued with “AI” ain’t that intelligent. For instance, how many times have the aforementioned voice systems failed to comprehend your request? A child may have a brain capable of learning, but that doesn’t make him or her “intelligent.”
If I were asked to define AI, I’d apply these five criteria:
- Natural language: When I have to speak to a device in its language instead of mine, it can’t be that intelligent.
- Context: Samsung’s Bixby and Google Home have claimed contextual capability (e.g. you ask which theaters a movie is playing in, then follow up with a question about show times, and the system understands what you’re referring to), but understanding a question or conversation thread is still beyond most of these voice command modules.
- Learning: I don’t mean just situational learning (stuff about me and mine and my needs/wants) but learning who I am. My Echo, for instance, can’t differentiate between me and someone saying “Alexa” on TV, yet it often doesn’t respond to my wife.
- Independent thought: If these systems are so smart, why do they need to wait for me to ask them a question or remind me of a pre-arranged calendar appointment? Let me know when Echo or Siri can remind me to take an umbrella before I leave the house based on the weather and my calendar.
- “Why” questions: To me, the ultimate AI capability is being able to answer a “why” question.
Of course, there was one other AI issue raised by a reporter (not me) that is more fantasy than reality: sentience, or self-awareness—the old SkyNet, “Will my smart device kill me?” concern. Since today’s AI devices, and those of the foreseeable future, are incapable of formulating independent thought and making rationalizations, marauding machines will thankfully remain the realm of science fiction. While machines are unlikely to be purposely lethal, however, they can be otherwise dangerous.
Privacy and Security: Alarm Bells
A growing number of consumers cover their PC cameras for fear of cyber peeking; whether this practice is practical or paranoid is an open question. But all the recent talk of Russian hacking, as well as FCC chairman Ajit Pai’s anti-net neutrality positions and the U.S. Congress’ recent repeal of Internet Privacy Rules for ISPs, has raised consumer internet privacy and security concerns to new levels. AI systems like those described above from Google, Amazon, etc. don’t have to abide by those recently overturned internet privacy rules either, continuing to set off privacy alarm bells. Coupled with a renewed sense of concern over cyber security, consumers may begin to worry about security and privacy in the same hand-wringing moment (assuming consumer hands are actually wringing).
eSports: A New Spectator Sport
While we’ve all been following developments in AI and security/privacy issues for years, eSports—which was presented as an introductory tidbit at one of the presentations—is a relatively new thing for this middle-aged, non-gaming correspondent.
According to something called the Newzoo Global eSports Market Report, the eSports economy will grow to $696 million this year, a year-on-year increase of 41.3 percent, with a global audience reaching 385 million. According to these folks, 43 million people watched the League of Legends (which I had never heard of) world championship final last year, compared to the 31 million folks who tuned into Game 7 of the NBA Finals between the Cleveland “LeBron James” Cavaliers and the Golden State “Steph Curry” Warriors.
How one tuned into the LoL final I have no idea, but I’m probably not the only one in the connected universe who didn’t. In many ways, the rise of eSports reminds me of the home video revolution of the late 1970s and ’80s. At first fought by the Hollywood studios, the home video market soon became a major source of their income.
With the likely mainstreaming of VR and AR, which has been and will continue to be driven by the gaming community, perhaps e-clueless folks like me ought to be aware of this gaming evolution/revolution playing out beneath our middle-aged cyber noses, unlike the more obvious AI and security/privacy upheavals we can all see coming.