It’s Time to Worry About AI Video Technology

It’s a widely known (if little discussed) fact that the adult film industry often serves as a pioneer for video technology. It was the industry’s early backing of VHS that helped to bury Betamax and its embrace of Blu-ray that helped shovel dirt over rival HD-DVD.

So we should all take notice of a series of reports in Motherboard about the emergence of a new type of adult video technology. Only this format doesn’t have the imprimatur of established adult filmmakers or tech companies behind it. Instead, it’s the brainchild of coders working with machine learning tools to swap the faces of adult actresses with those of Hollywood celebrities.

These so-called “deepfakes” are growing in popularity, Motherboard notes. They’re also spreading beyond the confines of adult cinema and victimizing more than Hollywood celebrities. As Motherboard reports, enterprising coders are using publicly available image-scraping tools to cull faces and videos from social media feeds, then superimposing those faces onto adult films using the so-called FakeApp. It’s not an easy process: at a minimum, you need a high-end GPU and the ability to train a neural network. It’s also fairly clumsy: as Motherboard points out, even the most artfully produced fakes still look a bit unreal. But it’s emblematic of how AI technologies developed to solve image-recognition problems are being leveraged for more nefarious ends.
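For a sense of what that training actually involves, here is a minimal sketch, in PyTorch, of the architecture commonly described behind these face-swap tools: a single shared encoder trained with one decoder per identity, so that encoding person A’s face and decoding it with person B’s decoder yields B’s face wearing A’s expression. The layer sizes and training step below are illustrative assumptions, not FakeApp’s actual code.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder
# commonly described as the basis of face-swap tools. Dimensions are
# illustrative; real tools train on large sets of aligned face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

# One shared encoder learns pose and expression; one decoder per identity
# learns to render that identity's face from the shared representation.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)  # stand-in for aligned face crops of person A
recon_a = decoder_a(encoder(faces_a))
loss = nn.functional.l1_loss(recon_a, faces_a)  # each pair trains by reconstruction

# The swap itself: encode A's face, decode with B's decoder.
swapped = decoder_b(encoder(faces_a))
```

The shared encoder is the crucial design choice: because both identities pass through the same bottleneck, it is forced to capture pose and expression in a form that either decoder can render, which is what makes the swap possible at all.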

Of course, the practice of altering images isn’t new. From Stalin airbrushing away political enemies to photojournalists doctoring photos for prize money, image manipulation is an entrenched practice. It’s also incredibly difficult for ordinary people to detect. Fake videos may initially be easier to spot, if only because they’re more difficult to compile. But it would be foolish to believe they won’t grow more convincing with time. (For the doubters, look at the progress chipmaker Nvidia has made in creating fake photos and videos using machine learning.)

What’s more troubling, however, is the social and technological milieu in which this manipulation is taking place. Social media feeds are not only providing the raw material for fake videos; they’re also the host organism that allows this parasitic content to become uniquely dangerous.

It doesn’t take superhuman clairvoyance to appreciate how dangerous AI-generated videos are. The nefarious possibilities are endless: fake videos can be used to discredit political opponents and to shame and harass ordinary citizens. But the broader damage is even more acute: as more AI-generated video content gets dumped into our societal bloodstream, we may well lose the ability to discern the real from the fake.

We needn’t conjure a dystopian future to understand the impact of this trend. It’s happening right now. In 2016, bots widely believed to be operating at the behest of Russian intelligence services pumped disinformation into social media platforms, creating or amplifying conspiracy theories and fictional realities to advance a political agenda. Now imagine this activity amplified with fake videos. Our ability to sort the real from the unreal, already fragile, may well collapse in the face of AI-generated videos.

But the situation isn’t all bleak. The same AI technologies that have been weaponized to create deepfakes can be harnessed to spot them. As more and more fake videos are dumped onto the Internet, machine learning algorithms will have a richer corpus of fake material to train on, sharpening their detection skills. In fact, some of the same techniques used to spot fake still images can be applied to video. Digital forensics experts have even demonstrated the ability to distinguish real people from synthetic ones in video by detecting the presence or absence of a pulse.
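To make that last point concrete, here is a hedged sketch of how such a pulse check can work, in the spirit of remote photoplethysmography: real skin flushes faintly with each heartbeat, so the average green-channel brightness of a face region should oscillate in the human heart-rate band, while a synthesized face often won’t. The function name, frequency band, and threshold below are illustrative assumptions, not taken from any published forensic tool.

```python
# Illustrative pulse-based liveness check: look for a dominant periodic
# component in the heart-rate band (~0.7-4 Hz) of the face's average
# green-channel brightness over time. Threshold is a made-up placeholder.
import numpy as np
from scipy.signal import detrend, periodogram

def has_pulse(face_frames, fps, power_ratio_threshold=0.3):
    """face_frames: array of shape (T, H, W, 3), RGB face crops over time."""
    # 1. Average the green channel over the face region, per frame.
    signal = face_frames[..., 1].mean(axis=(1, 2))
    # 2. Remove the DC level and slow drift (a real tool would bandpass).
    signal = detrend(signal)
    # 3. Check whether a heart-rate-band peak dominates the spectrum.
    freqs, power = periodogram(signal, fs=fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    ratio = power[band].max() / (power[1:].sum() + 1e-12)
    return ratio > power_ratio_threshold

# Toy check: a frame stack with an injected 1.2 Hz flicker "has a pulse".
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = np.random.rand(len(t), 32, 32, 3) * 0.01
frames[..., 1] += 0.5 + 0.05 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
print(has_pulse(frames, fps))  # True for the pulsing stack
```

A production detector would add face tracking, motion compensation, and likely a learned classifier on top of this signal, but the periodogram test captures the core idea: the fake face is missing a physiological rhythm the real one can’t help but broadcast.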

Homer Simpson once observed that alcohol was the “cause of, and solution to, all of life’s problems.”

We could say much the same of artificial intelligence.