Is it time to stop worrying and learn to love deep fakes?

Many of us are familiar with the story of the original War of the Worlds broadcast. On October 30, 1938, Orson Welles delivered a history-making radio broadcast that, despite a disclaimer of its fictional content at the start, convinced at least some portion of the listening public that aliens had invaded Earth.

The Orson Welles of today wouldn’t need radio to dupe the masses into believing something outrageous. He’d just need social media.

The proliferation of disinformation and misinformation on social media had been a growing problem long before the invention of “deep fakes”—artificially generated photos and videos that often stitch a celebrity’s face to other people’s bodies. Thanks to machine learning, the quality and realism of these stitch jobs have improved rapidly. What’s more, thanks to open source code repositories, the software for creating your own deep fake is proliferating.

Naturally, the existence of deep fakes has sparked considerable concern that the rivers of misinformation already flowing through social networks would swell as an even more powerful tool of deceit landed in people’s hands. In 2019, U.S. intelligence identified deep fakes as a “major threat” to the country. And while the crisis of misinformation on social networks hasn’t improved since the 2019 declaration, deep fakes are hardly to blame—at least, not yet.

That’s not to say deep fakes haven’t become more sophisticated. Just the opposite. In this telling example, a deep fake creator was able to produce a more compelling video of the late Carrie Fisher reprising her role as a young Princess Leia than the one created by a major Hollywood studio for an actual Hollywood production. Or take Deep Fake Tom Cruise, who appeared on TikTok as part of a deliberate parody by a visual FX artist. Both are incredibly realistic. By posting these deep fakes to social media and seeing them shared and reported on widely, their creators have also performed an immense service in educating the public: many people now know that it is relatively easy to create realistic but synthetic photos and videos.

But the threat of deep fakes can only be partially mitigated by awareness. If the past few years of social media-fueled misinformation have taught us anything, it’s that the quality of this misinformation—be it a convincing deep fake video or simply a plain text article—is not, in fact, the root of the problem. Most of the misinformation that individuals fall for can be easily debunked, and yet misinformation proves stubbornly persistent (one study found that factually untrue information spreads more widely on social media than truthful information). It may be that people fall for misinformation not because it’s convincing, but because they want to be convinced.

If that’s true, technological solutions that identify deep fakes may only be of marginal utility. As we have seen with text-based misinformation, many social networks are reluctant to move early and aggressively against these posts, and attaching fact-checking labels to posts containing misinformation can actually backfire.

Instead of fearing the rise of deep fakes as a harbinger of a new era in misinformation, perhaps we can look on the bright side of this technology. The ability to create photo-realistic synthetic images and videos could be a boon to creative industries. As the Carrie Fisher example shows, this technology can add new dimensions to visual storytelling, convincingly erasing the appearance of age or even overcoming death itself.

Machine learning algorithms that create realistic images and videos—not simply synthetic copies of real people—may transform stock photography and gaming. After all, why stage an elaborate stock photo shoot when, with the press of a few figurative buttons, you can generate scenery or models who won’t need coffee breaks and won’t protest when they’re placed at the lip of a volcano?