The further disintegration of truth

The old proverb, “seeing is believing,” is in serious jeopardy of becoming outdated.

With the advent of the latest AI deep-learning technology, it is becoming easier and easier for any computer-literate content creator to produce and distribute increasingly hard-to-detect fake videos that have people doing and saying things they didn’t really do or say. These so-called “deepfake” videos have become the newest way to spread disinformation and further undermine truth.

Deepfake technology has a variety of mostly harmless applications, including Hollywood special effects, advertising, art and comedy. The first nefarious consumer-generated deepfakes were posted on Reddit in late 2017. These early deepfakes were crude videos that superimposed celebrities’ faces into pornographic movies. In early 2018, one of the creators of these videos released FakeApp, a free desktop software program that allowed just about anyone to create their own deepfake. And just this September, a newly released Chinese deepfake app called Zao became the most downloaded free app on China’s iOS App Store literally overnight.

Since then, Presidents Obama and Trump, Facebook CEO Mark Zuckerberg, Tesla CEO Elon Musk, House Speaker Nancy Pelosi (though her altered video was not actually AI-generated) and a host of celebrities have all been the subjects of manipulated videos. And several YouTube channels are dedicated to these increasingly sophisticated, deceptive videos, including the aptly named Ctrl Shift Face.

While all this seems relatively harmless today, the real concern is how these manipulated videos can be used in far more sinister ways: to sway the outcome of elections, disrupt sensitive international political negotiations, enable new kinds of blackmail, fan social tensions, foment fear, commit fraud and even severely damage an organization’s brand and reputation, as well as those of the executives who run it.

“The circulation of deepfakes has potentially explosive implications for individuals and society,” said then-University of Maryland law professor Danielle Citron in her Congressional testimony. “Under assault will be reputation, political discourse, elections, journalism, national security and truth as the foundation of democracy.”

As if that weren’t enough, deepfakes could ultimately lead us to distrust legitimate videos and other content — or even claim that a real video is a deepfake, a concept Professors Robert Chesney and Citron call “the liar’s dividend.”

As CSO Online senior writer J.M. Porup warns, “If we are unable to detect fake videos, we may soon be forced to distrust everything we see and hear. The internet now mediates every aspect of our lives, and an inability to trust anything we see could lead to an ‘end of truth.’ This threatens not only faith in our political system, but over the longer term, our faith in what is shared objective reality.”

Although in their infancy, deepfakes are already having an impact. According to a recent Pew Research study, nearly 70% of adult Americans surveyed said altered videos and images create a “great deal” of confusion about the “facts of issues and current events.” More than a third said “made-up news” had led them to reduce the amount of news they get overall. And more than three-fourths of U.S. adults said steps should be taken to restrict altered videos and images that are intended to mislead.

So where do we go from here? There are four avenues worth pursuing, but none of them alone is going to stop this potentially insidious trend. First is technology. Although the technology needed to detect deepfakes is improving rapidly, the technology to create ever more undetectable deepfakes is moving even faster. No matter how quickly detection technologies develop, they will likely never keep pace with the creators or prevent all deepfakes from reaching their intended audience. But we need to keep trying.

Second are even more complex legal and regulatory remedies. At the end of the day, it’s a delicate balancing act between speech protected by the First Amendment and the existing laws that protect the targets of deepfakes, including copyright and defamation. Lawmakers are under increasing pressure to find new legislative and regulatory solutions, but those options are very difficult to fashion in a way that neither infringes on free speech nor duplicates existing law.

Third are the actions the social media platforms can take. Some platforms are working hard to improve their detection technology, but others are simply allowing these videos to pervade their sites. There is a clear need for platforms to formalize consistent, more stringent policies on deepfakes without running afoul of existing law.

Finally, there is increased public awareness. Consumers need to become far more knowledgeable about deepfakes and far more skeptical when they see or hear things that just don’t seem logical. The biggest hurdle here is the echo chamber many of us unfortunately live in. As The Guardian columnist Hannah Jane Parkinson noted, “It’s increasingly concerning, in a polarizing world, how few people seem to seek out opinions that diverge from their own, or scrutinize things that adhere to their already held views. The internet has cultivated supercharged confirmation bias.”

As professional communicators we also have the responsibility to protect the organizations and executives we serve from the potential risks associated with deepfakes — and any fake news, for that matter — or at the very least be better prepared to respond to them quickly.

Yet in a recent survey of public relations professionals conducted by the Plank Center for Leadership in Public Relations at the University of Alabama, nearly 60% of respondents called fake news a “serious threat,” but only 13% are doing anything to detect fake information. Meanwhile, 29% of respondents admit they have no plans to address the threat, and only 19% think the issue has any bearing on their work.

“These studies reveal a collective passivity across the profession that’s hard to explain,” says Dr. Bruce Berger of the Plank Center. “It’s time for PR leaders to stop paying fake attention to fake news.” And to deepfakes.

And it’s time for all of us — tech companies, lawmakers, social media platforms, the general public and professional communicators — to step up and aggressively address deepfakes, which are quickly becoming perhaps the greatest threat to informed, objective and accurate public dialogue. We desperately need to restore faith in the truth before it’s too late.


To download a full copy of the 2020 Relevance Report, click here.

Kirk Stewart is the founder and CEO of KTStewart, a firm focused on enhancing value for 21st-century organizations through integrated corporate communications campaigns. He is a USC graduate and a member of the USC Annenberg Center for PR Board of Advisors.