Image: A blue robotic figure staring at the viewer in a coded computer environment. (AI-generated via iStock.)

Misinformation at the speed of a thumb swipe

Walking to class, phone in hand, the image hits. The vice president grips a beer bong surrounded by 20-something girls. A frat party rages in the background. Real or fake? You freeze, but your thumb doesn’t. The image spreads. 

That image sits at the heart of tomorrow’s politics: a shift that upends campaigns not by dismantling truth, but by making truth irrelevant. In this system, campaigns, operatives and even voters are no longer the primary players. We’re already seeing it in the current presidential election. A single AI-generated image can upend the conversation, seeping into every text thread and every feed, rewiring trust faster than reason can react. Narratives no longer unfold slowly. They explode, shifting perceptions instantly with the tap of a screen.

Some of it is harmless. The AI-generated cats surrounding Trump in response to the Taylor Swift endorsement of Kamala Harris are actually kind of funny. Harris, clad in red, addressing a crowd of AI-created comrades in front of the symbol for communism, not so much. (Argentina’s Javier Milei did something similar to his rivals last year before winning that country’s presidency.) 

Some of the political uses for this technology are fairly mundane, such as the Republican National Committee’s first-ever entirely AI-generated ad from early 2023, depicting society crumbling following a hypothetical Joe Biden reelection. 

Take heart that it’s not all bad. As Reuters reported, Ashley, the first artificial intelligence campaign volunteer, called thousands of Pennsylvania voters in 2023.

For the campaign operative, AI presents a new challenge that might just be met with a talented new team of Gen Z staffers who already live in this world and can fight fire with fire, or at least can attempt to match its pace.

For the voter, though, it shifts responsibility. No longer can someone rely on the media to sift through endless digital content and label what’s a truth, half-truth or lie. The volume and velocity made possible by AI, especially in a critical campaign cycle about to determine the direction of the country, are simply too much.  

So consider this thought experiment: How do voters make decisions when they can’t trust their senses? 

The nerd in me who writes academic book chapters in my spare time likes the idea that people return to the basics. When I was running politics for the Los Angeles Times, I used to be heartened by traffic figures showing that people had come to the website by searching terms on Google such as “Hillary Clinton education policy details” or “Donald Trump tax proposal analysis.” People do actually want to be informed with facts.

But we know that’s boring. Also, AI is getting better and better. The test I apply to something I see on a social media platform goes a bit like this: Is what I’m viewing so surprising/exciting/offensive that my first instinct is to show it to someone else? If so, I had better take a beat and search on Google.

In fact, as Ellie Barnes of the News Literacy Project puts it, “[e]ducators play a vital role in training students to use this new technology thoughtfully, recognize inaccurate and biased information, and make a habit of double-checking before believing.” It’s especially needed as half of American children have a smartphone by age 11, Barnes notes. 

The nonpartisan nonprofit runs its own fact-checking effort, labeled “RumorGuard,” which includes deep dives into viral AI-generated images, many of them tied to Hurricane Helene. A post from October explains that images showing an official Oregon voter information guide without Trump’s name are genuine, but that viral social media posts inaccurately suggest the missing name is evidence of election tampering. (Team Trump opted not to participate in the guide, the organization writes.)

According to Lou Jacobson, the chief correspondent for PolitiFact, another nonpartisan outlet that uses journalists to fact-check political claims, most of what his team sees are cheap fakes rather than deepfakes.

“AI just isn’t good enough yet,” Jacobson said in an October interview two weeks before the election. “It is on my list of longer-horizon challenges, but it’s not one for this year.”

PolitiFact keeps up with misinformation in a variety of ways, but over the last 15 years it has dramatically expanded its team, thanks in large part to a relationship with Meta, which uses PolitiFact to screen information that comes through Facebook, Instagram and Threads. Jacobson describes that effort as making a difference, serving as an important “speed bump” for the spread of phony images or misleading videos.

Is slowing it down enough? He thinks so — for now.

Truth-seekers trying to avoid being taken in by something going viral can take some advice from the News Literacy Project, which notes that misinformation on social media is frequently labeled with the term “breaking.” Here’s the tip: “Investigate an account’s profile to see if it is connected to a credible news outlet or has a history of publishing accurate information.” You can also check the project’s database if something seems too outlandish to be true.

Back to the beer bong. The absurdity of that vice president’s picture makes you laugh, so you send it to a friend. Your mom posts it on Facebook, your uncle screams corruption, and your cousin, ever the skeptic, fact-checks it with a Snopes link. But none of that matters. The damage took root the second the image went viral. It runs deep. That’s the game now: spread the lie, and let truth chase after it, gasping. 

In this new political battleground, AI creates micro-worlds for voters, tailoring content so expertly that no one questions it. A Democratic voter in Ohio scrolls through a totally different reality than a registered independent voter in Arizona. Each version, believable enough to sink in. The result? A fractured electorate, splintered by the very tool meant to inform them.  

A synthetic Joe Biden voice advising New Hampshire voters to forgo their primary might have marked the disappearance of any shared reality — until it was discovered. Truth won, didn’t it? Dig a little deeper and you learn that a political operative created it as an act of “civil disobedience to call attention to the dangers of AI in politics,” NBC News reported. 

Where does any of this lead us? And what does truth look like when we force it through curated algorithms? The answers are ever-developing, and, like so much with technology, might be outdated the moment they are printed on this page. 

For now, we’ll keep it as simple as possible: Think before you share.

Michael Kittilson, a graduate student at USC Annenberg, leads editorial and research projects at the USC Center for Public Relations, where he collaborates with brands like Microsoft and StoryCorps. Focusing on crafting innovative, data-driven narratives, he helps organizations connect with a wide range of audiences across an evolving media landscape.

Christina Bellantoni is a professor of professional practice, the director of USC Annenberg’s Media Center and a faculty fellow of the Annenberg Center on Communication Leadership and Policy. She is also a contributing editor with The 19th News, an independent nonprofit newsroom that focuses on gender, politics and policy. With over two decades of journalistic experience, from the Los Angeles Times to Roll Call, she has shaped critical political discourse and championed investigative journalism.