Two text bubbles, one with the word AI, the other with an ellipsis (...)
Image generated by AI using iStock.

AI proves ‘Those who know, know’

Misinformation swirls through our online feeds like invisible currents, shaping what we believe before we even notice. In a media-saturated world, we’re faced with increasingly sophisticated content across platforms, from advertisements to politics to news. At times, bad actors exploit media by using artificial intelligence to create fake content, including deepfakes – AI-generated or manipulated videos or images that make it look as though someone is saying or doing something they never did – planting seeds of doubt.

We set out to test the limits of perception with an experiment: how would communication professionals fare against AI-driven falsehoods? The results do more than expose those limits – they show how knowledge, deep and specific, stands as the only shield in battles over issues like climate change.

Our experiment, a small-scale survey conducted electronically, revealed critical insights into how knowledge serves as both sword and shield against the complexities AI introduces into the media ecosystem. In the survey, the participants – media experts, journalists, educators – faced headlines that challenged their understanding. Some headlines reflected scientific consensus, while others, crafted by algorithms, carried deliberate falsehoods. 

Real headlines came with authentic images, while doctored ones were paired with manipulated visuals. Participants were asked to refrain from consulting news sources or search engines like Google while taking the survey, relying only on instinct and existing knowledge to make their decisions.

The task appeared simple: decide what to trust. Yet simplicity evaporates in the fog of today’s media landscape. Those versed in climate discourse, particularly journalists, sliced through the deception with confidence. Those less informed stumbled, some unable to identify a single truth.

Scores revealed a divide. Experts, well-versed in the rhythms of climate discourse, identified deceptions with ease. Their knowledge cut through the fog of false information, allowing them to see clearly: seven correct answers out of seven. They held the tools necessary for the fight. Others, lacking that familiarity, struggled in confusion – zero correct answers, or two at best. The takeaway became clear: those less informed about the news and climate science were the most vulnerable to misinformation.

Yet, even expertise doesn’t guarantee invulnerability. Fake media doesn’t have to be flawless to create doubt. A single fabricated headline can spark uncertainty, casting suspicion on everything that follows. Even those with experience sometimes wavered when faced with subtle manipulations. The boundaries between confidence and uncertainty blurred, revealing just how fragile the line becomes under pressure.

Our experiment illuminated AI’s ability to target gaps in understanding. Climate change, a polycrisis fraught with policy and economic complexities, offers fertile ground for exploitation. Bad actors manipulating media with AI can infiltrate these gaps, planting counterfeit truths where understanding runs shallow.

The participants who excelled saw the patterns, recognized the traps, and sensed when something looked wrong. Their advantage lay not in critical thinking alone, but in experience – years of dissecting narratives and sharpening instincts. Journalists, in particular, navigated the falsehoods with precision. But this edge belongs to the few.

Many find themselves unequipped to navigate the complexities of AI-driven misinformation. The challenge lies not in deceiving all but in misleading just enough. Without expertise, individuals can be susceptible to the careful distortions AI can produce. Climate change may be just one front, but the implications stretch across every issue.

The solution? Depth. The experiment revealed that those with a profound grasp of climate science consistently identified the deceptions. Superficial knowledge proves fragile, unable to withstand sophisticated falsehoods. Developing this clarity comes from engaging with trusted sources, using available tools, and looking beneath the surface of what’s presented. By refining how we process information, we sharpen our ability to cut through misinformation.

Trust, too, will transform. Reliance on traditional institutions may give way to a more discerning approach. We predict that people will seek out experts – those whose immersion in specific fields offers unmatched clarity. Credibility will belong to those who understand at the deepest levels, rather than those who merely convey information.

Moreover, AI may contribute to the spread of misinformation, but it can also be part of the solution. Tools that detect deepfakes and false information already exist, and they’re only getting better. In the future, AI can play a crucial role in rooting out the very falsehoods it helps create. Our experiment underscores the need for both sides of this equation: expertise to spot the lies and AI-driven tools to expose them before they do harm.

Knowledge gleams. Our minds navigate swirling currents of truth and fabrication. AI, our creation, our tool, our challenge, amplifies both clarity and confusion. Yet through this maelstrom, one beacon shines unwavering – deep, hard-won understanding. Expertise, honed through years of immersion and critical thought, cuts cleanly through the noise. AI has not weakened this truth; it has sharpened it. As complexity surges, so must our understanding. Surface skimmers sink; deep divers thrive.

Allison Agsten leads USC Annenberg’s Center for Climate Journalism and Communication, leveraging her diverse experience from CNN and LACMA to shape the future of climate communication. She pioneers art-focused climate discussions as the first curator of the USC Wrigley Institute for Environmental Studies.

Michael Kittilson, a graduate student at USC Annenberg, leads editorial and research projects at the USC Center for Public Relations, where he collaborates with brands like Microsoft and StoryCorps. With a focus on crafting innovative, data-driven narratives, he helps organizations connect with a wide range of audiences across an evolving media landscape.

Olivia Smith is an Emmy Award-winning journalist and media consultant based in Los Angeles. She is a versatile content creator with extensive experience in print, broadcast, multimedia, and digital journalism.