A robot composed of wiring and circuitry, with a human-like face
(Image generated with assistance from DALL-E.)

AI and reputation: The promise of transformation and the perils of disinformation

As a CTO, it will surprise exactly no one that I’m all-in on the power of artificial intelligence to transform the communications profession (and society, for that matter). In fact, I’ve become known around H+K and with our clients for repeating the phrase, “AI won’t take your job, but someone who knows how to use AI might.” And the environment is only getting more complex as AI gets better, faster, and cheaper. 

In many ways, part of what we’re seeing is a new manifestation of known threats: first seen by governments in the form of propaganda, then as an ugly feature of political discourse where partial truths and controversy have been actively leveraged to engage and activate voters of every stripe. Whether it’s intentionally and maliciously spread as disinformation or unintentionally spread as misinformation, this content has undergone a dramatic and critical shift in the last few years. There’s a new target … you and the brands you protect.

Consider the case of Target, the victim of AI-generated content on TikTok suggesting the company was selling satanic clothing. It turns out the images were traced back to a Facebook post that disclosed the content had been generated by AI. By the time that was discovered, the TikTok post had over a million views.

Most of us aren’t anywhere near ready for this new world. It requires a new set of specialized skills to evaluate the accuracy, precision, and truth of content, all at a scale and pace that far eclipses what we’re built to manage. These risks become material for individuals, businesses, and society as a whole — faster than you can type a prompt into ChatGPT.

A 2019 study from the University of Baltimore estimated the annual cost of misinformation to reputation management at $9.54 billion. Billion (with a capital B). And that was in 2019. Does anyone want to wager that the number is higher today?

In May 2023, stock markets briefly dipped as images of an explosion at the Pentagon went viral. Thanks to open-source intelligence (OSINT) investigators, the hoax was quickly identified, not from a technical analysis of the images themselves but from the lack of corroborating content from other “boots on the ground.”

As tools improve, AI-generated disinformation will only become more compelling and easier to create at scale, and exponentially harder to detect and combat. Unlike what we might historically have thought of as disinformation, often spread by bots or bad actors on social media, Reddit, and the like, AI-generated disinformation is much more subtle and sophisticated. It may be designed to mimic real news stories or even to create entirely new narratives that manipulate public opinion. It can even be used to fake an overnight chart-topper “from” Drake and The Weeknd.

So, what do we do about it? 

First, we need to find these harmful narratives and the deepfakes that support them. Luckily, the technology is advancing every day, helping us leverage equally sophisticated AI to surface harmful narratives as they bubble up and predict where future ones will emerge. But the tech alone isn’t going to be enough. Much of this AI-generated content is built to evade automated detection, designed to look authentic while sidestepping traditional fact-checking and verification methods.

In the Target and Pentagon examples, a bit of clever exploration helped uncover the truth. But what if the content had been about a lawsuit? Or a complex public policy issue? Or cybersecurity? Or all of the above at once? Specialized expertise grounded in an understanding of AI is going to become even more important, as is the ability to convene and mobilize teams that bring diverse skills to bear on mitigating reputational risk and managing reputations.

The AI age doesn’t mark the death of trust or authenticity. But it does require a new paradigm, one that combines human and artificial intelligence, purposefully leveraging data and AI tools in concert with human intuition and experience. Think broadly. The challenges we face demand a full and integrated bench: lawyers, policy experts, strategic communicators, engineers, developers, regulators … you name it.

Ultimately, while so many voices are focused on how AI can help us do more, faster, better, and cheaper (and as one of those voices, I’d challenge us all to engage in those discussions and innovations), let’s not lose sight of the other outcomes these innovations bring. Trust isn’t dead. And it isn’t less important. It’s just harder to secure and more difficult to maintain.

Are you ready for an influx of disinformation? Are you confident you can detect and respond to it? If not, you’d do well to prepare: if your company hasn’t been targeted yet, someone is probably plotting against it now. And if it has, I’d bet it’ll happen again.


Grant Toups is the first global chief technology officer at Hill+Knowlton Strategies. He is charged with working across H+K and WPP to build the firm’s ecosystem of technology-based offerings and improve the use of data science and analytics to drive client success and employee growth and experience. He is a member of the USC Center for PR board of advisers.