Image generated with assistance from DALL-E.

Persuaded to be biased: The under-skin of AI’s spin

In a world of mixed hues and moods where PR practitioners scale corporate stairs with care — basking in our collective potential, aspiring to be consequential — we have yet to view the complexity of AI through mirrors in our rear view.
I was in my sister’s living room, listening to my nephew spew an interesting rap, reading it from the screen of his iPad. As a Black woman and former slam poet, I fought back the self-conscious feeling I get when something stereotypically “Black” is awkwardly appropriated — in this instance, the rap he was reciting. When he completed his recitation, he joyously revealed the rap was created by the multibillion-dollar tech known as artificial intelligence. I wondered how AI generated the stereotypes I heard and felt as the rap played. Did the AI prompt indicate what race or gender the rap should mimic?
For PR practitioners responsible for influencing publics and elevating brands, using AI to generate content or imitate art requires we proceed with caution. An AI-created rap intended to elevate an entire group or illuminate a critical issue might unintentionally perpetuate stereotypes — and so too can well-meaning practitioners using AI.
As AI goes platinum, biased algorithms struggle to see non-white faces, fail to categorize photographs of Black people as human, and erase major parts of history in attempts to fix the bias. Zachary Small’s New York Times article, “Black Artists Say A.I. Shows Bias, With Algorithms Erasing Their History,” describes many of these issues.
In our eagerness to let AI do the easy work — writing press releases, speeches, and tweets, or generating media lists — we may scale hurtful stereotypes. When the unconscious bias we all share leads us to overlook nuances in AI-generated content, we can cause unintended and costly harm to individuals and brands.
AI is drenched in stereotype-laden data created by a disproportionately homogeneous pool of algorithm writers, who are often White males. Attempts to make AI cleaner, less offensive, and more inclusive are complicated, and in some instances, overtly hypocritical. In Billy Perrigo’s Time magazine exclusive, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” he describes the horrors experienced by Kenyans hired to screen out abhorrent content from AI algorithms. While scrubbing offensive data for little pay over long periods of time, many were traumatized by what they saw and had to do.
Throughout history, we’ve witnessed protests of major brands over human rights abuses. Will PR professionals need to tackle similar protests of AI-generated products and experiences?
I asked Rafiq Taylor, an Annenberg graduate student, who is also my son, to share his thoughts. Rafiq said, “AI’s accuracy is limited because it cannot replicate a lived experience. It can replicate shared experiences, but not experiences that are present but unacknowledged. AI can only be as advanced as people are… and the inequality present in the world impacts whether racial bias is even a consideration on the design end.”
I ask my students and teams considering AI for use in our field to keep some fundamental concepts in mind. These practices apply familiar content creation approaches to what is still an unfamiliar tool, requiring caution and deliberation:
1) Test messaging for intended audiences. Have diverse individuals read, react, and advise prior to publishing.
2) Create diverse teams. Diverse PR teams are better equipped to create AI prompts that take nuance into account and are more able to identify stereotypes before content is finalized.
3) Remember the ‘A’ in AI stands for artificial. Keep content hot by ensuring the H — for humanity — is present in what you create and amplify.

Following these key principles, basic in nature but critical to consider, may help us get a stronger handle on AI as a tool in our field. Thus, as we examine the under-skin of AI’s spin, PR practitioners must continue to flip the script, humanizing the beat and elevating the shared harmony of our humanity.

Clarissa Beyah is Chief Communications Officer for Union Pacific and a professor of professional practice at USC Annenberg. Her expertise spans the professional services, healthcare, technology, transportation and utilities sectors, and she has served as a chief communication advisor for numerous Fortune 50 companies, including Pfizer. She is a member of the USC Center for PR board of advisers.