“Power shadows” are systemic biases in AI arising from the lack of diversity in the data used to train the systems. The term was coined by MIT graduate researcher Joy Buolamwini in a national contest inspired by the film Hidden Figures. According to her research, the methods used to gather data result in skewed AI algorithms and inequitable outcomes for people of color—particularly Black people.
As AI becomes further intertwined in business, including healthcare management, mortgage loan evaluations and identity verification, we must address AI system biases to ensure more equitable treatment for people regardless of race, culture or ethnicity. AI systems are only as good as the data we put into them.
Healthcare
AI-powered systems are being designed to support medical activities ranging from patient diagnosis and triaging to drug pricing. Adriana Krasniansky, PhD, a graduate student at Harvard Divinity School, studies personal and societal relationships to devices and algorithms in healthcare. She looked at how trained AI systems determine the length of time a patient stays in healthcare facilities. She found the data inadequately represented communities of color, which led to skewed outcomes. The systems’ use of zip codes and regional data as proxies further exacerbated the issue.
“The underrepresentation of minority communities in training AI data results in biased algorithms that fail to adequately address healthcare issues for people of color, especially for Black populations,” Krasniansky writes.
In 2016, AI researchers in Heidelberg, Germany, developed a model to detect various forms of melanoma using digital imaging. However, 95 percent of the training images used were of white skin (Dutchen, 2019). The lack of diversity in the imaging impacted the accuracy of detecting melanoma in darker skin tones.
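The mechanism behind this failure is easy to demonstrate. In the hypothetical audit below (the numbers are invented for illustration, not drawn from the Heidelberg study), a toy classifier evaluated on a 95-percent-white-skin dataset posts a reassuring overall accuracy while performing poorly on the underrepresented group:

```python
# Hypothetical illustration: per-group accuracy can diverge sharply
# even when overall accuracy looks acceptable.
# Each record is (predicted, actual, skin_tone_group) for a toy
# melanoma classifier; 95% of examples come from the "light" group,
# mirroring the skew described in the study.
records = (
    [("malignant", "malignant", "light")] * 45   # correct
    + [("benign", "benign", "light")] * 45       # correct
    + [("benign", "malignant", "light")] * 5     # missed cancers
    + [("malignant", "malignant", "dark")] * 2   # correct
    + [("benign", "malignant", "dark")] * 3      # missed cancers
)

def accuracy(rows):
    """Fraction of rows where the prediction matches the ground truth."""
    return sum(pred == actual for pred, actual, _ in rows) / len(rows)

overall = accuracy(records)
light = accuracy([r for r in records if r[2] == "light"])
dark = accuracy([r for r in records if r[2] == "dark"])

print(f"overall: {overall:.0%}, light: {light:.0%}, dark: {dark:.0%}")
# overall: 92%, light: 95%, dark: 40%
```

A single headline accuracy number hides the disparity entirely, which is why per-group evaluation is a baseline requirement for clinical AI.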
Facial Analysis & Recognition
“Facial analysis algorithms often fail to detect darker skin tones due to lack of diversity in training AI data systems,” writes Buolamwini, the founder of the Algorithmic Justice League. Her research notes the unconscious discriminatory consequences that emerge when predominantly white and male demographic data is used to train AI systems.
In 2022, Dr. Abeba Birhane, a scientist researching ethical AI at Trinity College in Dublin, found that the omission of diverse skin tones perpetuates racial stereotyping, societal inequalities, and power dynamics that reflect the injustices in our society. According to her research, the inaccuracies inherently lead to racial biases in facial analysis algorithms, illustrating the need for more inclusive data collection methods and AI system training.
Lending Practices
AI also impacts whether a bank approves a loan application. Michael Armstrong, CTO of AI technology company Authenticx, investigated this by reviewing the AI “language models” banks use as the basis for approvals or denials. He found that Black applicants are more likely to be denied loans and, if approved, are charged higher interest rates than white applicants (Armstrong, 2024). While he found various gradations of racial bias, he notes that Black applicants bear the brunt of this discrimination.
Recommendations
To alleviate these and other racial biases in AI systems, or “Power Shadows,” data training must be diversified and crucial steps must be taken to rectify the defective algorithms:
- Racial biases within AI algorithms must be acknowledged.
- Monumental change must be made in the ways data is collected.
- AI systems must be reconfigured to accurately recognize and represent people with darker skin tones.
- AI systems must be trained with diverse data to accurately detect and reflect the uniqueness of populations applicable to specific situations.
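The second recommendation — changing how data is collected — can begin with something as simple as auditing a training set’s demographic makeup before any model is trained. The sketch below is a minimal, hypothetical example (the group labels, population shares, and tolerance are assumptions for illustration), comparing each group’s share of a dataset against its share of the population the system will serve:

```python
from collections import Counter

def representation_report(samples, population_shares, tolerance=0.05):
    """Compare each group's share of a training set against its share of
    the population the model will serve, and flag groups that fall short
    by more than the given tolerance."""
    counts = Counter(group for _, group in samples)
    total = len(samples)
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "underrepresented": observed < expected - tolerance,
        }
    return report

# Hypothetical training set: (record_id, self-reported group)
samples = [(i, "white") for i in range(880)] + [(i, "black") for i in range(120)]
shares = {"white": 0.60, "black": 0.40}  # assumed service-population shares

report = representation_report(samples, shares)
print(report["black"]["underrepresented"])  # True: 12% observed vs. 40% expected
```

An audit like this does not fix a biased dataset, but it makes the skew visible and measurable — the acknowledgment the first recommendation calls for — before flawed data hardens into a flawed algorithm.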
While we remain enthusiastic about the emergence and increased usage of life-altering AI technology tools, credible research has shown us that AI is not colorblind. Immediate attention must be given to correcting and accurately reflecting our diverse and kaleidoscopic communities. AI systems must avoid repeating historic racial biases in critical sectors of our society that negatively impact lives.
America is growing browner and more culturally diverse, and its people are seeking the same American dream. Technology companies must be willing to act and correct their AI systems. There’s still time to get it right. More diverse participants must be included in data collection. Black researchers and entrepreneurs must become involved in the research and training of AI systems. We must work together to ensure these AI tools work for all of us.
Julia A. Wilson is the CEO & founder of Wilson Global Communications, LLC, an international public relations consultancy in Washington, D.C., and Dean of the Scripps Howard School of Journalism and Communications at Hampton University in Virginia. A USC Alumna, Wilson was selected by the Television Academy as a 2023 Alex Trebek Legacy Fellow. She is a member of the USC Center for PR board of advisers.