Image: a robot looking pensively to the right. Generated by AI using iStock.

Responsible and equitable AI is community-driven AI

Concerns about who benefits from the advancement of AI and who bears its heaviest burdens loom large in the minds of those who study the societal implications of AI and those who seek to develop and deploy the technology responsibly. These concerns reflect more than technological cynicism: they are grounded in extensive evidence of AI systems that reinforce bias and in the fact that AI innovation is rarely directed toward benefiting marginalized communities.

The philosophy that AI should benefit our most vulnerable communities and, therefore, should be developed in partnership with those communities is the guiding principle of the USC Center for AI in Society (CAIS), a collaboration between USC’s Suzanne Dworak-Peck School of Social Work and Viterbi School of Engineering. CAIS’s Coordinated Entry System Triage Tool Research and Refinement (CESTTRR) project is an exemplar of this philosophy in practice.

CESTTRR was a three-year project that investigated the triage tools used in Los Angeles to assess vulnerability among people experiencing homelessness and to assist in resource allocation decisions. Users of these tools suspected that the tools were not capturing the full vulnerability of certain groups of people experiencing homelessness, namely people of color. The goal of CESTTRR, then, was to create an updated assessment tool that could predict risk more accurately, more equitably, and more efficiently than the current triage system. That the project succeeded had everything to do with the study team’s commitment to a community-engaged research process characterized by the following three principles:

  • Principle #1: Establish Stakeholder Partnerships. CESTTRR partnered with three types of stakeholders. Its Community Advisory Board (CAB) included people with marginalized identities and lived experience of homelessness, as well as direct service providers who worked closely with this population. Its Core Planning Group comprised key system-level stakeholders such as the Los Angeles Homeless Services Authority (LAHSA) and the Los Angeles County Department of Mental Health. The study team also presented work in progress at public meetings and forums across Los Angeles.
  • Principle #2: Engage in Community-Informed Data Science. Community stakeholders like these must be meaningfully engaged in the research as it unfolds. In CESTTRR, community stakeholders informed refinements to the tool in many ways, including: 1) designating the adverse outcomes that the revised vulnerability assessment tool was being designed to predict, so that outcomes were chosen with racial/ethnic equity in mind; 2) incorporating equity adjustments into the algorithm so that errors (e.g., false negatives) did not fall disproportionately on particular racial/ethnic groups (a sketch of this kind of adjustment follows this list); and 3) reintegrating questions into the assessment tool that the algorithm had not selected as most predictive but that were nonetheless critical for making allocation decisions.
  • Principle #3: Prioritize Community-Informed Implementation. An accurate and equitable assessment tool is of little use if end-users face considerable challenges administering it. Thus, community stakeholders played an integral role in devising best practices to overcome perceived barriers to implementation. For example, the CAB revised the wording of most questions on the assessment to make them easier to understand and to reduce stigma and induced trauma. It also recommended using trauma-informed practices during data collection, such as not administering the tool at a first meeting, asking clients whether they would prefer that someone else administer it, and asking clients whether they needed breaks after difficult questions.
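
The article does not publish CESTTRR’s actual adjustment method, but one common post-processing approach to the kind of equity adjustment described in Principle #2 is to choose group-specific score thresholds that cap the false negative rate within each group. The sketch below is a minimal illustration of that idea using simulated data; the group names, scores, outcomes, and 10% target are all invented for the example and do not describe the real tool.

```python
import numpy as np

# Minimal sketch (hypothetical data): cap the false negative rate (FNR)
# per group by selecting a separate score threshold for each group.
rng = np.random.default_rng(0)

# Simulated risk scores in [0, 1] and observed adverse outcomes per group.
scores = {"group_a": rng.uniform(size=500), "group_b": rng.uniform(size=500)}
outcomes = {g: (rng.uniform(size=500) < s * 0.6).astype(int)
            for g, s in scores.items()}

def false_negative_rate(y_true, y_score, threshold):
    """Share of true adverse outcomes the tool fails to flag at this threshold."""
    flagged = y_score >= threshold
    positives = y_true == 1
    if positives.sum() == 0:
        return 0.0
    return float((positives & ~flagged).sum() / positives.sum())

def threshold_for_target_fnr(y_true, y_score, target_fnr):
    """Pick the highest threshold whose FNR stays at or below the target."""
    candidates = np.linspace(0.0, 1.0, 101)
    feasible = [t for t in candidates
                if false_negative_rate(y_true, y_score, t) <= target_fnr]
    return max(feasible)  # threshold 0 flags everyone, so this is never empty

TARGET_FNR = 0.10  # hypothetical: miss at most 10% of true adverse outcomes per group
thresholds = {g: threshold_for_target_fnr(outcomes[g], scores[g], TARGET_FNR)
              for g in scores}
for g, t in thresholds.items():
    fnr = false_negative_rate(outcomes[g], scores[g], t)
    print(f"{g}: threshold={t:.2f}, FNR={fnr:.2%}")
```

The design point is simple: rather than applying one global cutoff that may systematically miss high-need members of some groups, the adjustment audits errors group by group and tunes the decision rule until no group bears a disproportionate share of false negatives.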

As an associate director of CAIS, I am proud to be a part of a research collective that takes seriously the idea that “with great power comes great responsibility.” Our power stems from the knowledge, skills, and resources we possess to innovate with AI. And our responsibility? As the three principles that I described demonstrate, we take it as our social responsibility to direct our power toward problems that impact vulnerable communities and to platform community voices in that process. In our view, this is the only way to ensure that our most underserved and minoritized communities become beneficiaries of AI innovation and not its victims.

Lindsay Young, PhD, is an assistant professor of health communication at USC Annenberg, where she researches health inequities in minoritized populations using “Quantitative Criticalism” and social network analysis. With a commitment to community empowerment, she designs health interventions that leverage local networks and assets and holds an NIH Career Development Pathway to Independence Award.