Publications

Biased Bots: An Empirical Demonstration of How AI Bias Could Compromise Mental Healthcare

Abstract

Background

Artificial intelligence (AI) applications for mental health have proliferated in recent years and show promise for increasing the reach, scope, and impact of mental healthcare. However, biases in algorithms designed to assess and treat mental health problems pose risks to mental health equity. This cross-sectional study investigates bias in algorithms for detecting stress from mobile devices and its implications for mental health equity.

Methods

A diverse sample of young adults (N = 212) carried smartphones, wore physiological sensors, and completed hourly surveys assessing their subjective stress for 24 hours. We then developed a Twin Neural Network machine learning (ML) model to detect hourly stress from the smartphone and wearable data and evaluated model performance across gender and ethnic/racial groups.

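The abstract does not specify the model architecture, so the following is only a minimal, illustrative sketch of the general twin (Siamese) network idea: a shared encoder maps two input windows into an embedding space, and a contrastive loss pulls same-class pairs together and pushes different-class pairs apart. The feature dimensions, layer sizes, and loss formulation here are assumptions, not the paper's method.

```python
# Illustrative twin-network sketch in PyTorch; all sizes are made up.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwinStressNet(nn.Module):
    """One encoder shared by both branches (the 'twin' property)."""

    def __init__(self, n_features: int = 64, embed_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        # Both inputs pass through the same weights.
        return self.encoder(x1), self.encoder(x2)


def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    dist = F.pairwise_distance(z1, z2)
    loss_same = same_label * dist.pow(2)
    loss_diff = (1 - same_label) * F.relu(margin - dist).pow(2)
    return (loss_same + loss_diff).mean()


# Toy usage with random "hourly sensor feature" pairs.
model = TwinStressNet()
x1, x2 = torch.randn(8, 64), torch.randn(8, 64)
same = torch.randint(0, 2, (8,)).float()  # 1 = same stress label
z1, z2 = model(x1, x2)
loss = contrastive_loss(z1, z2, same)
loss.backward()
```
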
Findings

The model performed moderately well overall yet showed significant variation in performance, ranging from poor to good, across gender and ethnic/racial groups. In particular, the model evidenced lower performance for women than for men and overestimated the frequency of stress episodes for Hispanic/Latina women.

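One common way to surface disparities like these is to compute performance metrics separately within each demographic subgroup. The sketch below, with synthetic data and hypothetical group labels, shows per-group AUC plus false-positive rate, which captures over-detection of stress episodes of the kind reported above. It is not the paper's exact analysis.

```python
# Hedged sketch: per-subgroup evaluation of a binary stress classifier.
# Group names, threshold, and data are illustrative placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
n = 500
groups = rng.choice(["group_a", "group_b"], size=n)  # hypothetical subgroups
y_true = rng.integers(0, 2, size=n)                  # hourly stress (0/1)
y_score = rng.random(n)                              # predicted probability

for g in np.unique(groups):
    mask = groups == g
    auc = roc_auc_score(y_true[mask], y_score[mask])
    preds = (y_score[mask] >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(
        y_true[mask], preds, labels=[0, 1]
    ).ravel()
    fpr = fp / (fp + tn)  # over-detection of stress in this subgroup
    print(f"{g}: AUC={auc:.2f}, FPR={fpr:.2f}")
```
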
Interpretation

Findings highlight the presence of bias in AI applications for mental health and underscore the need for cautious interpretation of ML outcomes in historically underrepresented groups. The discussion focuses on the implications of AI bias for mental health and on the importance of developing methods that combine AI and social justice perspectives to ensure the implementation of equitable mental healthcare.

Funding

This project is based on work …

Date
December 4, 2024
Authors
Adela Timmons, Kexin Feng, Kayla Carta, Sierra Walters, Grace Jumonville, Alyssa Carrasco, Gabrielle Freitag, Daniela Romero, Gayla Margolin, Shrikanth Narayanan, Jonathan Comer, Matthew Ahle
Publisher
OSF