Not Feeling It: AI’s Emotional Disconnect

by Julia Cohen

Photo credit: JakeOlimb/iStock

When we talk, we do more than exchange information – we convey emotions, values, and moral judgments. Yet despite rapid advances in AI, most language models today are tuned only to mimic our words, not our feelings. To describe this gap, researchers at USC’s Information Sciences Institute (ISI) have coined the term “affective alignment,” and they will present their research on the topic in the online portion of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL) on August 22, 2024.

Positional Alignment vs. Affective Alignment

“Positional alignment” refers to how closely the opinions, viewpoints, or positions taken by language models correspond to those held by humans on various topics. Zihao He, a computer science Ph.D. student at the USC Viterbi School of Engineering and a research assistant at ISI, wanted to look at something beyond positional alignment. In the paper “Whose Emotions and Moral Sentiments Do Language Models Reflect?”, He and his co-authors define and explore “affective alignment”: how closely the emotional tones or sentiments expressed by large language models (LLMs) match or resonate with those of humans in similar contexts.

While positional alignment is about aligning facts or stances, affective alignment is about capturing the underlying experience of feeling, emotion, attachment, or mood—what psychologists refer to as “affect.” This can be illustrated through two statements about masking during the COVID-19 pandemic:

  1. “Every mask we wear is a badge of honor, showing love and respect for our communities.”
  2. “It’s heartbreaking to see the impact of not wearing masks – lives lost, dreams deferred. Every choice to ditch the mask deepens the crisis.”

Both statements support mask-wearing, but the first conveys positive emotions, while the second expresses negative emotions. Although both are pro-masking, people perceive these statements very differently because of their emotional content. This demonstrates how critical affective alignment is in shaping how messages are received and understood. 

He said, “When the emotional tone matches human expectations, it fosters trust and a sense of reliability and improves the understanding and acceptance of AI-generated content.” To explore the impact of these differences on AI performance, He and the research team analyzed how effectively LLMs align with human emotions.

Understanding the Disconnect and the Implications

To investigate affective alignment, the research team analyzed 36 LLMs, examining their responses to prompts on contentious topics like COVID-19 mask mandates and abortion rights. They compared the AI-generated responses to real-world social media posts from diverse political perspectives. The results were revealing. 
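The article doesn’t spell out the team’s exact measures, but the comparison can be sketched in a few lines: score each model response and each human post with an emotion classifier, average the per-text scores into corpus-level emotion distributions, and check how closely the two distributions match. The Python sketch below is a hypothetical illustration under those assumptions; the emotion labels, the toy numbers, and the choice of Jensen-Shannon distance as the similarity measure are illustrative stand-ins, not the authors’ exact method.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical emotion label set; the real labels depend on whichever
# emotion classifier is used to score each post or model response.
EMOTIONS = ["joy", "sadness", "anger", "fear", "disgust", "surprise", "neutral"]

def corpus_distribution(per_text_scores):
    """Average per-text emotion scores into one normalized corpus-level distribution."""
    mean_scores = np.asarray(per_text_scores, dtype=float).mean(axis=0)
    return mean_scores / mean_scores.sum()

def affective_alignment(llm_scores, human_scores):
    """Return a 0-1 score: 1.0 means the two corpora have identical emotion profiles."""
    p = corpus_distribution(llm_scores)
    q = corpus_distribution(human_scores)
    # Jensen-Shannon distance (base 2) is symmetric and bounded in [0, 1].
    return 1.0 - jensenshannon(p, q, base=2)

if __name__ == "__main__":
    # Toy numbers for illustration only: each row holds one text's scores over EMOTIONS.
    llm_responses = [
        [0.60, 0.05, 0.05, 0.05, 0.05, 0.10, 0.10],
        [0.50, 0.10, 0.05, 0.05, 0.05, 0.15, 0.10],
    ]
    human_posts = [
        [0.10, 0.40, 0.20, 0.15, 0.05, 0.05, 0.05],
        [0.15, 0.35, 0.25, 0.10, 0.05, 0.05, 0.05],
    ]
    assert all(len(row) == len(EMOTIONS) for row in llm_responses + human_posts)
    print(f"affective alignment: {affective_alignment(llm_responses, human_posts):.2f}")
```

In this toy example the model responses skew toward joy while the human posts skew toward sadness and anger, so the alignment score comes out low, mirroring the kind of gap the researchers describe.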

While the models produced coherent responses, they often failed to capture the emotional nuances of human communication. “We observed significant misalignment in affect,” said He. Additionally, the models displayed a persistent liberal bias, even when directed to generate conservative viewpoints, underscoring a deeper issue in affective alignment.

This has significant implications as AI becomes more integrated into daily life, particularly in areas like content moderation and virtual assistance. Misalignment in affect could lead to biased or insensitive responses, raising ethical concerns. “An AI’s emotional bias could influence how it flags or suppresses content, affecting fairness and balance,” explained He. 

What’s Next?

Looking forward, Zihao He’s advisor, Kristina Lerman, a Senior Principal Scientist at ISI and a Research Professor in the Thomas Lord Department of Computer Science, said, “We are developing technology that can recreate more authentic human communication patterns, and also help us better understand people and how they talk to each other.” The ISI team also plans to expand their research to explore how factors like race, age, and culture influence emotional communication and to tackle topics beyond COVID and abortion.

“Being accepted to ACL is huge because affective alignment impacts how a model will be deployed in the real world,” said He. “And it’s exciting because we were the first to propose this idea of affect alignment and it means that the research community sees this as an issue worth studying.”

Published on August 21st, 2024
