A Platform Problem: Hate Speech and Bots Still Thriving on X
A study published November 27, 2024, in PLOS ONE sheds light on ongoing challenges with hate speech and inauthentic accounts on X (formerly Twitter). Conducted by researchers at the USC Viterbi Information Sciences Institute, the study analyzed English-language posts from January 2022 to June 2023. The findings are intended to inform moderation strategies and policies that create healthier online environments.
The Numbers Show Hate Speech Is on the Rise
The study found that hate speech increased across several categories over the study period:
- Hate speech overall increased by 50%.
- Use of certain terms, such as transphobic slurs, increased by 260%.
- Homophobic tweets rose by 30%.
- Racist tweets rose by 42%.
Engagement also grew: hate speech posts received 70% more likes per day, compared with an average increase of 22% more likes per day for random English-language tweets. While the study does not claim causation, author Keith Burghardt said, “The policies to reduce harmful content appear not to be sufficiently effective.”
To measure hate speech, the researchers used a two-step process: they first identified potentially harmful posts using a lexicon of specific terms, then refined the dataset to focus on inflammatory content. They also compared hate speech trends to a baseline of neutral posts to account for general changes in platform activity.
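For illustration only, here is a minimal Python sketch of how such a two-step filter and baseline comparison might look. The lexicon contents, the secondary scorer, and the 0.5 threshold are assumptions made for the example, not the researchers' actual pipeline.

```python
from collections import Counter
from datetime import date

# Hypothetical lexicon of harmful terms; the study's actual term list is not reproduced here.
HATE_LEXICON = {"slur_a", "slur_b"}

def lexicon_match(posts):
    """Step 1: flag posts containing at least one lexicon term."""
    return [p for p in posts if HATE_LEXICON & set(p["text"].lower().split())]

def refine(candidates, score_fn, threshold=0.5):
    """Step 2: keep only posts that a secondary scorer rates as inflammatory.
    score_fn stands in for whatever classifier the researchers actually used."""
    return [p for p in candidates if score_fn(p["text"]) >= threshold]

def daily_counts(posts):
    """Posts per day, so hate speech volume can be compared to a neutral baseline."""
    return dict(sorted(Counter(p["date"] for p in posts).items()))

# Toy data and a dummy scorer, for illustration only.
posts = [
    {"text": "nice weather today", "date": date(2022, 1, 5)},
    {"text": "slur_a example post", "date": date(2022, 1, 5)},
    {"text": "quoting slur_b to condemn it", "date": date(2022, 1, 6)},
]

def dummy_score(text):
    return 0.9 if "example" in text else 0.1

hateful = refine(lexicon_match(posts), dummy_score)
print(daily_counts(hateful))  # daily volume of flagged hate speech posts
print(daily_counts(posts))    # baseline daily volume of all posts
```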
What About Bots? Inauthentic Accounts Remain Active
The study also examined inauthentic accounts, often associated with bots or coordinated campaigns, and found no significant reduction in activity. In some cases, such as cryptocurrency promotion, bot activity appeared to increase. “Accounts that post the same content or act at almost exactly the same time can amplify harmful content, and these coordinated accounts can be associated with information operations,” Burghardt said.
Using methods such as analyzing patterns in hashtag usage and repost timing, the researchers identified networks of accounts working together to amplify specific messages.
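As a rough, hypothetical sketch of the repost-timing idea, the Python snippet below flags pairs of accounts that repost the same items within a short window of each other. The field names, the 60-second window, and the minimum-overlap threshold are illustrative assumptions, not the study's actual parameters.

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(reposts, window_seconds=60, min_shared=3):
    """Flag account pairs that repost the same content nearly simultaneously.

    reposts: list of dicts with 'account', 'content_id', and 'timestamp'
    (seconds since epoch). Thresholds here are illustrative only.
    """
    by_content = defaultdict(list)
    for r in reposts:
        by_content[r["content_id"]].append((r["account"], r["timestamp"]))

    pair_hits = defaultdict(int)
    for events in by_content.values():
        for (a1, t1), (a2, t2) in combinations(sorted(events, key=lambda e: e[1]), 2):
            if a1 != a2 and abs(t1 - t2) <= window_seconds:
                pair_hits[tuple(sorted((a1, a2)))] += 1

    # Pairs that co-post near-simultaneously on many items look coordinated.
    return {pair for pair, n in pair_hits.items() if n >= min_shared}

# Toy usage: acct_x and acct_y repost the same items seconds apart; acct_z lags far behind.
reposts = [
    {"account": a, "content_id": c, "timestamp": 1000 * c + offset}
    for c in range(1, 5)
    for a, offset in (("acct_x", 0), ("acct_y", 5), ("acct_z", 4000))
]
print(coordinated_pairs(reposts))  # {('acct_x', 'acct_y')}
```

In practice, such pairs would then be grouped into larger networks and examined alongside signals like shared hashtag patterns before any set of accounts is treated as coordinated.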
What’s Next?
Burghardt emphasized the constructive goal of their research. “This isn’t just about identifying harmful trends—it’s about using data to make things better and creating a system where these platforms can be audited fairly,” he said. The team has already expanded their research to explore how other platforms handle trends like hate speech and inauthentic activity. “We’re doing similar work on other platforms, including Reddit, BlueSky, and Telegram,” he explained.
Burghardt also stressed the need for greater transparency from social media companies. “If social media remains a black box where we can’t see what’s happening, it becomes very difficult to know where improvements are needed,” he said. He compared the value of transparency to government audits, which allow the public to hold institutions accountable and track progress.
“The ability to audit platforms fairly is essential for understanding whether their policies are effective and whether they are contributing to harmful trends like the spread of disinformation or hate speech,” Burghardt added, noting that this level of openness would benefit both researchers and the platforms themselves.