Publications

A geometric solution to fair representations

Abstract

To reduce human error and prejudice, many high-stakes decisions have been turned over to machine algorithms. However, recent research suggests that this does not remove discrimination and can perpetuate harmful stereotypes. While algorithms have been developed to improve fairness, they typically face at least one of three shortcomings: they are not interpretable, their prediction quality deteriorates quickly compared to unbiased equivalents, and they are not easily transferable across models (e.g., methods to reduce bias in random forests cannot be extended to neural networks). To address these shortcomings, we propose a geometric method that removes correlations between data and any number of protected variables. Further, we can control the strength of debiasing through an adjustable parameter to address the trade-off between prediction …
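For intuition, here is a minimal sketch of what such a geometric decorrelation could look like, assuming the core operation is a linear projection that removes the component of the data lying in the subspace spanned by the protected variables. This is an illustration, not the paper's implementation; the function name debias and the strength parameter alpha are hypothetical.

import numpy as np

def debias(X, Z, alpha=1.0):
    """Remove linear correlation between features X and protected variables Z.

    X: (n, d) data matrix; Z: (n, k) protected variables.
    alpha in [0, 1] controls debiasing strength
    (0 = data unchanged, 1 = full projection). Hypothetical sketch.
    """
    # Center both matrices so decorrelation reduces to orthogonality.
    Xc = X - X.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    # Orthonormal basis for the column space of the protected variables.
    Q, _ = np.linalg.qr(Zc)
    # Component of X lying in the protected subspace.
    X_proj = Q @ (Q.T @ Xc)
    # Subtract a fraction alpha of that component.
    return Xc - alpha * X_proj

With alpha = 0 the data are returned unchanged, while alpha = 1 removes all linear correlation with Z; intermediate values trade debiasing strength against fidelity to the original features, mirroring the adjustable parameter described in the abstract.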

Date
February 7, 2020
Authors
Yuzi He, Keith Burghardt, Kristina Lerman
Book
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Pages
279-285