Publications

Learning fairer interventions

Abstract

Explicit and implicit bias clouds human judgment, leading to discriminatory treatment of disadvantaged groups. A fundamental goal of automated decision making is to avoid the pitfalls of human judgment by developing decision strategies that apply equitably to all protected groups. Such automated methods, however, remain under-utilized for improving the fairness of interventions. In this paper, we propose a causal framework that learns optimal intervention policies from data subject to novel fairness constraints. We define two measures of treatment bias and infer treatment assignments that minimize the bias against protected groups while optimizing overall outcomes. We demonstrate that trade-offs exist when balancing fairness and overall benefit; however, allowing preferential treatment of protected groups in certain circumstances (affirmative action) can dramatically improve the overall benefit while …
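
To make the setup concrete, below is a minimal sketch, not the paper's implementation, of fairness-constrained treatment assignment posed as a linear program: maximize total expected benefit from estimated individual treatment effects, subject to a treatment budget and a cap on the gap in treatment rates between two groups. All names (tau, group, budget, eps) and the linear-programming formulation itself are illustrative assumptions, not the paper's notation or method.

```python
# A minimal sketch (assumptions, not the paper's method): choose treatment
# probabilities x_i in [0, 1] to maximize expected benefit subject to a
# budget and a bound on the gap in group treatment rates.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 200
tau = rng.normal(0.5, 1.0, size=n)     # hypothetical estimated treatment effects
group = rng.integers(0, 2, size=n)     # 0 = protected group, 1 = other
budget = 80                            # expected number of treatments allowed
eps = 0.05                             # allowed gap in group treatment rates

# Maximize sum_i tau_i * x_i; linprog minimizes, so negate the objective.
c = -tau

# Fairness: |rate(group 0) - rate(group 1)| <= eps as two linear inequalities.
n0, n1 = (group == 0).sum(), (group == 1).sum()
d = np.where(group == 0, 1.0 / n0, -1.0 / n1)
A_ub = np.vstack([np.ones(n), d, -d])  # budget row + two fairness rows
b_ub = np.array([budget, eps, eps])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
x = res.x
print(f"expected benefit: {tau @ x:.2f}")
print(f"treatment rates:  group0={x[group == 0].mean():.3f}, "
      f"group1={x[group == 1].mean():.3f}")
```

Letting eps grow recovers the unconstrained benefit-maximizing policy, so sweeping eps traces out the fairness/benefit trade-off the abstract describes.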

Date
July 26, 2022
Authors
Yuzi He, Keith Burghardt, Siyi Guo, Kristina Lerman
Book
Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
Pages
317–323