Individually Fair Rankings

We develop an algorithm to train individually fair learning-to-rank (LTR) models. The proposed approach ensures items from minority groups appear alongside similar items from majority groups. This notion of fair ranking is based on the definition of individual fairness from supervised learning and is more nuanced than prior fair LTR approaches that simply ensure the ranking model provides underrepresented items with a basic level of exposure. The crux of our method is an optimal transport-based regularizer that enforces individual fairness and an efficient algorithm for optimizing the regularizer. We show that our approach leads to certifiably individually fair LTR models and demonstrate the efficacy of our method on ranking tasks subject to demographic biases.
One-sentence Summary: We present an algorithm for training individually fair learning-to-rank systems using optimal transport tools.
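To make the idea concrete, here is a minimal sketch (not the authors' implementation) of an optimal-transport-style fairness penalty for a ranker. It assumes a simple setting with two groups and equal-size samples, where the 1-D Wasserstein distance between the score distributions the model assigns to each group has a closed form: the mean absolute difference of the sorted scores. A small penalty means comparable items from either group receive comparable scores, and hence comparable ranks; the helper names (`wasserstein_1d`, `fairness_regularizer`) are illustrative, not from the paper.

```python
import numpy as np

def wasserstein_1d(a, b):
    # Exact 1-D optimal transport (W1) between two equal-size empirical
    # distributions: the average absolute difference of the sorted samples.
    a, b = np.sort(a), np.sort(b)
    return np.mean(np.abs(a - b))

def fairness_regularizer(scores, group):
    # Penalize the transport cost between the score distributions the
    # ranker assigns to the two groups. Added to the usual ranking loss,
    # this pushes similar items from either group toward similar scores.
    s0 = np.sort(scores[group == 0])
    s1 = np.sort(scores[group == 1])
    n = min(len(s0), len(s1))
    # Truncate to equal sizes so the closed-form 1-D OT applies
    # (a crude stand-in for a proper unbalanced-OT treatment).
    return wasserstein_1d(s0[:n], s1[:n])

# Toy example: a biased scorer systematically shifts group-1 scores down.
scores = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.2])
group = np.array([0, 0, 0, 1, 1, 1])
penalty = fairness_regularizer(scores, group)
print(round(penalty, 2))  # → 0.5
```

In the paper's full method the regularizer operates on ranking distributions rather than raw scores and is optimized with an efficient algorithm; this sketch only conveys the shape of an OT-based penalty.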

This paper has been published at ICLR 2021

Please cite our work using the BibTeX below.

@inproceedings{bower2021individually,
  title={Individually Fair Rankings},
  author={Amanda Bower and Hamid Eftekhari and Mikhail Yurochkin and Yuekai Sun},
  booktitle={International Conference on Learning Representations},
  year={2021}
}