Auditing ML Models for Individual Bias and Unfairness

AISTATS

Authors

Songkai Xue, Mikhail Yurochkin, Yuekai Sun

Published on

08/28/2020

We consider the task of auditing ML models for individual bias/unfairness. We formalize the task as an optimization problem and develop a suite of inferential tools for its optimal value. Our tools yield asymptotic confidence intervals and hypothesis tests that cover the target and control the Type I error rate exactly. To demonstrate the utility of our tools, we use them to reveal the gender and racial biases in Northpointe's COMPAS recidivism prediction instrument.
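
The audit statistic in the paper is the optimal value of a transport-based optimization problem, with inference built on its asymptotic distribution. As a rough, hedged illustration of the auditing idea only (not the authors' estimator), the sketch below flips a protected attribute, measures the resulting shift in a model's predicted risk, and forms a normal-approximation confidence interval and one-sided test for the mean gap. The column names, model, and synthetic data are all assumptions introduced for illustration.

```python
# Simplified, hypothetical individual-bias audit sketch (not the paper's method).
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for COMPAS-style data: features plus a binary protected attribute.
n = 2000
X = pd.DataFrame({
    "priors_count": rng.poisson(2.0, n),
    "age": rng.integers(18, 70, n),
    "protected": rng.integers(0, 2, n),   # hypothetical protected attribute
})
y = rng.integers(0, 2, n)                  # hypothetical recidivism labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# Audit: flip the protected attribute and measure the change in predicted risk.
X_flipped = X.copy()
X_flipped["protected"] = 1 - X_flipped["protected"]
delta = model.predict_proba(X_flipped)[:, 1] - model.predict_proba(X)[:, 1]

# Asymptotic 95% confidence interval for the mean prediction gap (normal approximation).
mean_gap = delta.mean()
se = delta.std(ddof=1) / np.sqrt(n)
ci = (mean_gap - 1.96 * se, mean_gap + 1.96 * se)

# One-sided test of H0: no systematic gap in predicted risk.
t_stat = mean_gap / se
p_value = 1 - stats.norm.cdf(t_stat)
print(f"mean gap: {mean_gap:.4f}, 95% CI: ({ci[0]:.4f}, {ci[1]:.4f}), p-value: {p_value:.3f}")
```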

Please cite our work using the BibTeX below.

@InProceedings{pmlr-v108-xue20a,
  title     = {Auditing ML Models for Individual Bias and Unfairness},
  author    = {Xue, Songkai and Yurochkin, Mikhail and Sun, Yuekai},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {4552--4562},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/xue20a/xue20a.pdf},
  url       = {https://proceedings.mlr.press/v108/xue20a.html},
}