Who Should Predict? Exact Algorithms For Learning to Defer to Humans
Authors
- Hussein Mozannar
- Hunter Lang
- Dennis Wei
- Prasanna Sattigeri
- Subhro Das
- David Sontag
Published on
04/27/2023
Abstract
Automated AI classifiers should be able to defer predictions to a human decision maker to ensure more accurate outcomes. In this work, we jointly train a classifier with a rejector, which decides on each data point whether the classifier or the human should predict. We show that prior approaches can fail to find a human-AI system with low misclassification error even when there exists a linear classifier and rejector pair with zero error (the realizable setting). We prove that finding a linear pair with low error is NP-hard even when the problem is realizable. To complement this negative result, we give a mixed-integer linear programming (MILP) formulation that solves the problem optimally in the linear setting. However, the MILP scales only to moderately sized problems. We therefore provide a novel surrogate loss function that is realizable-consistent and performs well empirically. We test our approaches on a comprehensive set of datasets and compare against a wide range of baselines.
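For intuition, here is a minimal sketch of the test-time behavior of such a classifier-rejector pair: the rejector decides, per example, whether the classifier or the human predicts. This is an illustration only, not the paper's implementation; the weights `w_clf` and `w_rej` and the `human_prediction` input are hypothetical placeholders.

```python
import numpy as np

def predict_with_deferral(x, w_clf, w_rej, human_prediction):
    """Route one example either to a linear classifier or to the human.

    x: feature vector, shape (d,)
    w_clf: classifier weights, shape (num_classes, d)
    w_rej: rejector weights, shape (d,); a positive score means "defer"
    human_prediction: the human's label for x (used only on deferral)
    """
    if w_rej @ x > 0:
        # Rejector defers: the human decision maker predicts.
        return human_prediction
    # Otherwise the classifier predicts the highest-scoring class.
    return int(np.argmax(w_clf @ x))
```

Training searches jointly over `w_clf` and `w_rej` to minimize the overall misclassification error of the human-AI system; the paper shows this joint search is NP-hard even in the realizable linear setting, solves it exactly with a MILP at moderate scale, and otherwise optimizes the proposed realizable-consistent surrogate loss.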
Please cite our work using the BibTeX below.
@InProceedings{pmlr-v206-mozannar23a,
  title     = {Who Should Predict? Exact Algorithms For Learning to Defer to Humans},
  author    = {Mozannar, Hussein and Lang, Hunter and Wei, Dennis and Sattigeri, Prasanna and Das, Subhro and Sontag, David},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {10520--10545},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/mozannar23a/mozannar23a.pdf},
  url       = {https://proceedings.mlr.press/v206/mozannar23a.html},
}