Fairness in Streaming Submodular Maximization: Algorithms and Hardness

NeurIPS

Authors

  • Marwa El Halabi
  • Slobodan Mitrović
  • Ashkan Norouzi-Fard
  • Jakab Tardos
  • Jakub Tarnawski

Published on

10/14/2020

Submodular maximization has become established as the method of choice for the task of selecting representative and diverse summaries of data. However, if datapoints have sensitive attributes such as gender or age, such machine learning algorithms, left unchecked, are known to exhibit bias: under- or over-representation of particular groups. This has made the design of fair machine learning algorithms increasingly important. In this work we address the question: Is it possible to create fair summaries for massive datasets? To this end, we develop the first streaming approximation algorithms for submodular maximization under fairness constraints, for both monotone and non-monotone functions. We validate our findings empirically on exemplar-based clustering, movie recommendation, DPP-based summarization, and maximum coverage in social networks, showing that fairness constraints do not significantly impact utility.
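To make the problem setting concrete, here is a minimal sketch (not the paper's actual algorithm) of a threshold-based streaming rule for monotone submodular maximization with per-group caps as a simple form of fairness constraint. The coverage function, the toy data, and all parameter names below are illustrative assumptions.

```python
def coverage(selected, covers):
    """Monotone submodular utility: number of distinct elements covered."""
    covered = set()
    for item in selected:
        covered |= covers[item]
    return len(covered)

def fair_stream_select(stream, covers, group, caps, k, threshold):
    """Single pass over the stream: keep an item if its marginal gain
    meets `threshold`, its group's cap is not exceeded, and the summary
    has fewer than k items. A sketch only, not the paper's method."""
    summary, per_group = [], {}
    for item in stream:
        g = group[item]
        if len(summary) >= k or per_group.get(g, 0) >= caps[g]:
            continue
        gain = coverage(summary + [item], covers) - coverage(summary, covers)
        if gain >= threshold:
            summary.append(item)
            per_group[g] = per_group.get(g, 0) + 1
    return summary

# Hypothetical toy data: items cover sets of "topics"; each item has a group.
covers = {"a": {1, 2}, "b": {2, 3}, "c": {4}, "d": {1, 2, 3, 4}}
group = {"a": "A", "b": "A", "c": "B", "d": "A"}
caps = {"A": 1, "B": 1}
summary = fair_stream_select(["a", "b", "c", "d"], covers, group, caps, k=2, threshold=1)
# → ["a", "c"]: item "b" is rejected because group A's cap is already used.
```

The per-group caps force the summary to include the underrepresented group B even though "d" alone would cover more topics; this is the kind of representation trade-off the paper studies, with the full algorithms handling lower bounds and non-monotone functions as well.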

This paper has been published as a poster at the 2020 Neural Information Processing Systems (NeurIPS) conference.

Please cite our work using the BibTeX below.

@misc{halabi2020fairness,
      title={Fairness in Streaming Submodular Maximization: Algorithms and Hardness}, 
      author={Marwa El Halabi and Slobodan Mitrović and Ashkan Norouzi-Fard and Jakab Tardos and Jakub Tarnawski},
      year={2020},
      eprint={2010.07431},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}