Robust Federated Learning: The Case of Affine Distribution Shifts
Authors
- Amirhossein Reisizadeh
- Farzan Farnia
- Ramtin Pedarsani
- Ali Jadbabaie
Published on
06/16/2020
Federated learning is a distributed paradigm that aims to train models on samples distributed across multiple users in a network while keeping the samples on users' devices, with the aims of efficiency and protecting users' privacy. In such settings, the training data is often statistically heterogeneous and manifests various distribution shifts across users, which degrades the performance of the learnt model. The primary goal of this paper is to develop a robust federated learning algorithm that achieves satisfactory performance against distribution shifts in users' samples. To achieve this goal, we first consider a structured affine distribution shift in users' data that captures the device-dependent data heterogeneity in federated settings. This perturbation model is applicable to various federated learning problems such as image classification, where the images undergo device-dependent imperfections, e.g. different intensity, contrast, and brightness. To address affine distribution shifts across users, we propose a Federated Learning framework Robust to Affine distribution shifts (FLRA) that is provably robust against affine Wasserstein shifts to the distribution of observed samples. To solve FLRA's distributed minimax problem, we propose a fast and efficient optimization method and provide convergence guarantees via a Gradient Descent Ascent (GDA) method. We further prove generalization error bounds for the learnt classifier, showing proper generalization from the empirical distribution of samples to the true underlying distribution. We perform several numerical experiments to empirically support FLRA. We show that an affine distribution shift indeed suffices to significantly degrade the performance of the learnt classifier on a new test user, and that our proposed algorithm achieves a significant gain over standard federated learning and adversarial training methods.
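To make the minimax structure concrete, below is a minimal sketch, under our own assumptions, of one local gradient descent-ascent step of the kind a distributed minimax problem like FLRA's calls for: the inner player perturbs samples by an affine shift x → Λx + δ, penalized toward the identity map, and the outer player updates the model on the shifted samples. The names and step sizes (local_gda_step, eta_w, eta_adv, gamma) are illustrative, not the paper's; the actual FLRA updates and server aggregation are given in the paper.

```python
# A hypothetical sketch of one gradient descent-ascent (GDA) step for an
# FLRA-style minimax objective; it is not the authors' exact algorithm.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_gda_step(model, x, y, lam, delta, eta_w=0.01, eta_adv=0.1, gamma=1.0):
    d = x.shape[1]
    # Ascent: increase the loss over (Lambda, delta), keeping the affine
    # shift close to the identity via a quadratic (Wasserstein-style) penalty.
    x_shift = x @ lam.T + delta
    penalty = gamma * ((lam - torch.eye(d)).pow(2).sum() + delta.pow(2).sum())
    adv_obj = F.cross_entropy(model(x_shift), y) - penalty
    g_lam, g_delta = torch.autograd.grad(adv_obj, (lam, delta))
    with torch.no_grad():
        lam += eta_adv * g_lam
        delta += eta_adv * g_delta

    # Descent: update the model weights on the freshly shifted samples.
    loss = F.cross_entropy(model(x @ lam.T + delta), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= eta_w * p.grad
    return loss.item()

# Toy usage: note that brightness/contrast changes are the special case
# Lambda = c*I, delta = b*1 of this perturbation model.
d, n = 8, 32
model = nn.Linear(d, 3)
x, y = torch.randn(n, d), torch.randint(0, 3, (n,))
lam = torch.eye(d, requires_grad=True)      # per-device shift, starts at identity
delta = torch.zeros(d, requires_grad=True)  # per-device offset, starts at zero
print(local_gda_step(model, x, y, lam, delta))
```

In a federated setting, each user would run such local steps on its own shifted data, with the server periodically averaging the model weights across users.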
This paper has been published as a poster at the 2020 Neural Information Processing Systems (NeurIPS) conference.
Please cite our work using the BibTeX below.
@misc{reisizadeh2020robust,
  title={Robust Federated Learning: The Case of Affine Distribution Shifts},
  author={Amirhossein Reisizadeh and Farzan Farnia and Ramtin Pedarsani and Ali Jadbabaie},
  year={2020},
  eprint={2006.08907},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}