Seeing What a GAN Cannot Generate
Authors
- David Bau
- Jun-Yan Zhu
- Jonas Wulff
- William Peebles
- Hendrik Strobelt
- Bolei Zhou
- Antonio Torralba
Published on
10/24/2019
Abstract
Despite the success of Generative Adversarial Networks (GANs), mode collapse remains a serious issue during GAN training. Worse yet, little work has focused on understanding and quantifying which modes have been dropped by a model. In this work, we take a first step and present two analytic methods for systematically studying this phenomenon. First, we deploy a semantic segmentation network to compare the distribution of segmented objects in the generated images with the target distribution in the training set. Differences in segmentation statistics reveal object classes that are omitted by a GAN. Second, given the identified omitted object classes, we further visualize what the GAN is doing instead. In particular, we compare specific differences between individual photos and their approximate reconstructions by a GAN model. To this end, we propose a new image reconstruction method based on inverting the layers of a generator. Finally, we use our framework to analyze several state-of-the-art GANs trained on multiple datasets and identify the typical failure cases of existing models.
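The two analyses described above can be summarized in a short sketch. Below is a minimal, hypothetical PyTorch illustration of both ideas: comparing per-class segmentation statistics between real and generated images, and reconstructing a photo by optimizing an intermediate layer representation rather than the input latent. The callables segment and g_late and the initial representation r0 are assumptions standing in for an off-the-shelf segmenter and a generator split at some layer; this is a sketch of the ideas, not the authors' released implementation.

import torch
import torch.nn.functional as F

def mean_class_fractions(images, segment, num_classes):
    """Mean fraction of pixels per semantic class over a batch of images.

    `segment` is assumed to map an image batch to per-pixel integer class
    labels of shape [B, H, W]; any off-the-shelf segmenter could play this
    role.
    """
    with torch.no_grad():
        labels = segment(images)                       # [B, H, W] int64
        onehot = F.one_hot(labels, num_classes).float()
        return onehot.mean(dim=(0, 1, 2))              # [num_classes]

def segmentation_gap(real_images, fake_images, segment, num_classes):
    """Per-class difference in mean segmented area. Strongly negative
    entries flag object classes the generator under-produces relative
    to the training distribution, i.e. candidate dropped modes."""
    real_stats = mean_class_fractions(real_images, segment, num_classes)
    fake_stats = mean_class_fractions(fake_images, segment, num_classes)
    return fake_stats - real_stats

def invert_layers(g_late, target, r0, steps=500, lr=0.05):
    """Layer-wise inversion sketch: instead of searching the input latent z,
    optimize an intermediate representation r fed to the later layers
    `g_late` so the reconstruction g_late(r) approaches the target photo."""
    r = r0.clone().requires_grad_(True)
    opt = torch.optim.Adam([r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pixel reconstruction loss; the paper additionally uses
        # perceptual (feature-space) terms.
        loss = F.mse_loss(g_late(r), target)
        loss.backward()
        opt.step()
    return r.detach()

Comparing the target photo against g_late(invert_layers(...)) then reveals, object by object, what the model substitutes for the content it cannot generate.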
Please cite our work using the BibTeX below.
@inproceedings{Bau_2019,
  title={Seeing What a GAN Cannot Generate},
  author={Bau, David and Zhu, Jun-Yan and Wulff, Jonas and Peebles, William and Zhou, Bolei and Strobelt, Hendrik and Torralba, Antonio},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},
  publisher={IEEE},
  year={2019},
  month={Oct},
  ISBN={9781728148038},
  DOI={10.1109/ICCV.2019.00460},
  url={http://dx.doi.org/10.1109/ICCV.2019.00460}
}