GAN Dissection: Visualizing and Understanding Generative Adversarial Networks

Generative Models

Authors

David Bau, Jun-Yan Zhu, Hendrik Strobelt, Bolei Zhou, Joshua B. Tenenbaum, William T. Freeman, Antonio Torralba

Published on

11/26/2018

Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs are largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit, object, and scene levels. We first identify a group of interpretable units that are closely related to object concepts using a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surroundings by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in a scene. We provide open source interpretation tools to help researchers and practitioners better understand their GAN models.
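The intervention step described above amounts to forcing the activations of a chosen set of units in an intermediate feature map to a constant value (zero to ablate a concept, a large value to insert it) and re-rendering the image. Below is a minimal sketch of that idea in PyTorch, assuming a generator that can be split into two halves around the layer of interest; the module definitions, layer shapes, and unit indices are hypothetical placeholders for illustration and are not taken from the released dissection tools.

    # Minimal sketch of a dissection-style intervention (assumed setup, not the released toolkit).
    import torch
    import torch.nn as nn


    def intervene(g_front, g_back, z, units, value=0.0):
        """Run the generator while forcing the selected feature-map channels
        ("units") to a constant value: 0.0 ablates the concept, a high value inserts it."""
        with torch.no_grad():
            feats = g_front(z)               # intermediate activations, shape (N, C, H, W)
            feats[:, units, :, :] = value    # causal intervention on the chosen units
            return g_back(feats)             # render the modified image


    # Usage with placeholder modules; a real experiment would load a trained GAN
    # and split it at the layer being dissected.
    g_front = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU())
    g_back = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
    z = torch.randn(1, 128, 4, 4)

    original = g_back(g_front(z))
    ablated = intervene(g_front, g_back, z, units=[3, 17, 42], value=0.0)

Comparing the original output against the ablated (or inserted) output over many samples is what lets the causal effect of a unit set on an object class be quantified.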

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/abs-1811-10597,
  author    = {David Bau and
               Jun{-}Yan Zhu and
               Hendrik Strobelt and
               Bolei Zhou and
               Joshua B. Tenenbaum and
               William T. Freeman and
               Antonio Torralba},
  title     = {{GAN} Dissection: Visualizing and Understanding Generative Adversarial
               Networks},
  journal   = {CoRR},
  volume    = {abs/1811.10597},
  year      = {2018},
  url       = {http://arxiv.org/abs/1811.10597},
  archivePrefix = {arXiv},
  eprint    = {1811.10597},
  timestamp = {Fri, 30 Nov 2018 12:44:28 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1811-10597.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}