Research

Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models

NeurIPS

Authors

Adarsh K. Jeewajee, Leslie P. Kaelbling

Published on

July 9, 2020

Categories

NeurIPS

Undirected graphical models are compact representations of joint probability distributions over random variables. To solve inference tasks of interest, graphical models of arbitrary topology can be trained using empirical risk minimization. However, to solve inference tasks that were not seen during training, these models (EGMs) often need to be re-trained. Instead, we propose an inference-agnostic adversarial training framework which produces an infinitely large ensemble of graphical models (AGMs). The ensemble is optimized to generate data within the GAN framework, and inference is performed using a finite subset of these models. AGMs perform comparably with EGMs on inference tasks that the latter were specifically optimized for. Most importantly, AGMs show significantly better generalization to unseen inference tasks compared to EGMs, as well as to deep neural architectures such as GibbsNet and VAEAC, which allow arbitrary conditioning. Finally, AGMs allow fast data sampling, competitive with Gibbs sampling from EGMs.
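The core computational idea in the abstract, averaging conditional marginals over a finite subset of graphical models drawn from a learned generator, can be sketched in a few lines. The sketch below is illustrative only and is not the authors' implementation: the four-variable binary graph, the fixed random linear map standing in for the learned generator, the exact-enumeration inference routine, and all function names are assumptions made for compactness.

import itertools
import numpy as np

N_VARS = 4                                 # binary variables (illustrative size)
EDGES = [(0, 1), (1, 2), (2, 3), (0, 3)]   # hypothetical graph topology
Z_DIM = 8                                  # dimension of the noise vector

# A fixed random linear map stands in for the learned generator network.
GEN_W = np.random.default_rng(0).normal(size=(len(EDGES) * 4 + N_VARS * 2, Z_DIM))

def generate_model(z):
    """Map a noise vector z to the log-potentials of one pairwise MRF."""
    theta = GEN_W @ z
    unary = theta[: N_VARS * 2].reshape(N_VARS, 2)
    pairwise = theta[N_VARS * 2:].reshape(len(EDGES), 2, 2)
    return unary, pairwise

def conditional_marginals(unary, pairwise, evidence):
    """Exact conditional marginals p(x_i = 1 | evidence) by enumeration."""
    probs, norm = np.zeros(N_VARS), 0.0
    for x in itertools.product([0, 1], repeat=N_VARS):
        if any(x[i] != v for i, v in evidence.items()):
            continue                       # inconsistent with the evidence
        score = sum(unary[i, x[i]] for i in range(N_VARS))
        score += sum(pairwise[e, x[i], x[j]] for e, (i, j) in enumerate(EDGES))
        w = np.exp(score)
        norm += w
        probs += w * np.array(x)
    return probs / norm

def ensemble_inference(evidence, n_models=10, seed=1):
    """Average conditional marginals over a finite subset of sampled models."""
    rng = np.random.default_rng(seed)
    marginals = [conditional_marginals(*generate_model(rng.normal(size=Z_DIM)),
                                       evidence)
                 for _ in range(n_models)]
    return np.mean(marginals, axis=0)

# Example: marginals of all variables given x_0 = 1 and x_2 = 0.
print(ensemble_inference(evidence={0: 1, 2: 0}))

In the paper, the generator producing model parameters is trained adversarially against a discriminator, and inference in each sampled model would use a proper message-passing routine rather than enumeration; the enumeration here only keeps the example self-contained.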

This paper was presented as a poster at the 2020 Conference on Neural Information Processing Systems (NeurIPS).

Please cite our work using the BibTeX below.

@misc{jeewajee2020adversariallylearned,
      title={Adversarially-learned Inference via an Ensemble of Discrete Undirected Graphical Models}, 
      author={Adarsh K. Jeewajee and Leslie P. Kaelbling},
      year={2020},
      eprint={2007.05033},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}