Unsupervised learning with contrastive latent variable models

Unsupervised Learning

Published on 11/14/2018

In unsupervised learning, dimensionality reduction is an important tool for data exploration and visualization. Because these aims are typically open-ended, it can be useful to frame the problem as searching for patterns that are enriched in one dataset relative to another. Such dataset pairs occur commonly, for instance a population of interest vs. a control group, or signal vs. signal-free recordings. However, few methods operate on sets of data, as opposed to individual data points or sequences. Here, we present a probabilistic model for dimensionality reduction that discovers structure enriched in a target dataset relative to a background dataset. The data in these sets need not be paired or grouped beyond set membership. By using a probabilistic model in which some structure is shared between the two datasets and some is unique to the target dataset, we are able to recover interesting structure in the latent space of the target dataset. The method also has the advantages of a probabilistic model: it allows for the incorporation of prior information, handles missing data, and can be generalized to different distributional assumptions. We describe several possible variations of the model and demonstrate the application of the technique to denoising, feature selection, and subgroup discovery settings.
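To make the shared/target-specific decomposition concrete, the sketch below fits the simplest linear Gaussian instance of such a contrastive latent variable model: background points load only on shared factors S, while target points additionally load on contrastive factors W, so the background covariance is SSᵀ + σ²I and the target covariance adds WWᵀ. This is a minimal illustration, not the authors' implementation; the dimensions (d, k, t), the synthetic data, and the choice of direct likelihood optimization with L-BFGS-B are our own assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative sizes: d observed features, k shared latent dims,
# t target-specific (contrastive) latent dims.
d, k, t = 10, 2, 2
n_target, n_background = 500, 500

# Synthetic data: both sets share the factors S_true; only the
# target set also loads on the contrastive factors W_true.
S_true = rng.normal(size=(d, k))
W_true = rng.normal(size=(d, t))
X = (rng.normal(size=(n_target, k)) @ S_true.T
     + rng.normal(size=(n_target, t)) @ W_true.T
     + 0.1 * rng.normal(size=(n_target, d)))        # target set
Y = (rng.normal(size=(n_background, k)) @ S_true.T
     + 0.1 * rng.normal(size=(n_background, d)))    # background set

Cx = X.T @ X / n_target        # sample second-moment matrices
Cy = Y.T @ Y / n_background

def nll(params):
    """Negative log-likelihood of both sets under the linear Gaussian
    model: y ~ N(0, SS' + s2*I), x ~ N(0, SS' + WW' + s2*I)."""
    S = params[:d * k].reshape(d, k)
    W = params[d * k:d * k + d * t].reshape(d, t)
    s2 = np.exp(params[-1])                 # noise variance, kept positive
    cov_y = S @ S.T + s2 * np.eye(d)        # background covariance
    cov_x = cov_y + W @ W.T                 # target adds contrastive part

    def term(C, cov, n):
        _, logdet = np.linalg.slogdet(cov)
        return 0.5 * n * (logdet + np.trace(np.linalg.solve(cov, C)))

    return term(Cx, cov_x, n_target) + term(Cy, cov_y, n_background)

# Fit by direct likelihood optimization (finite-difference gradients;
# fine at this scale, not how one would fit a large model).
x0 = 0.1 * rng.normal(size=d * k + d * t + 1)
res = minimize(nll, x0, method="L-BFGS-B")
W_hat = res.x[d * k:d * k + d * t].reshape(d, t)

# The recovered contrastive subspace should align with span(W_true):
# cosines of the principal angles between the column spaces near 1.
qa, _ = np.linalg.qr(W_hat)
qb, _ = np.linalg.qr(W_true)
angles = np.linalg.svd(qa.T @ qb, compute_uv=False)
print("cosines of principal angles:", np.round(angles, 3))
```

This covariance-based maximum likelihood fit only covers the base linear Gaussian case; the richer variants mentioned in the abstract (priors, missing data, other distributional assumptions) require the model's full probabilistic inference machinery rather than this closed-form marginal likelihood.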

Please cite our work using the BibTeX below.

@article{Severson_2019,
   title={Unsupervised Learning with Contrastive Latent Variable Models},
   volume={33},
   ISSN={2159-5399},
   url={http://dx.doi.org/10.1609/aaai.v33i01.33014862},
   DOI={10.1609/aaai.v33i01.33014862},
   journal={Proceedings of the AAAI Conference on Artificial Intelligence},
   publisher={Association for the Advancement of Artificial Intelligence (AAAI)},
   author={Severson, Kristen A. and Ghosh, Soumya and Ng, Kenney},
   year={2019},
   month={Jul},
   pages={4862--4869}
}