
Learning Embeddings into Entropic Wasserstein Spaces

ICLR

Authors: Charlie Frogner, Farzaneh Mirzazadeh, Justin Solomon

Published on 05/09/2019

Despite their prevalence, Euclidean embeddings of data are fundamentally limited in their ability to capture latent semantic structures, which need not conform to Euclidean spatial assumptions. Here we consider an alternative, which embeds data as discrete probability distributions in a Wasserstein space, endowed with an optimal transport metric. Wasserstein spaces are much larger and more flexible than Euclidean spaces, in that they can successfully embed a wider variety of metric structures. We propose to exploit this flexibility by learning an embedding that captures the semantic information in the Wasserstein distance between embedded distributions. We examine empirically the representational capacity of such learned Wasserstein embeddings, showing that they can embed a wide variety of complex metric structures with smaller distortion than an equivalent Euclidean embedding. We also investigate an application to word embedding, demonstrating a unique advantage of Wasserstein embeddings: we can directly visualize the high-dimensional embedding, as it is a probability distribution on a low-dimensional space. This obviates the need for dimensionality reduction techniques such as t-SNE for visualization.
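The distance at the heart of this approach is the entropically regularized Wasserstein (Sinkhorn) distance between two discrete distributions, each given by a set of support points and weights. As a rough illustration only (not the authors' implementation), a minimal NumPy sketch of that distance, assuming a squared-Euclidean ground cost and the standard Sinkhorn fixed-point iterations, might look like:

```python
import numpy as np

def sinkhorn_distance(x, y, a, b, eps=0.1, n_iters=200):
    """Entropic-regularized Wasserstein distance between two discrete
    distributions: support points x (n, d) and y (m, d), weights a, b.

    This is an illustrative sketch; function name, eps, and n_iters are
    choices made here, not values taken from the paper.
    """
    # Pairwise squared-Euclidean ground costs between support points.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones(len(a))
    for _ in range(n_iters):         # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # entropic optimal transport plan
    return (P * C).sum()             # transport cost under that plan
```

Because of the entropy term, the self-distance is small but not exactly zero; as the regularization `eps` shrinks, the result approaches the unregularized Wasserstein distance (at the cost of slower, less stable iterations).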

Please cite our work using the BibTeX below.

@inproceedings{
  frogner2018learning,
  title={Learning Embeddings into Entropic Wasserstein Spaces},
  author={Charlie Frogner and Farzaneh Mirzazadeh and Justin Solomon},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=rJg4J3CqFm},
}