Research

Improved Mutual Information Estimation

AAAI

Authors

Youssef Mroueh, Igor Melnyk, Pierre Dognin, Jarret Ross, Tom Sercu

Published on

05/18/2021

We propose to estimate the KL divergence using relaxed likelihood-ratio estimation in a Reproducing Kernel Hilbert Space (RKHS). We show that, in the particular case of mutual information (MI) estimation, the dual of our KL ratio estimator corresponds to a lower bound on the MI that is related to the so-called Donsker-Varadhan lower bound. In this dual form, MI is estimated by learning a witness function that discriminates between the joint density and the product of the marginals, together with an auxiliary scalar variable that enforces a normalization constraint on the likelihood ratio. By extending the function space to neural networks, we propose an efficient neural MI estimator and validate its performance on synthetic examples, showing an advantage over existing baselines. We demonstrate its strength in large-scale self-supervised representation learning through MI maximization.
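To make the dual form concrete, below is a minimal PyTorch sketch of a neural estimator in this family: a witness network f(x, y) that discriminates joint samples from product-of-marginals samples, plus a learned scalar eta that enforces the normalization constraint on the likelihood ratio. The sketch maximizes the generic bound MI >= E_{p(x,y)}[f] - e^{-eta} E_{p(x)p(y)}[e^{f}] - eta + 1, which is tight when e^{f - eta} equals the true likelihood ratio; the class name, network architecture, and this particular form of the bound are illustrative assumptions here, not the paper's exact objective.

# Sketch of a neural MI lower-bound estimator with a witness network and an
# auxiliary normalization scalar (illustrative; not the paper's exact objective).
import torch
import torch.nn as nn

class WitnessMI(nn.Module):
    def __init__(self, x_dim, y_dim, hidden=128):
        super().__init__()
        # Witness function f(x, y), parameterized as a small MLP.
        self.f = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Auxiliary scalar enforcing normalization of the likelihood ratio.
        self.eta = nn.Parameter(torch.zeros(()))

    def forward(self, x, y):
        # f evaluated on samples from the joint p(x, y).
        joint = self.f(torch.cat([x, y], dim=-1)).squeeze(-1)
        # Shuffling y within the batch mimics samples from p(x)p(y).
        y_shuf = y[torch.randperm(y.size(0))]
        marg = self.f(torch.cat([x, y_shuf], dim=-1)).squeeze(-1)
        # Lower bound on MI: E_joint[f] - exp(-eta) * E_marg[exp(f)] - eta + 1.
        return joint.mean() - torch.exp(marg - self.eta).mean() - self.eta + 1.0

Training maximizes the bound over both the witness network and eta (e.g., loss = -model(x, y)); at the optimum, eta absorbs log E_{p(x)p(y)}[e^{f}], recovering the Donsker-Varadhan value.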

Please cite our work using the BibTeX below.

@article{Mroueh_Melnyk_Dognin_Ross_Sercu_2021,
  title={Improved Mutual Information Estimation},
  volume={35},
  url={https://ojs.aaai.org/index.php/AAAI/article/view/17089},
  DOI={10.1609/aaai.v35i10.17089},
  number={10},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  author={Mroueh, Youssef and Melnyk, Igor and Dognin, Pierre and Ross, Jarret and Sercu, Tom},
  year={2021},
  month={May},
  pages={9009--9017}
}