Ensemble Estimation of Information Divergence

Information Theory

Recent work has focused on nonparametric estimation of information divergence functionals between two continuous random variables. Many existing approaches require either restrictive assumptions about the density support set or difficult calculations at the support set boundary, which must be known a priori. We derive the mean squared error (MSE) convergence rate of a leave-one-out kernel density plug-in divergence functional estimator for general bounded density support sets; knowledge of the support boundary, and therefore boundary correction, is not required. We generalize the theory of optimally weighted ensemble estimation to derive a divergence estimator that achieves the parametric MSE rate when the densities are sufficiently smooth, and we provide guidelines for tuning parameter selection along with the asymptotic distribution of the estimator. Based on this theory, we propose an empirical estimator of Rényi-α divergence that greatly outperforms the standard kernel density plug-in estimator in terms of MSE, especially in high dimensions, and is robust to the choice of tuning parameters. We show extensive simulation results that verify the theoretical results of our paper. Finally, we apply the proposed estimator to estimate bounds on the Bayes error rate of a cell classification problem.
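To make the baseline concrete, here is a minimal sketch of the standard leave-one-out kernel density plug-in estimator of Rényi-α divergence that the abstract compares against. This is not the paper's code or its ensemble estimator; the function names, the Gaussian kernel, and the fixed bandwidth `h` are illustrative assumptions, using only NumPy.

```python
import numpy as np

def kde(points, queries, h, exclude_self=False):
    """Gaussian kernel density estimate built from `points`, evaluated
    at `queries`. With exclude_self=True (and queries identical to
    points), each point is left out of its own estimate (leave-one-out)."""
    d = points.shape[1]
    diff = queries[:, None, :] - points[None, :, :]          # pairwise differences
    k = np.exp(-0.5 * (diff ** 2).sum(-1) / h ** 2)
    k /= (2.0 * np.pi) ** (d / 2) * h ** d                   # Gaussian normalization
    if exclude_self:
        np.fill_diagonal(k, 0.0)                             # drop self-contribution
        return k.sum(axis=1) / (len(points) - 1)
    return k.mean(axis=1)

def renyi_divergence_plugin(x, y, alpha=0.8, h=0.3):
    """Plug-in estimate of the Renyi-alpha divergence D_alpha(f || g)
    from samples x ~ f and y ~ g, via
        D_alpha = log( E_f[(f/g)^(alpha-1)] ) / (alpha - 1),
    with f and g replaced by kernel density estimates (illustrative
    sketch, not the paper's weighted ensemble estimator)."""
    f_hat = kde(x, x, h, exclude_self=True)   # leave-one-out estimate of f at x
    g_hat = kde(y, x, h)                      # estimate of g at x
    return np.log(np.mean((f_hat / g_hat) ** (alpha - 1))) / (alpha - 1)
```

The ensemble estimator studied in the paper improves on this baseline by combining such plug-in estimates computed at several bandwidths with optimally chosen weights, which cancels lower-order bias terms and recovers the parametric MSE rate.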

Please cite our work using the BibTeX below.

@article{DBLP:journals/corr/MoonSGH16,
  author    = {Kevin R. Moon and
               Kumar Sricharan and
               Kristjan H. Greenewald and
               Alfred O. Hero III},
  title     = {Improving Convergence of Divergence Functional Ensemble Estimators},
  journal   = {CoRR},
  volume    = {abs/1601.06884},
  year      = {2016},
  url       = {http://arxiv.org/abs/1601.06884},
  archivePrefix = {arXiv},
  eprint    = {1601.06884},
  timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},
  biburl    = {},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}