Research

Learning-based Support Estimation In Sublinear Time

ICLR

Authors

Talya Eden, Piotr Indyk, Shyam Narayanan, Ronitt Rubinfeld, Sandeep Silwal, Tal Wagner

Published on

09/29/2020

Categories

ICLR

We consider the problem of estimating the number of distinct elements in a large data set (or, equivalently, the support size of the distribution induced by the data set) from a random sample of its elements. The problem occurs in many applications, including biology, genomics, computer systems and linguistics. A line of research spanning the last decade resulted in algorithms that estimate the support up to ±εn from a sample of size O(log²(1/ε) · n/log n), where n is the data set size. Unfortunately, this bound is known to be tight, limiting further improvements to the complexity of this problem. In this paper we consider estimation algorithms augmented with a machine-learning-based predictor that, given any element, returns an estimate of its frequency. We show that if the predictor is correct up to a constant approximation factor, then the sample complexity can be reduced significantly, to log(1/ε) · n^(1−Θ(1/log(1/ε))). We evaluate the proposed algorithms on a collection of data sets, using the neural-network-based estimators from Hsu et al. (ICLR '19) as predictors. Our experiments demonstrate substantial (up to 3×) improvements in estimation accuracy compared to the state-of-the-art algorithm.
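For intuition, the Python sketch below shows one way a frequency predictor can be plugged into a sample-based support estimator: elements the predictor labels as frequent are counted directly from the sample, while the remaining rare elements are reweighted by their predicted probability of appearing in the sample. This is a minimal sketch, not the algorithm analyzed in the paper: the predictor, the threshold, and the Horvitz–Thompson-style inverse-inclusion-probability correction for the rare part are illustrative assumptions, and the paper's algorithm handles that hard step with a more refined estimator that comes with formal guarantees.

import random
from collections import Counter


def estimate_support(sample, n, predictor, threshold=2.0):
    """Sketch: estimate the number of distinct elements in a data set of size n
    from a uniform random sample, guided by a (possibly noisy) frequency predictor."""
    s = len(sample)
    seen = set(sample)

    # Elements the predictor labels as frequent almost surely appear in the
    # sample, so counting the distinct ones observed is already accurate.
    heavy = {x for x in seen if predictor(x) >= threshold * n / s}
    estimate = float(len(heavy))

    # For the remaining rare elements, weight each distinct element observed by
    # the inverse of its probability of showing up in the sample at all, using
    # the predicted frequency. (A stand-in for the estimator used in the paper.)
    for x in seen - heavy:
        f = max(predictor(x), 1.0)              # guard against predictions <= 0
        p_seen = 1.0 - (1.0 - f / n) ** s       # P[x appears in a sample of size s]
        estimate += 1.0 / max(p_seen, 1e-12)

    return estimate


if __name__ == "__main__":
    random.seed(0)
    # Synthetic heavy-tailed data set and a noisy stand-in for a learned
    # predictor ("correct up to a constant factor": true count times 0.5..2).
    data = [int(random.paretovariate(1.2)) for _ in range(200_000)]
    counts = Counter(data)
    noisy = {x: c * random.uniform(0.5, 2.0) for x, c in counts.items()}

    sample = random.choices(data, k=len(data) // 20)   # 5% uniform sample
    est = estimate_support(sample, len(data), noisy.get)
    print(f"true support = {len(set(data))}, estimate = {est:.0f}")

The split reflects the high-level idea described in the abstract: a frequency predictor lets the estimator count the frequent elements cheaply and concentrate the sample budget on the rare elements, which are the hard part of support estimation.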

This paper has been published at ICLR 2021

Please cite our work using the BibTeX below.

@inproceedings{eden2021learningbased,
  title={Learning-based Support Estimation in Sublinear Time},
  author={Talya Eden and Piotr Indyk and Shyam Narayanan and Ronitt Rubinfeld and Sandeep Silwal and Tal Wagner},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=tilovEHA3YS}
}