Learning and Testing Causal Models with Interventions

NeurIPS

Authors

Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, Saravanan Kandasamy

Published on

12/08/2018

We consider testing and learning problems on causal Bayesian networks as defined by Pearl [Pea09]. Given a causal Bayesian network M on a graph with n discrete variables and bounded in-degree and bounded "confounded components", we show that O(log n) interventions on an unknown causal Bayesian network X on the same graph, and O(n/ε²) samples per intervention, suffice to efficiently distinguish whether X = M or whether there exists some intervention under which X and M are farther than ε in total variation distance. We also obtain sample/time/intervention efficient algorithms for: (i) testing the identity of two unknown causal Bayesian networks on the same graph; and (ii) learning a causal Bayesian network on a given graph. Although our algorithms are non-adaptive, we show that adaptivity does not help in general: Ω(log n) interventions are necessary for testing the identity of two unknown causal Bayesian networks on the same graph, even adaptively. Our algorithms are enabled by a new subadditivity inequality for the squared Hellinger distance between two causal Bayesian networks.
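The abstract measures closeness in two metrics: total variation distance and squared Hellinger distance. A minimal sketch of both for discrete distributions (illustrative only; the example distributions `p` and `q` are made up, and this does not reproduce the paper's algorithms) may clarify how they relate:

```python
import numpy as np

def total_variation(p, q):
    # TV distance between two discrete distributions: half the L1 distance.
    return 0.5 * np.abs(p - q).sum()

def squared_hellinger(p, q):
    # Squared Hellinger distance: H^2(p, q) = 1 - sum_i sqrt(p_i * q_i).
    return 1.0 - np.sqrt(p * q).sum()

# Two example distributions over three outcomes (hypothetical values).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

tv = total_variation(p, q)
h2 = squared_hellinger(p, q)

# Standard relation between the metrics: H^2 <= TV <= sqrt(2) * H,
# which is why a subadditivity bound on H^2 translates into TV guarantees.
assert h2 <= tv <= np.sqrt(2.0 * h2)
```

Squared Hellinger distance is convenient here because, unlike total variation, it subadds across the variables of a Bayesian network, which is the role of the paper's new inequality.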

Please cite our work using the BibTeX below.

@inproceedings{NEURIPS2018_78631a4b,
 author = {Acharya, Jayadev and Bhattacharyya, Arnab and Daskalakis, Constantinos and Kandasamy, Saravanan},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
 pages = {},
 publisher = {Curran Associates, Inc.},
 title = {Learning and Testing Causal Models with Interventions},
 url = {https://proceedings.neurips.cc/paper/2018/file/78631a4bb5303be54fa1cfdcb958c00a-Paper.pdf},
 volume = {31},
 year = {2018}
}