Entropic Causal Inference: Graph Identifiability
Authors
- Kristjan Greenewald
- Spencer Compton
- Dmitriy Katz
- Murat Kocaoglu
Published on
07/23/2022
Entropic causal inference is a recent framework for learning the causal graph between two variables from observational data by finding the information-theoretically simplest structural explanation of the data, i.e., the model with the smallest entropy. In our work, we first extend the causal graph identifiability result in the two-variable setting under relaxed assumptions. We then show the first identifiability result using the entropic approach for learning causal graphs with more than two nodes. Our approach utilizes the property that ancestrality between a source node and its descendants can be determined using bivariate entropic tests. We provide a sound sequential peeling algorithm for general graphs that relies on this property. We also propose a heuristic algorithm for small graphs that shows strong empirical performance. We rigorously evaluate the performance of our algorithms on synthetic data generated from a variety of models, observing improvement over prior work. Finally, we test our algorithms on real-world datasets.
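The sequential peeling idea in the abstract can be sketched in a few lines: repeatedly identify a source node among the remaining variables via pairwise direction tests, remove it, and recurse, yielding a causal (topological) order. The sketch below is illustrative only: `direction_score` is a simplified conditional-entropy asymmetry used as a stand-in for the paper's actual bivariate entropic test (which compares minimum-entropy exogenous noise), and all function names and the synthetic data are assumptions, not the authors' implementation.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (in bits) of an empirical discrete distribution."""
    n = len(values)
    counts = Counter(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def conditional_entropy(xs, ys):
    """H(Y | X) estimated from paired samples, via H(X, Y) - H(X)."""
    return entropy(list(zip(xs, ys))) - entropy(xs)

def direction_score(xs, ys):
    """Toy stand-in for the paper's bivariate entropic test.

    Lower score favors the direction xs -> ys. Here we use a simple
    conditional-entropy asymmetry H(Y|X) - H(X|Y) purely for
    illustration; the paper's test instead compares the entropy of the
    exogenous noise needed to explain each direction.
    """
    return conditional_entropy(xs, ys) - conditional_entropy(ys, xs)

def peel_order(data):
    """Sequential peeling: at each step, pick the node whose pairwise
    scores against all remaining nodes most mark it as a source, append
    it to the order, and remove it. `data` maps node name -> list of
    observed discrete values. Returns a causal order over the nodes."""
    remaining = list(data)
    order = []
    while remaining:
        source = min(
            remaining,
            key=lambda u: sum(
                direction_score(data[u], data[v])
                for v in remaining if v != u
            ),
        )
        order.append(source)
        remaining.remove(source)
    return order
```

On a deterministic toy chain X -> Y -> Z where each node is a non-injective function of its parent (so the anti-causal conditional entropies are strictly positive), the sketch recovers the order `['X', 'Y', 'Z']`.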
Please cite our work using the BibTeX below.
@InProceedings{pmlr-v162-compton22a,
  title     = {Entropic Causal Inference: Graph Identifiability},
  author    = {Compton, Spencer and Greenewald, Kristjan and Katz, Dmitriy A and Kocaoglu, Murat},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {4311--4343},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/compton22a/compton22a.pdf},
  url       = {https://proceedings.mlr.press/v162/compton22a.html},
  abstract  = {Entropic causal inference is a recent framework for learning the causal graph between two variables from observational data by finding the information-theoretically simplest structural explanation of the data, i.e., the model with smallest entropy. In our work, we first extend the causal graph identifiability result in the two-variable setting under relaxed assumptions. We then show the first identifiability result using the entropic approach for learning causal graphs with more than two nodes. Our approach utilizes the property that ancestrality between a source node and its descendants can be determined using the bivariate entropic tests. We provide a sound sequential peeling algorithm for general graphs that relies on this property. We also propose a heuristic algorithm for small graphs that shows strong empirical performance. We rigorously evaluate the performance of our algorithms on synthetic data generated from a variety of models, observing improvement over prior work. Finally we test our algorithms on real-world datasets.}
}