Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge
Authors
- Karthikeyan Shanmugam
- Abhin Shah
- Kartik Ahuja
Published on
03/30/2022
Treatment effect estimation from observational data is a fundamental problem in causal inference. Two very different schools of thought have tackled this problem. On one hand, the Pearlian framework commonly assumes structural knowledge (provided by an expert) in the form of directed acyclic graphs, and provides graphical criteria such as the back-door criterion to identify valid adjustment sets. On the other hand, the potential outcomes (PO) framework commonly assumes that all observed features satisfy ignorability (i.e., no hidden confounding), which is in general untestable. Prior works that attempted to bridge these frameworks give an observational criterion to identify an anchor variable: if a subset of covariates (not involving the anchor variable) passes a suitable conditional independence criterion, then that subset is a valid back-door adjustment set. Our main result strengthens these prior results by showing that, under a different piece of expert-driven structural knowledge, namely that one variable is a direct causal parent of the treatment variable, testing whether subsets (not involving the known parent variable) are valid back-doors is, remarkably, equivalent to an invariance test. Importantly, we also cover the non-trivial case where the entire set of observed features is not ignorable (generalizing the PO framework), without requiring knowledge of all the parents of the treatment variable. Our key technical idea is the generation of a synthetic sub-sampling (or environment) variable that is a function of the known parent variable. Besides enabling an invariance test, this sub-sampling variable allows us to leverage Invariant Risk Minimization, and thus connects finding valid adjustments (in non-ignorable observational settings) to representation learning. We demonstrate the effectiveness and tradeoffs of these approaches on a variety of synthetic datasets as well as real causal effect estimation benchmarks.
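To make the key idea concrete, here is a minimal illustrative sketch (not the paper's exact algorithm or theory) on a hypothetical linear SCM: Z is a known direct parent of the treatment T, C is an observed confounder, and the true effect of T on Y is 2. A synthetic environment variable E is carved out of Z alone, and a candidate adjustment set is checked for whether the adjusted effect is invariant across environments. All variable names and the data-generating process below are assumptions made for illustration.

```python
import numpy as np

# Hypothetical linear SCM (illustration only, not the paper's setup):
# Z is a known direct parent of treatment T, C confounds T and Y,
# and the true causal effect of T on Y is 2.
rng = np.random.default_rng(0)
n = 20000
Z = rng.normal(size=n)                   # known parent of T
C = rng.normal(size=n)                   # observed confounder
T = Z + C + rng.normal(size=n)           # treatment
Y = 2.0 * T + C + rng.normal(size=n)     # outcome; true effect = 2

# Synthetic sub-sampling (environment) variable: a function of the
# known parent Z only, so it splits the data without touching T or Y.
E = (np.abs(Z) > 1.0).astype(int)

def adjusted_effect(mask, covariates):
    """OLS coefficient of T in a regression of Y on [T, covariates, 1],
    restricted to one environment (boolean mask)."""
    cols = [T[mask]] + [c[mask] for c in covariates] + [np.ones(mask.sum())]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, Y[mask], rcond=None)
    return coef[0]

# Invariance check: for a valid adjustment set such as {C}, the adjusted
# effect should agree across environments; with no adjustment, the
# confounding bias depends on Var(T) per environment, so it does not.
eff_valid = [adjusted_effect(E == e, [C]) for e in (0, 1)]  # adjust for {C}
eff_none = [adjusted_effect(E == e, []) for e in (0, 1)]    # no adjustment
```

In this toy example the adjusted effect under {C} is close to 2 in both environments, while the unadjusted estimate shifts between environments because the split on Z changes Var(T) and hence the magnitude of the confounding bias.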
Please cite our work using the BibTeX below.
@misc{https://doi.org/10.48550/arxiv.2106.11560,
doi = {10.48550/ARXIV.2106.11560},
url = {https://arxiv.org/abs/2106.11560},
author = {Shah, Abhin and Shanmugam, Karthikeyan and Ahuja, Kartik},
keywords = {Machine Learning (cs.LG), FOS: Computer and information sciences},
title = {Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}