Instrumental and Proximal Causal Inference with Gaussian Processes
2026-03-02 • Machine Learning
AI summary
The authors developed a new method called Deconditional Gaussian Process (DGP) to help understand cause-and-effect relationships when some important factors are hidden. Their method not only makes good predictions but also tells us how confident those predictions are. This helps researchers know when to trust the results and when to be cautious. They tested their approach and found it works well and provides useful information about uncertainty, something often missing in previous methods.
causal inference · instrumental variable · proximal causal learning · unobserved confounding · Gaussian process · epistemic uncertainty · kernel estimators · model selection · marginal log-likelihood · uncertainty quantification
Authors
Yuqi Zhang, Krikamol Muandet, Dino Sejdinovic, Edwin Fong, Siu Lun Chau
Abstract
Instrumental variable (IV) and proximal causal learning (Proxy) methods are central frameworks for causal inference in the presence of unobserved confounding. Despite substantial methodological advances, existing approaches rarely provide reliable epistemic uncertainty (EU) quantification. We address this gap through a Deconditional Gaussian Process (DGP) framework for uncertainty-aware causal learning. Our formulation recovers popular kernel estimators as the posterior mean, ensuring predictive precision, while the posterior variance yields principled and well-calibrated EU. Moreover, the probabilistic structure enables systematic model selection via marginal log-likelihood optimization. Empirical results demonstrate strong predictive performance alongside informative EU quantification, evaluated via empirical coverage frequencies and decision-aware accuracy rejection curves. Together, our approach provides a unified, practical solution for causal inference under unobserved confounding with reliable uncertainty.
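The abstract refers to three Gaussian process quantities: a posterior mean that takes the form of a kernel estimator, a posterior variance that quantifies epistemic uncertainty, and the marginal log-likelihood used for model selection. The sketch below illustrates these quantities for plain exact GP regression with an RBF kernel; it is not the authors' Deconditional Gaussian Process, only a minimal illustration of the three objects the paper builds on, with all function names and hyperparameters chosen for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(a, b) = variance * exp(-(a - b)^2 / (2 l^2))."""
    sq = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior(X, y, X_star, lengthscale=1.0, variance=1.0, noise=1e-4):
    """Exact GP regression on 1-D inputs.

    Returns the posterior mean (a kernel estimator in the training data),
    the pointwise posterior variance (epistemic uncertainty), and the log
    marginal likelihood (the model-selection objective).
    """
    n = len(X)
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    K_s = rbf_kernel(X, X_star, lengthscale, variance)
    mean = K_s.T @ alpha                            # posterior mean
    v = np.linalg.solve(L, K_s)
    cov = rbf_kernel(X_star, X_star, lengthscale, variance) - v.T @ v
    lml = (-0.5 * y @ alpha
           - np.sum(np.log(np.diag(L)))
           - 0.5 * n * np.log(2 * np.pi))           # log marginal likelihood
    return mean, np.diag(cov), lml

# Toy usage: uncertainty is small near the data and grows far from it.
X = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * X)
X_star = np.array([0.5, 5.0])                       # in-range vs. far extrapolation
mean, var, lml = gp_posterior(X, y, X_star, lengthscale=0.3)
```

Comparing `lml` across hyperparameter settings (e.g. different lengthscales) is the standard way marginal-likelihood-based model selection proceeds: the setting with the larger log marginal likelihood is preferred without a separate validation split.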