AI summary
The authors study how to better estimate uncertainty in predictions when only part of the data behaves similarly to the test case, a situation common in experiments where genes are intentionally changed. They focus on learning which parts of the data are "unaffected" by certain interventions, rather than knowing the full causal relationships, to improve prediction reliability. They provide a theoretical result showing how mistakes in identifying unaffected data affect prediction accuracy, propose methods to find these unaffected parts from patterns in the data, and test their ideas with simulations and real gene perturbation data. Their approach helps keep uncertainty estimates accurate even when there is some misclassification in the calibration data.
Keywords: Selective conformal prediction, Exchangeability, Interventional settings, Causal graph, Descendants, Contamination robustness, Structural equation models (SEMs), Invariant causal prediction, CRISPR interference (CRISPRi), Calibration set
Authors
Amir Asiaee, Kavey Aryan, James P. Long
Abstract
Selective conformal prediction can yield substantially tighter uncertainty sets when we can identify calibration examples that are exchangeable with the test example. In interventional settings, such as perturbation experiments in genomics, exchangeability often holds only within subsets of interventions that leave a target variable "unaffected" (e.g., non-descendants of an intervened node in a causal graph). We study the practical regime where this invariance structure is unknown and must be learned from data. Our contributions are: (i) a contamination-robust conformal coverage theorem that quantifies how misclassification of "unaffected" calibration examples degrades coverage via an explicit function $g(\delta, n)$ of the contamination fraction and calibration set size, providing a finite-sample lower bound that holds for arbitrary contaminating distributions; (ii) a task-driven partial causal learning formulation that estimates only the binary descendant indicators $Z_{a,i}=\mathbf{1}\{i\in\mathrm{desc}(a)\}$ needed for selective calibration, rather than the full causal graph; and (iii) algorithms for descendant discovery via perturbation intersection patterns (differentially affected variable set intersections across interventions), and for approximate distance-to-intervention estimation via local invariant causal prediction. We provide recovery conditions under which contamination is controlled. Experiments on synthetic linear structural equation models (SEMs) validate the bound: under controlled contamination up to $\delta=0.30$, the corrected procedure maintains $\ge 0.95$ coverage while uncorrected selective CP degrades to $0.867$. A proof-of-concept on Replogle K562 CRISPR interference (CRISPRi) perturbation data demonstrates applicability to real genomic screens.
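The correction the abstract describes can be illustrated with a minimal sketch. The code below is not the paper's exact bound $g(\delta, n)$; it substitutes the simplest worst-case heuristic (inflating the quantile level by the contamination fraction $\delta$), and all names and the toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_quantile(scores, alpha):
    # Standard split-conformal quantile: the ceil((n+1)(1-alpha))-th
    # smallest calibration score.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

def corrected_quantile(scores, alpha, delta):
    # Hypothetical contamination correction: assume up to a delta fraction of
    # calibration points are misclassified as "unaffected", and in the worst
    # case they all eat into the miscoverage budget. The paper's g(delta, n)
    # would replace this crude alpha - delta slack.
    alpha_eff = max(alpha - delta, 1e-6)
    return conformal_quantile(scores, alpha_eff)

# Toy calibration set: 95% genuinely "unaffected" residuals plus a 5%
# contaminated fraction drawn from a different (wider) distribution.
n, alpha, delta = 500, 0.10, 0.05
clean = np.abs(rng.normal(0.0, 1.0, int(n * (1 - delta))))
contam = np.abs(rng.normal(0.0, 3.0, n - len(clean)))
scores = np.concatenate([clean, contam])

q_naive = conformal_quantile(scores, alpha)
q_corr = corrected_quantile(scores, alpha, delta)
# The correction can only widen the prediction set, restoring coverage at
# the cost of efficiency.
assert q_corr >= q_naive
```

The trade-off the experiments report (maintained $\ge 0.95$ coverage under contamination, versus degraded coverage for the uncorrected procedure) corresponds to exactly this widening: the corrected quantile sits higher in the score distribution, so prediction sets are larger but remain valid.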