Improving RCT-Based Treatment Effect Estimation Under Covariate Mismatch via Calibrated Alignment

2026-03-19 · Machine Learning

AI summary

The authors address a central challenge in combining data from randomized controlled trials (RCTs) and large observational studies (OS): the two sources often record different, only partially overlapping covariates. They propose a method called CALM that learns a shared representation space in which features from both sources can be compared directly, without imputing the missing covariates. Outcome models learned from the OS are transferred into this space and calibrated against the trial data, yielding better estimates of how treatment effects vary across individuals while preserving the causal guarantees of randomization. Their experiments show that CALM performs especially well when treatment effects are nonlinear, improving over imputation-based baselines.

Randomized Controlled Trials · Observational Studies · Heterogeneous Treatment Effects · Conditional Average Treatment Effect · Covariate Mismatch · Embedding Learning · Model Calibration · Causal Inference · Transfer Learning · Distributional Shift
Authors
Amir Asiaee, Samhita Pal
Abstract
Randomized controlled trials (RCTs) are the gold standard for estimating heterogeneous treatment effects, yet they are often underpowered for detecting effect heterogeneity. Large observational studies (OS) can supplement RCTs for conditional average treatment effect (CATE) estimation, but a key barrier is covariate mismatch: the two sources measure different, only partially overlapping, covariates. We propose CALM (Calibrated ALignment under covariate Mismatch), which bypasses imputation by learning embeddings that map each source's features into a common representation space. OS outcome models are transferred to the RCT embedding space and calibrated using trial data, preserving causal identification from randomization. Finite-sample risk bounds decompose into alignment error, outcome-model complexity, and calibration complexity terms, identifying when embedding alignment outperforms imputation. Under the calibration-based linear variant, the framework provides protection against negative transfer; the neural variant can be vulnerable under severe distributional shift. Under sparse linear models, the embedding approach strictly generalizes imputation. Simulations across 51 settings confirm that (i) calibration-based methods are equivalent for linear CATEs, and (ii) the neural embedding variant wins all 22 nonlinear-regime settings with large margins.
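The pipeline the abstract outlines (align both sources into a shared embedding, fit OS outcome models there, then calibrate them on trial data) can be sketched roughly as below. This is an illustrative toy, not the authors' implementation: the data-generating process is invented, and the "embedding" is simply the overlapping covariate, standing in for CALM's learned alignment maps. The calibration step corresponds to the linear variant: regress RCT outcomes on the transferred OS model's predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three latent covariates; the RCT records (x1, x2)
# and the OS records (x2, x3), so x2 is the overlap.
n_os, n_rct = 5000, 300
X_os = rng.normal(size=(n_os, 3))
X_rct = rng.normal(size=(n_rct, 3))

def outcome(X, t):
    # Invented data-generating process: CATE = 1 + 0.5 * x2 (linear regime).
    return X[:, 1] + t * (1.0 + 0.5 * X[:, 1]) + rng.normal(scale=0.3, size=len(X))

T_os = rng.integers(0, 2, n_os)     # OS treatment (confounding omitted for brevity)
Y_os = outcome(X_os, T_os)
T_rct = rng.integers(0, 2, n_rct)   # randomized assignment in the trial
Y_rct = outcome(X_rct, T_rct)

# Step 1: map each source into a common representation space. Here we just
# take the overlapping covariate; CALM learns these maps instead.
Z_os = X_os[:, [1]]
Z_rct = X_rct[:, [1]]

def ridge_fit(Z, y, lam=1e-2):
    Zb = np.hstack([np.ones((len(Z), 1)), Z])
    return np.linalg.solve(Zb.T @ Zb + lam * np.eye(Zb.shape[1]), Zb.T @ y)

def ridge_predict(w, Z):
    return np.hstack([np.ones((len(Z), 1)), Z]) @ w

# Step 2: fit per-arm outcome models on the OS in the embedding space.
w0 = ridge_fit(Z_os[T_os == 0], Y_os[T_os == 0])
w1 = ridge_fit(Z_os[T_os == 1], Y_os[T_os == 1])

# Step 3: calibrate each transferred model on the trial, per arm:
# a linear regression of RCT outcomes on the OS model's predictions.
def calibrate(w, Z, y):
    preds = ridge_predict(w, Z)
    a = ridge_fit(preds[:, None], y, lam=0.0)  # intercept and slope
    return lambda Znew: a[0] + a[1] * ridge_predict(w, Znew)

mu0 = calibrate(w0, Z_rct[T_rct == 0], Y_rct[T_rct == 0])
mu1 = calibrate(w1, Z_rct[T_rct == 1], Y_rct[T_rct == 1])

# Calibrated CATE estimates on the trial sample.
cate = mu1(Z_rct) - mu0(Z_rct)
print(round(float(cate.mean()), 2))  # near the average effect of ~1.0
```

Because both calibrated arm models are linear in the embedding, the estimated CATE recovers the linear heterogeneity in x2; the paper's neural variant replaces the hand-picked embedding and linear maps with learned networks, which is where the gains in the nonlinear regimes come from.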