
Why Invariant Risk Minimization Fails on Tabular Data: A Gradient Variance Solution

Mboya, G. O.

2026-04-13 · epidemiology
medRxiv · DOI: 10.64898/2026.04.09.26350513
Abstract

Machine learning models trained on observational data from one environment frequently fail when deployed in another, because standard learning algorithms exploit spurious correlations alongside causal ones. Invariant learning methods address this problem by seeking representations that support stable prediction across training environments, but their behavior on tabular data remains poorly characterized. We present CausTab, a gradient variance regularization framework for causal invariant representation learning on mixed tabular data. CausTab penalizes the variance of parameter gradients across training environments, providing a richer invariance signal than the scalar penalty used by Invariant Risk Minimization (IRM). We provide formal results showing that the gradient variance penalty is zero at causally invariant solutions and positive at solutions that rely on spurious features. Through experiments on synthetic data across three spurious-correlation regimes, four cycles of the National Health and Nutrition Examination Survey (NHANES), and four hospital systems in the UCI Heart Disease dataset, we demonstrate that: (1) IRM consistently degrades relative to standard empirical risk minimization (ERM) on tabular data, losing up to 13.8 AUC points in spurious-dominant settings, a failure we trace mechanistically to penalty collapse during training; (2) CausTab matches or exceeds ERM in every experimental condition; (3) CausTab achieves consistently better probability calibration than both ERM and IRM; and (4) invariant learning methods fail when environments differ in outcome prevalence rather than in spurious feature correlations, a boundary condition we characterize both empirically and theoretically. We introduce the Spurious Dominance Index (SDI), a practical scalar diagnostic for determining whether a dataset requires invariant learning, and validate it across all experimental settings.
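The core mechanism the abstract describes is compact enough to sketch. One natural formalization of a gradient variance penalty is V(θ) = (1/|E|) Σ_e ‖∇θ R_e(θ) − ḡ‖², where R_e is the risk in environment e and ḡ is the mean gradient across environments: if a representation is causally invariant, every environment pulls the parameters in the same direction and the variance term vanishes, consistent with the paper's formal result. Below is a minimal, hypothetical PyTorch sketch of such a penalty; the function and variable names are ours, and the actual CausTab implementation (which parameters it regularizes, how it handles mixed tabular inputs) is not specified in the abstract.

```python
import torch

def gradient_variance_penalty(model, loss_fn, env_batches):
    """Variance of per-environment parameter gradients.

    A sketch of the gradient-variance idea from the abstract, not the
    paper's implementation. `env_batches` is assumed to be a list of
    (features, labels) pairs, one per training environment.
    """
    per_env = []
    for x, y in env_batches:
        loss = loss_fn(model(x), y)
        # create_graph=True keeps the penalty differentiable, so it can
        # be minimized jointly with the ERM term
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        per_env.append(torch.cat([g.reshape(-1) for g in grads]))
    G = torch.stack(per_env)  # shape: (num_envs, num_params)
    # Sum over parameters of the across-environment gradient variance;
    # this is zero exactly when all environments yield identical gradients.
    return G.var(dim=0, unbiased=False).sum()
```

In a full training loop one would presumably minimize the pooled ERM loss plus λ times this penalty. Because the penalty is a vector-valued comparison over every parameter's gradient rather than IRM's single scalar, it plausibly provides the "richer invariance signal" the abstract credits with avoiding penalty collapse.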

Matching journals

The top 7 journals account for 50% of the predicted probability mass.

| Rank | Journal | Papers in training set | Percentile | Predicted probability |
|------|---------|------------------------|------------|-----------------------|
| 1 | Proceedings of the National Academy of Sciences | 2130 | Top 3% | 14.6% |
| 2 | Nature Communications | 4913 | Top 11% | 14.2% |
| 3 | Nature Medicine | 117 | Top 0.3% | 6.3% |
| 4 | Nature Machine Intelligence | 61 | Top 0.4% | 6.3% |
| 5 | PLOS ONE | 4510 | Top 34% | 4.3% |
| 6 | Nature | 575 | Top 6% | 3.9% |
| 7 | Science Advances | 1098 | Top 6% | 3.6% |
| 8 | Nature Neuroscience | 216 | Top 3% | 3.0% |
| 9 | Nature Human Behaviour | 85 | Top 1% | 3.0% |
| 10 | Scientific Reports | 3102 | Top 43% | 2.9% |
| 11 | Science | 429 | Top 10% | 2.9% |
| 12 | Science Translational Medicine | 111 | Top 2% | 2.3% |
| 13 | Epidemiology | 26 | Top 0.2% | 2.1% |
| 14 | PLOS Computational Biology | 1633 | Top 14% | 2.1% |
| 15 | eLife | 5422 | Top 40% | 1.8% |
| 16 | International Journal of Epidemiology | 74 | Top 1% | 1.7% |
| 17 | npj Digital Medicine | 97 | Top 2% | 1.5% |
| 18 | Nature Computational Science | 50 | Top 0.9% | 1.3% |
| 19 | Physical Review X | 23 | Top 0.4% | 1.2% |
| 20 | Nature Biotechnology | 147 | Top 6% | 0.9% |
| 21 | Nature Genetics | 240 | Top 6% | 0.9% |
| 22 | Bioinformatics | 1061 | Top 9% | 0.9% |
| 23 | Cancer Discovery | 61 | Top 2% | 0.9% |
| 24 | Emerging Infectious Diseases | 103 | Top 3% | 0.8% |
| 25 | Epidemics | 104 | Top 2% | 0.8% |
| 26 | Cell Reports | 1338 | Top 33% | 0.7% |
| 27 | PNAS Nexus | 147 | Top 2% | 0.7% |
| 28 | Nature Methods | 336 | Top 6% | 0.7% |
| 29 | Clinical Cancer Research | 58 | Top 2% | 0.7% |
| 30 | Physical Review Research | 46 | Top 1% | 0.6% |