Machine Learning Fairness in Predicting Underweight, Overweight and Adiposity Across Socioeconomic and Caste Groups in India: Evidence from the Longitudinal Ageing Study in India
Lee, J. T.; Hsu, S. H.; Li, V. C.-S.; Anindya, K.; Chen, M.-H.; Wang, C.; Shen, T. K.-B.; Liu, V. T. N.; Chen, H.-H.; Atun, R.
Background
Machine learning (ML) models are widely used to predict body mass index (BMI), yet their fairness across socioeconomic and caste groups remains uncertain, especially in countries with structural inequalities. This study evaluated the accuracy and fairness of ML models in predicting underweight, overweight, and central adiposity; examined the impact of socioeconomic and household factors; identified key predictive features; and assessed the effect of bias mitigation techniques on model performance.

Methods
This study analysed data from the nationally representative Longitudinal Ageing Study in India (LASI), covering over 55,000 individuals aged 45 and older. We applied ML models (Random Forest, XGBoost, Gradient Boosting, LightGBM, DNN, DCN) alongside logistic regression. Models were trained on 80% of the data and tested on the remaining 20%, and evaluated using AUROC, accuracy, sensitivity, specificity, and precision. Fairness assessment included subgroup analyses across socioeconomic and caste groups and equity-based fairness metrics (e.g. Equalized Odds, Demographic Parity). Feature importance was examined using SHAP values. Bias mitigation techniques were applied at three stages: pre-processing (Disparate Impact Remover, Reweighting), in-processing (Exponentiated Gradient Reduction), and post-processing (Calibrated Equalized Odds, Reject Option Classification). Prediction density analysis assessed class separability across subgroups.

Results
Tree-based models, especially LightGBM and Gradient Boosting, along with logistic regression, consistently delivered the highest AUROC scores in predicting underweight, overweight, and high waist circumference (AUROC = 0.79-0.84). Incorporating socioeconomic and health-related variables progressively improved model performance; for example, the AUROC for underweight prediction increased from 0.74 to 0.78.
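The equity-based fairness metrics named in the Methods can be computed directly from predictions and group labels. The sketch below is illustrative only, not the study's pipeline: it defines Demographic Parity difference (gap in positive-prediction rates across groups) and Equalized Odds difference (largest gap in true- or false-positive rates across groups) in plain NumPy, on hypothetical toy data.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in TPR or FPR across groups."""
    gaps = []
    for outcome in (0, 1):  # outcome 0 compares FPRs, outcome 1 compares TPRs
        mask = y_true == outcome
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical predictions for two groups of four people each
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))           # 0.0
print(equalized_odds_diff(y_true, y_pred, group))       # 0.5
```

Note how the two criteria can disagree: here both groups receive positive predictions at the same rate (parity difference 0), yet the error rates conditional on the true label differ sharply (Equalized Odds difference 0.5).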
However, our analysis revealed notable fairness issues: models performed worse for Scheduled Tribes and lower socioeconomic groups, as evidenced by reduced sensitivity and specificity in these subgroups. Feature importance analysis using SHAP values indicated that variables such as grip strength, gender, and residence were the key drivers of prediction differences; specifically, lower grip strength and rural residence were linked to underweight, whereas higher grip strength, urban residence, and female gender were associated with overweight and central adiposity. Regarding bias mitigation, techniques such as Reject Option Classification and Equalized Odds post-processing showed some potential for reducing subgroup disparities by aligning the performance of low- and high-performing groups. Nevertheless, these adjustments sometimes came with trade-offs, and other methods, such as Exponentiated Gradient Reduction and Adversarial Debiasing, resulted in substantial declines in overall performance. While approaches like Disparate Impact Remover, Reweighting, and the stratified subgroup best model produced only modest changes relative to the unmitigated model, our findings highlight persistent fairness challenges.

Conclusions
ML models can effectively predict obesity and adiposity risks in India, but addressing biases is critical for equitable application. Fairness-aware ML approaches in public health need further refinement, particularly in the context of India's diverse population, to support more inclusive and effective policy decisions.

Author summary
India now faces the paradox of widespread undernutrition alongside a rising tide of obesity among its older population. We asked whether state-of-the-art machine-learning models could accurately identify individuals at highest risk of underweight, overweight-obesity, and central adiposity while treating all social groups equitably.
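Reject Option Classification, one of the post-processing techniques discussed above, can be sketched in a few lines. The idea: predictions whose scores fall inside an uncertainty band around the decision threshold are reassigned so that the unprivileged group receives the favourable label and the privileged group the unfavourable one; confident predictions are left alone. This is a minimal illustration with invented group labels and parameters, not the study's implementation.

```python
import numpy as np

def reject_option_classify(scores, group, unprivileged, threshold=0.5, margin=0.1):
    """Threshold scores as usual, then flip labels inside the uncertainty
    band [threshold - margin, threshold + margin]: favourable label (1) for
    the unprivileged group, unfavourable label (0) otherwise."""
    y = (scores >= threshold).astype(int)
    band = np.abs(scores - threshold) <= margin
    y[band & (group == unprivileged)] = 1
    y[band & (group != unprivileged)] = 0
    return y

# Hypothetical scores; "ST" marks the unprivileged group in this toy example
scores = np.array([0.45, 0.55, 0.90, 0.10])
group  = np.array(["ST", "general", "general", "ST"])
print(reject_option_classify(scores, group, unprivileged="ST"))  # [1 0 1 0]
```

Only the two borderline scores (0.45 and 0.55) are changed; the confident predictions (0.90 and 0.10) pass through untouched, which is why this method tends to trade a small amount of overall accuracy for narrower subgroup gaps.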
Using nationally representative data on more than 55,000 adults aged 45 years and above, we compared gradient-boosted decision trees, random forests, logistic regression, and other approaches with conventional regression techniques. Overall, the modern algorithms produced the strongest predictions. Yet a closer look revealed systematic shortfalls for Scheduled Tribes, Scheduled Castes, and the lowest income quintile, even when the models achieved excellent accuracy in the population as a whole. We then applied several well-established bias-mitigation strategies, such as re-weighting the training data and post-processing the decision thresholds. These interventions reduced the performance gap for disadvantaged groups, albeit at a modest cost to overall accuracy. By combining careful fairness audits with Shapley-based interpretation of feature importance, we illuminate how socioeconomic and caste-related factors shape both nutritional risk and prediction error. Our findings underscore that fair, trustworthy decision-support systems in public health must be designed explicitly with equity objectives, rather than assuming that technical excellence alone will guarantee just outcomes.
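The re-weighting of training data mentioned above is commonly done with the Kamiran and Calders reweighing scheme: each (group, label) cell receives weight P(group) × P(label) / P(group, label), so that group membership and outcome become statistically independent in the weighted sample. The sketch below shows the scheme on toy data; it is an assumed illustration of the general technique, not the study's code.

```python
import numpy as np

def reweighing_weights(y, group):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), balancing label rates across
    groups in the weighted training data."""
    w = np.empty(len(y), dtype=float)
    for g in np.unique(group):
        for lab in np.unique(y):
            cell = (group == g) & (y == lab)
            p_cell = cell.mean()
            if p_cell > 0:
                w[cell] = (group == g).mean() * (y == lab).mean() / p_cell
    return w

# Toy data: group 0 has a 3/4 positive rate, group 1 only 1/4
y     = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
w = reweighing_weights(y, group)
```

After weighting, both groups have the same weighted positive rate (0.5 here), and the weights can be passed to any learner that accepts `sample_weight`, which is what makes this a pre-processing method: the model itself is unchanged.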