A Survey on Optimization and Machine Learning-Based Fair Decision Making in Healthcare
Chen, Z.; Marrero, W. J.
The unintended biases introduced by optimization and machine learning (ML) models are a topic of great interest to medical researchers and professionals. Bias in healthcare decisions can cause patients from vulnerable populations (e.g., racially minoritized, low-income, or living in rural areas) to have lower access to resources and inferior outcomes, thus exacerbating societal unfairness. In this systematic literature review, we present a structured overview of the literature on fair decision making in healthcare published through April 2024. After screening 801 unique references, we identified 114 articles within the scope of our review. We comprehensively examine fair decision-making methodologies in healthcare by systematically identifying and categorizing biases within both data and models. We first describe existing biases in healthcare decision making, then present a range of fairness metrics drawn from different use cases, and finally analyze and classify bias mitigation strategies into pre-processing, in-processing, and post-processing techniques. We provide a broad conceptual overview and practical illustrations of each approach. Additionally, we examine emerging bias mitigation technologies that, though not yet applied in healthcare, show substantial promise for future integration. Our review aims to increase awareness of fairness in healthcare decision making and facilitate the selection of appropriate approaches under varying scenarios.
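To make the abstract's notion of a "fairness metric" concrete, here is a minimal sketch of one commonly used metric, the demographic parity difference: the gap in positive-decision rates between two patient groups. This metric is chosen here purely for illustration; the survey itself covers a range of metrics, and the data below are fabricated toy values.

```python
# Sketch of a single fairness metric: demographic parity difference.
# All inputs are illustrative; nothing here comes from the survey's data.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between two groups.

    y_pred: 0/1 model decisions (e.g., allocate a scarce resource).
    group:  group label for each decision (exactly two groups assumed).
    """
    groups = sorted(set(group))
    assert len(groups) == 2, "this sketch assumes exactly two groups"
    rates = []
    for g in groups:
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])

# Toy example: group 'a' gets positive decisions at rate 0.75,
# group 'b' at rate 0.25, so the disparity is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_difference(y_pred, group))  # 0.5
```

A pre-processing mitigation would rebalance the training data before fitting, an in-processing approach would penalize this gap during training, and a post-processing approach would adjust decision thresholds per group after training, which is the three-way classification the review uses.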