Implementing Inclusive and Community-Driven AI for Health programs: Lessons from Frontier Technology Pilots in Developing Contexts
Moore, C.; Mugwagwa, J.; Vickers, I.
The use of Artificial Intelligence (AI) in healthcare is a field of growing relevance and importance, but in many LMICs those seeking to develop AI-based solutions for healthcare needs face significant outstanding challenges. This research analysed practical efforts to implement AI-based technologies to support healthcare delivery in low-resource settings. By investigating six pilots within the Foreign, Commonwealth and Development Office's Frontier Technologies programme, through analysis of associated pilot literature and semi-structured interviews with key pilot actors, we identified differences and commonalities in the experiences of each pilot, and in the perceived enablers and barriers to effective implementation of AI health tools. We found that AI is a promising tool in this sector but currently lacks the operating environment to be widely successful in solving healthcare challenges. Gaps in regulatory and ethical governance in these contexts exacerbated concerns around the ethical and responsible use of AI and led to alternative technical approaches being followed. The value of partnerships and relationships proved essential: projects with pre-established networks linking key decision makers in healthcare systems, at both bureaucratic and clinical levels, demonstrated greater success in developing and scaling their solutions. The challenge of sustainability and longer-term impact was also identified, and the fragmented nature of local technology ecosystems posed a common barrier to the delivery and scale-up of promising AI tools. We anticipate that this research offers useful lessons for future users and developers of AI technologies and tools in the health space, particularly in resource-constrained settings.
These findings suggest that barriers to equitable AI adoption in low-resource settings are primarily institutional and systemic, rather than technical, highlighting the need for health system-level readiness alongside technological innovation.

Author Summary

Artificial intelligence (AI) is increasingly promoted as a way to improve healthcare delivery, including in low- and middle-income countries (LMICs). However, much of the existing discussion focuses on technical performance, with less attention to whether AI tools can be implemented, governed, and sustained within real-world health systems. In this study, we examine a set of AI-for-health pilot projects implemented in low-resource settings to understand what enables or constrains their adoption. Using interviews with practitioners and a review of project documentation, we explore how these pilots interacted with existing health system conditions, including workforce capacity, data infrastructure, governance arrangements, and institutional partnerships. We find that many of the challenges faced by AI projects are not primarily technical, but instead reflect broader system-level constraints, such as limited regulatory capacity, fragmented data systems, and reliance on external actors for development and maintenance. Our findings suggest that achieving equitable and inclusive AI for health requires more than developing effective technologies. It also requires sustained investment in the institutions, governance structures, and system capacities that allow AI tools to be safely adopted and integrated into health services. This study offers practical insights for policymakers, funders, and practitioners seeking to use AI in ways that strengthen health systems rather than bypass them.