What Is Responsible AI in Healthcare? A Systematic Review of Key Principles, Legal Guardrails, and Systems-Thinking Framework
Journal
SSRN
ISSN
1556-5068
Date Issued
2026
Author(s)
Wu, Alice Jo Wei
Li, Vincent Cheng Sheng
Ilyuk, Andriy
Lin, Yen-Chun
Chen, Yu-Chun
Chen, Hsiao-Hui
Niu, Kuang-Yu
Forde, Brooke
Lee, Yi-Chia
Oldenburg, Brian
Atun, Rifat
Abstract
Background
Responsible Artificial Intelligence (Responsible AI) has emerged as a central theme in the global discourse on ethical and trustworthy AI in healthcare. However, there is no universally agreed definition of what constitutes 'Responsible AI', the set of underlying principles that should guide its development, or the approaches used in its assessment. There is also a lack of clarity about how principles related to Responsible AI are interpreted, operationalised, and governed across health systems. This study synthesises the global literature on the policy frameworks and legal guardrails used to frame the conceptual foundations and practical application of Responsible AI in healthcare. It proposes a systems-thinking framework that links AI ethics, governance, and the adoption of Responsible AI as a new innovation.
Methods
We conducted a systematic review registered in PROSPERO (CRD420251238814). Seven databases were searched for English-language publications from Jan 1, 2010 to Oct 30, 2025 addressing the ethics, governance, or policy of AI in healthcare. We extracted and coded principles, governance mechanisms, and implementation tools. Text-mining identified recurring descriptors and conceptual patterns, and mapping of legal guardrails traced how each principle is expressed in statutory and regulatory texts in the EU, the USA, and Taiwan. We used a systems-thinking framework that conceptualises the emergence of Responsible AI from interactions between AI innovation, adoption processes, institutional governance, and societal context.
Findings
Of 1,828 records identified, 24 met the inclusion criteria. Textual synthesis identified fourteen recurrent Responsible AI principles, of which transparency and explainability were the most consistently cited (>90% of publications), followed by fairness and equity, accountability, and safety/robustness (70-90%). Mapping of the guiding legal principles revealed ethical commitments expressed as privacy rights, risk-classification duties, post-market monitoring requirements, and compliance obligations across major legal regimes. However, few publications linked ethical principles to institutional or health-system contexts.
Interpretation
This study consolidates the fragmented landscape of Responsible AI in healthcare, identifying common principles and shared meanings, and reframes them within a systems-thinking perspective that integrates innovation, adoption, and regulation. The findings demonstrate that Responsible AI is best understood as a system-level construct rather than a set of isolated ethical rules. Policymakers and health-system leaders are therefore urged to move beyond declarative principles toward adaptive governance frameworks that embed continuous learning, accountability, and equity to ensure AI delivers sustainable public benefit.
Subjects
Responsible Artificial Intelligence
Trustworthy AI
Ethical AI
AI Governance
Health Policy
Publisher
Elsevier BV
Type
preprint
