AI in Clinical Medicine, ISSN 2819-7437 online, Open Access
Article copyright, the authors; Journal compilation copyright, AI Clin Med and Elmer Press Inc
Journal website https://aicm.elmerpub.com

Original Article

Volume 2, April 2026, e20


Clinically Aligned AI Governance: Integrating Ethics, Risk, and Regulation in Healthcare

Figure

Figure 1. Clinically aligned AI governance: three-layer governance architecture. The architecture comprises three layers mapped to the organizational Three Lines model: Layer 1 (value articulation and clinical purpose; owned by executive and clinical leadership, first line of defense), Layer 2 (risk and control integration; anchored in patient safety, quality, and research governance functions, second line), and Layer 3 (accountability and assurance; led by internal audit, regulators, and independent reviewers, third line). Bidirectional arrows indicate feedback and escalation pathways connecting all three layers. Each layer bridges directly to an existing clinical governance structure, embedding AI oversight within, rather than parallel to, established systems of clinical care and accountability.

Tables

Table 1. Cross-Sector AI Governance Frameworks

Summary of the most widely referenced cross-sector AI governance frameworks, including their type, core focus, strengths, and limitations in the context of healthcare applications.

Framework | Type | Core focus | Strengths | Limitations
OECD AI Principles (2019, rev. 2024) [18] | Global policy principles | Human-centered values, transparency, robustness, accountability | Widely adopted; foundation for national strategies | High-level; voluntary
Montréal Declaration (2018) [19] | Ethical charter | Societal values and public engagement | Normative legitimacy | No enforcement
IEEE 7000 Series (2021) [20] | Technical standards | Ethics-by-design | Operationalizes values | Not legally binding
AIGA Framework (2022) [21] | Governance and auditing model | Linking principles to controls | Bridges legal, ethical, and technical domains | Requires mature management systems
ISO/IEC 42001:2023 [22] | Management system standard | Enterprise AI governance | Certifiable; institutional accountability | Risk of formalistic compliance
NIST AI RMF (2023) [23] | Risk management framework | Lifecycle risk governance | Flexible and iterative | Voluntary
EU AI Act (2024) [24] | Binding regulation | Risk-based compliance | Strong enforcement | High implementation complexity

Table 2. Functional Roles of AI Governance Frameworks in Healthcare Settings

Classification of AI governance frameworks by their primary functional role: ethical/normative frameworks that establish legitimacy and values; risk and organizational methods that translate values into controls; and management and legal instruments that institutionalize accountability and enforcement.

Framework category | Representative frameworks | Primary function | What it does not do
Ethical/normative | OECD AI Principles; Montréal Declaration; WHO AI Ethics | Establish legitimacy, values, and expectations | Define clinical safety thresholds or enforce practice
Risk and organizational methods | IEEE 7000 Series; NIST AI RMF; AIGA Framework | Translate values into risk identification, controls, and documentation | Substitute for clinical governance
Management and legal instruments | ISO/IEC 42001; EU AI Act; MDR/IVDR | Institutionalize roles, auditability, and enforcement | Govern bedside clinical decisions

Table 3. Risk and Control Standards by Clinical Use Case

Mapping of primary AI-related risks, applicable governance standards, and governance focus areas across five key clinical AI deployment contexts: AI as medical device, clinical decision support, clinical research AI, molecular/genomic AI, and adaptive AI systems. Letters (a–g) refer to the footnotes below describing the relevant international standards.

Clinical use case | Primary risks | Key standards | Governance focus
AI as medical device | Patient harm, drift | ISO 14971 (a); IEC 62304 (b); GMLP | Safety, traceability
Clinical decision support | Over-reliance, opacity | ISO/IEC 82304-1 (c); ISO 9241 (d); NIST AI RMF | Human oversight
Clinical research AI | Bias, invalid inference | Declaration of Helsinki (e); ICH-GCP (f); OECD Health Data | Research integrity
Molecular/genomic AI | Reproducibility, misuse | Domain bioinformatics standards; NIST AI RMF | Scientific validity
Adaptive AI systems | Drift, inequity | GMLP; PCCPs (g) | Continuous oversight and learning

(a) ISO 14971: Standard for risk management of medical devices across the lifecycle.
(b) IEC 62304: Standard for safe lifecycle management of medical device software.
(c) ISO/IEC 82304-1: Requirements for safety and quality of health software products.
(d) ISO 9241: Standards on usability and human-centred design of interactive systems.
(e) Declaration of Helsinki: Ethical principles for medical research involving human participants.
(f) ICH-GCP: International standard for ethical and scientific conduct of clinical trials.
(g) PCCPs: Regulatory approach for controlled post-deployment changes to machine learning medical devices.