On September 17, 2025, the Joint Commission, in collaboration with the Coalition for Health AI (“CHAI”), issued its first high-level framework on the responsible use of artificial intelligence (“AI”) in healthcare. The Guidance on the Responsible Use of AI in Healthcare (“Guidance”) is intended to help hospitals and health systems responsibly deploy, govern, and monitor AI tools across their organizations. The goal of the Guidance is to help “…the industry align elements that enhance patient safety by reducing risks associated with AI error and improving administrative, operational, and patient outcomes by leveraging AI’s potential.”
The Guidance applies broadly to “health AI tools” (“AI tools”), which are defined as:
…clinical, administrative, and operational solutions that apply algorithmic methods (predictive, generative, combined methods) to a suite of tasks that are part of direct or indirect patient care (e.g., decision support, diagnosis, treatment planning, imaging, laboratory, patient monitoring), care support services (e.g., clinical documentation, scheduling, care coordination/management, patient communication), and care-relevant healthcare operations and administrative services (e.g., revenue cycle management, coding, prior authorization, care quality management, etc.)
The Guidance is not intended to direct the development of AI tools, or to validate the effectiveness of AI tools themselves; rather, it provides broad direction to healthcare organizations on structures and processes for safe implementation and use. The Guidance is positioned as an initial, high-level standard that will be operationalized through forthcoming non-binding governance playbooks and a voluntary certification program. It sets forth seven core elements that healthcare organizations should address to manage the risks and realize the benefits of AI systems in the clinical, operational, and administrative spaces. Each element focuses on practical controls, accountability, and continuous learning. The seven core elements articulated by the Guidance are:
AI Policies and Governance Structures: The Guidance calls for the establishment of formal, risk-based governance to oversee the implementation and usage of AI across third-party, internally developed, and embedded tools. Governance should be staffed with individuals who have appropriate technical expertise, ideally in AI, and should include representatives from clinical, operational, IT, compliance, privacy/security, and safety/incident reporting functions, as well as representatives reflecting impacted populations. Policies should align with internal standards and external regulatory and ethical frameworks and be reviewed regularly. The governing body and/or fiduciary board should receive periodic updates on AI use, outcomes, and potential adverse events. The Guidance states that “…governance creates accountability which will help to drive the safe use of AI tools.”
Patient Privacy and Transparency: The Guidance calls for organizations to implement policies addressing data access, use, and protection, coupled with mechanisms to disclose AI use and to educate patients and families. When AI directly impacts care, patients should be notified, and when appropriate, consent should be obtained. Transparency and education should extend to staff as well, clarifying how AI tools function, their role in decision-making, and data handling practices. The Guidance aims to protect patient data and to preserve trust while enabling AI’s benefits, recognizing that AI often relies on sensitive, large-scale datasets.
Data Security and Data Use Protections: The Guidance emphasizes robust security controls and contractual guardrails. At a minimum, organizations should encrypt data in transit and at rest, enforce strict access controls, perform regular security assessments, and maintain an incident response plan. Data use agreements should define permitted uses, require data minimization, prohibit re-identification of de-identified datasets, impose third-party security obligations, and preserve the organization’s right to audit third-party vendors for compliance. HIPAA obligations apply whenever protected health information (“PHI”) is involved, so policies should be tailored to comply with HIPAA, particularly the HIPAA Privacy Rule. Even for properly de-identified data, organizations should maintain strong technical and contractual protections given re-identification risks and downstream use in model development and tuning.
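The Guidance stops short of prescribing particular technical controls, but the baseline expectation of encryption at rest can be made concrete. Below is a minimal sketch using the open-source Python cryptography package’s Fernet recipe; the sample record and local key generation are assumptions for illustration only, and a production system would source keys from a managed key-management service rather than generating them in application code.

```python
# Minimal sketch of encrypting a record at rest with the "cryptography"
# package's Fernet recipe (authenticated symmetric encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, fetched from a KMS/HSM
fernet = Fernet(key)

# Hypothetical record; real PHI would be minimized before storage.
record = b'{"mrn": "000000", "note": "example clinical note"}'
ciphertext = fernet.encrypt(record)  # only the ciphertext is persisted

# An access-controlled service decrypts on an authorized read.
assert fernet.decrypt(ciphertext) == record
```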
Ongoing Quality Monitoring: The Guidance highlights that AI performance can drift as data inputs or algorithms change, vendor updates roll out, or workflows evolve. It therefore urges pre-deployment validation and post-deployment monitoring that is risk-based and context-appropriate. During procurement, organizations should request validation evidence, understand bias evaluations, and, where possible, secure vendor support for tuning/validation on a sample representative of the deployment context. Comprehensive policies should identify the parties responsible for monitoring and evaluating AI tools. This monitoring and evaluation should include regular validation; evaluations of the quality and reliability of data and outputs; assessments of use-case-relevant outcomes; confirmation that the AI tools rely on current data; development of an AI dashboard; and a process for reporting adverse events and/or errors to the relevant parties. Monitoring responsibility should also be addressed as part of third-party procurement and contracting.
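To make the drift concern concrete, the following is a minimal, hypothetical sketch of the kind of post-deployment check that could feed the AI dashboard the Guidance describes: a weekly AUC comparison against a pre-deployment baseline. The baseline value, alert threshold, and weekly cadence are illustrative assumptions, not figures drawn from the Guidance.

```python
# Minimal sketch of post-deployment drift monitoring for a binary risk model.
import numpy as np
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.82  # hypothetical pre-deployment validation result
MAX_DROP = 0.05      # hypothetical tolerance before an alert fires

def weekly_drift_alert(y_true: np.ndarray, y_score: np.ndarray) -> bool:
    """Return True if this week's AUC warrants escalation."""
    return roc_auc_score(y_true, y_score) < BASELINE_AUC - MAX_DROP

# Example: simulated outcomes and scores for one week of cases.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = rng.random(500)                    # uninformative model, AUC ~ 0.5
print(weekly_drift_alert(y_true, y_score))   # True -> escalate and report
```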
Voluntary, Blinded Reporting of AI Safety-Related Events: The Guidance promotes the dissemination of knowledge across the industry to help healthcare providers stay informed about potential risks and best practices. To avoid imposing new regulatory burdens, the Guidance encourages confidential, blinded reporting of AI-related safety events to independent entities (such as Patient Safety Organizations). Organizations should capture AI-related near misses and harms (e.g., unsafe recommendations, major performance degradation after an update, biased outputs, etc.) within internal incident systems and share de-identified details through existing channels, both internally and externally, as appropriate. This approach enables pattern recognition and rapid, field-wide learning while protecting patient privacy.
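As a purely illustrative sketch of what a blinded event record might look like, the snippet below captures an AI safety event without patient identifiers and replaces the reporting facility with a salted hash so an independent recipient can aggregate reports without re-identifying the source. The field names and salting scheme are assumptions; the Guidance does not specify a reporting schema.

```python
# Minimal sketch of a blinded, de-identified AI safety-event record.
import hashlib
from dataclasses import dataclass, asdict

SALT = b"rotate-me"  # hypothetical program-level salt, managed separately

def blind(facility_id: str) -> str:
    """Replace a facility identifier with a salted, truncated hash."""
    return hashlib.sha256(SALT + facility_id.encode()).hexdigest()[:12]

@dataclass
class AISafetyEvent:
    facility_token: str  # blinded token, never the raw facility name
    tool_category: str   # e.g., "sepsis prediction"; no vendor build IDs
    event_type: str      # "near miss" or "harm"
    description: str     # free text, scrubbed of PHI before submission

event = AISafetyEvent(blind("Hospital-A"), "sepsis prediction", "near miss",
                      "Unsafe dosing suggestion caught at pharmacist review.")
print(asdict(event))
```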
Risk and Bias Assessment: The Guidance stipulates that organizations should proactively identify and address risks and biases in AI tools, both prospectively and through ongoing monitoring. They should seek vendor disclosures on known risks, limitations, and bias (including specifically how bias was evaluated). The Guidance states that healthcare organizations should determine whether the AI tools are fit for purpose, whether they have undergone appropriate bias detection assessment during development, whether the algorithms have been tested for the specific populations they serve (and ensure that they are tuned/tested on local data), and whether the AI tools will be audited and monitored to identify, mitigate, and/or manage biases when appropriate. The aim is to prevent safety errors, misdiagnoses, administrative burdens, and inequities that can arise when AI tools are applied outside their validated context.
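One common form of the subgroup testing the Guidance contemplates is comparing a model’s error rates across the populations it serves. The sketch below compares true-positive rates between two hypothetical groups and flags gaps above a tolerance; the choice of metric, the 0.10 tolerance, and the group labels are all illustrative assumptions.

```python
# Minimal sketch of a subgroup bias check on true-positive rates (TPR).
import numpy as np

def tpr(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """True-positive rate; NaN if the slice has no positive cases."""
    positives = y_true == 1
    return float((y_pred[positives] == 1).mean()) if positives.any() else float("nan")

def tpr_gap_alert(y_true, y_pred, groups, tolerance=0.10):
    """Per-group TPRs plus a flag when the max-min gap exceeds tolerance."""
    rates = {g: tpr(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap > tolerance

# Example with simulated predictions for two demographic groups.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
groups = rng.choice(["A", "B"], size=200)
print(tpr_gap_alert(y_true, y_pred, groups))
```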
Education and Training: The Guidance states that clinicians and staff members must receive training on AI tools to ensure their safe implementation and integration into clinical workflows. The Guidance recommends role-specific training for clinicians and staff on each AI tool’s intended use, limitations, and monitoring obligations, supported by accessible documentation. Broader AI literacy and change management initiatives should be considered to help create a shared vocabulary and understanding of AI principles, risks, and benefits. Organizations should determine when pre-implementation and periodic training are necessary for clinicians and staff.
The Guidance does not elaborate on the actual implementation of the broad elements discussed above; rather, it solicits feedback on the high-level guidance provided and requests further input from stakeholders in the development of “Responsible Use of AI Playbooks” (“Playbooks”). These Playbooks will serve as the practical resources to guide health systems toward aligning with the Guidance. Once the Playbooks have been developed, a voluntary Joint Commission Responsible Use of AI certification program will be developed based on the Playbooks. While not binding, the Guidance is likely to influence how AI tools are used across the healthcare industry and could serve as a model for future accreditation-related expectations for AI governance in the healthcare sector.
It is of paramount importance that healthcare organizations employing AI not only consider this Guidance, but also adapt it to their circumstances, understand the specific AI tools to be employed from both an operational and a technical perspective, and remain alert to the potential unanticipated consequences of those tools’ use. The organizations that most successfully employ AI tools will be those best able to recognize unanticipated consequences and recalibrate their AI tools accordingly.