As artificial intelligence (AI) becomes increasingly embedded in processes and systems across healthcare organizations, clinicians and administrators face an intensifying challenge to adapt their organizations to the rapidly evolving AI environment. In the absence of governmental authority or other requisites to guide their AI oversight, healthcare organizations are left to develop their AI policies and procedures while guessing what future laws, regulations, or accreditation standards may require.
In September 2025, the Joint Commission and the Coalition for Health AI (CHAI) released a comprehensive, high-level guidance document titled "The Responsible Use of AI in Healthcare." As an effort to bring consistency and accountability to AI practices in healthcare, the new guidance provides a model for responsible review and deployment of AI while broader legal and regulatory frameworks continue to emerge. By adopting the approach outlined in the seven elements set out in the guidance, healthcare clinicians and administrators have an opportunity to take proactive action now, while positioning their organizations for future compliance as the legal landscape of AI healthcare takes shape.
AI Policies and Governance Structures
Healthcare organizations should begin their process by establishing clear policies and oversight structures to guide the safe and effective use of AI. The guidance recommends defining who is responsible for evaluating, approving, and monitoring all AI tools within the organization — whether those tools support clinical decisions, scheduling, billing, or other operations. To carry out this oversight, organizations should form a cross-functional governance team with expertise in areas like compliance, IT, clinical operations, and data privacy. This team should regularly update leadership on how AI is used and how well it is performing.
Do Now: Appoint an AI governance committee or identify existing committees or leaders who can take ownership of AI oversight and outline initial responsibilities. The AI governance committee should include leaders from compliance, IT, clinical operations, and data privacy. Charge the committee with developing clear policies and oversight processes for evaluating, approving, and monitoring all AI tools.
Plan Ahead: Establish a standing schedule of committee meetings and deliverables covering the following:
- Consideration of approval for any new AI tools proposed for deployment in the organization.
- Review of new AI guidance, state and federal regulations, licensing organization requirements, and accreditation standards.
- Development and adoption of policies guiding the use of AI within the organization.
- Establishment of a reporting mechanism to organization leadership.
The committee should update AI policies annually, or more frequently as needed, to stay aligned with evolving standards and best practices.
Patient Privacy and Transparency
Protecting patient information and maintaining patient trust are essential. The guidance recommends that organizations develop clear policies around data access, use, and protection, and communicate transparently with patients and staff about how AI is used in clinical decision-making and data handling. Educating both staff and patients about how these tools work and how patient information is protected can help build understanding and confidence in AI-enabled care.
Do Now: Create or update your organizational policies addressing patient transparency around AI use, and revise patient consent forms to reflect when and how AI tools may influence care decisions. Consider updating the organization’s notice of privacy practices to address the use of AI in the provision of care or in the organization’s healthcare operations. Ensure that workforce members are prepared to answer patient questions about the use of AI technologies in their care, or know where to direct patients for those answers.
Plan Ahead: Monitor and prepare to comply with laws and accreditation standards directly impacting patient privacy and transparency as they are adopted. One such law is already on the books. Texas House Bill 149 (TRAIGA), which will be effective January 1, 2026, requires clear and conspicuous disclosure to patients when AI is used in connection with healthcare services, even if that use is apparent at the time that care is rendered.
Data Security and Data-Use Protections
Protecting patient data is a cornerstone of responsible AI use. The guidance reminds health organizations that any use of patient data for AI must comply with the Health Insurance Portability and Accountability Act (HIPAA) and other privacy laws. At a minimum, organizations should ensure that the adoption of AI technologies does not impede the ability to encrypt data in transit and at rest, limit access to authorized users, conduct regular security assessments, and maintain a comprehensive incident response plan.
The guidance also recommends updating business associate and data use agreements with third-party vendors utilizing AI technologies that access patient healthcare information to clearly define how data can be used in the deployment of those technologies, minimize what is shared, prohibit re-identification, require vendors to comply with the organization’s privacy standards regarding the use of AI, and reserve audit rights. These steps help safeguard patient information and reduce the risk of costly breaches.
Do Now: Evaluate existing Business Associate Agreements (BAAs), data use agreements, and other third-party contracts to ensure they address AI-related data use, privacy, and security concerns. Require AI committee review or approval guidelines for all new BAAs and data use agreements going forward. Develop and publish organization-wide policies alerting workforce members to limitations or prohibitions on the use of non-approved AI tools, including a prohibition on entering any patient information into such tools.
Plan Ahead: Revise annual HIPAA training materials and modules so that the workforce understands how the organization’s AI deployments access protected health information to support clinical decision-making and patient engagement, and how workforce members can help ensure those deployments preserve HIPAA protections.
Ongoing Quality Monitoring
AI systems are dynamic and can change over time, which makes continuous oversight essential. The guidance recommends that organizations establish processes for continuous evaluation of AI performance and safety. This includes conducting regular performance testing, documenting results, creating clear channels for reporting issues or adverse events, and ensuring that workforce members understand how and when to report. Ongoing monitoring helps ensure that AI tools remain accurate, reliable, and aligned with patient-safety standards.
Do Now: Identify your current process for validating technology tools and extend it to cover all AI applications, including those used by vendors.
Plan Ahead: Formalize a program of continuous clinical validation and quality assurance by (i) incorporating "human in the loop" review and audits of AI treatment recommendations, (ii) benchmarking AI treatment plans against clinical consensus and existing treatment protocols, and (iii) verifying that AI tools perform accurately, and without bias, across patient populations. Incorporate mandatory participation in these organizational programs into vendor contracts and relationships.
Voluntary, Blinded Reporting of AI Safety Events
Sharing safety-related experiences with AI can drive industry-wide learning without increasing regulatory burden. The guidance encourages healthcare organizations to participate in confidential, blinded reporting of AI-related safety events. This approach allows providers to report errors or near misses without disclosing patient data or identifying their organization. By sharing lessons learned, the industry can improve safety standards and stay ahead of potential risks while avoiding additional layers of regulation.
Do Now: Ensure that AI-related incidents are included in existing internal event-reporting systems. Identify de-identified data to share in support of industry-wide learning as those programs become available.
Plan Ahead: Participate in external, blinded reporting programs (such as through a patient safety organization or CHAI registry) to contribute to sector-wide learning as those reporting programs are developed and come online. A leader in this area is Assess-AI, developed by the American College of Radiology.
Risk and Bias Assessment
AI tools can unintentionally introduce or amplify bias if they are built or tested on limited data, potentially affecting both patient safety and equity. The guidance encourages organizations to ask vendors how their models were trained, validated, and tested and to evaluate those tools against their own patient populations. Ongoing review helps identify gaps or uneven performance and ensures that AI supports fair, consistent care across all patient groups.
Do Now: Incorporate bias and risk evaluation into your vendor review process and templates. Require vendors to provide detailed information on model training, validation, testing methods, and known limitations as part of contract due diligence.
Plan Ahead: Provide ongoing education for clinicians and staff on monitoring for, recognizing, and mitigating algorithmic bias. Include bias-awareness training as part of your organization’s AI governance and compliance programs for all workforce members.
Education and Training
A well-informed workforce is essential to the safe, effective, and responsible use of AI in healthcare. The guidance underscores the need for clinicians and non-clinicians to receive training on the use of AI tools within the organization, basic education on AI principles and ethics, use-case-specific instruction (based on job description and AI tools made available to the team member), and role-based training. Building AI literacy across the workforce supports safer decision-making, fosters collaboration, and promotes smoother integration of AI tools into clinical and administrative workflows.
Do Now: Develop an AI training program across the organization. Provide training on current AI tools in use, ensuring clinicians and non-clinicians understand each tool’s purpose, limitations, and appropriate use within their roles.
Plan Ahead: Implement annual AI training and refresher programs to reinforce responsible use, update staff on new technologies, and maintain organization-wide AI literacy.
Final Takeaways for the Healthcare Industry
As AI adoption continues to expand across the healthcare industry, organizations must balance innovation and opportunity with patient safety, ethical responsibility, and operational readiness. While formal regulation is still emerging, the Joint Commission and CHAI’s new guidance offers a practical framework for immediate action and long-term preparedness.
By implementing governance, monitoring, and training measures now, and planning for future expectations around transparency, data protection, and bias mitigation, organizations can manage risk while adapting in real time to the explosive growth of AI within the healthcare industry. Healthcare organizations that take proactive steps today will be best positioned to lead responsibly, ensuring that innovation enhances both quality of care and public trust.
For more information, please contact us or your regular Parker Poe contact.