Agentic AI in Healthcare: Navigating Regulatory Uncertainty and Building Governance That Lasts

AHLA’s Speaking of Health Law | Sponsored by Clearwater

Artificial intelligence in healthcare has crossed a threshold. The question for CISOs, compliance officers, and health system counsel is no longer whether agentic AI will enter clinical and administrative workflows — it already has. The real question is whether your organization is prepared for what comes next: a rapidly shifting patchwork of state regulations, an unsettled federal preemption landscape, and civil liability exposure that is only beginning to materialize.

In this episode of AHLA Speaking of Health Law, sponsored by Clearwater, Andrew Mahler — Clearwater’s Vice President of Privacy and Compliance Services — speaks with David Peloquin, Partner at Ropes & Gray LLP and co-author of “Paging Doctor Algorithm: Navigating the Regulatory Landscape for Agentic AI in the Healthcare Industry” in AHLA Health Law Weekly. Together they deliver one of the most practical, expert-level conversations on healthcare AI governance available today.

What This Episode Covers

Mahler and Peloquin move beyond the headlines and into the legal and operational realities healthcare leaders are facing right now:

  • How agentic AI differs from earlier AI tools — and why that distinction fundamentally changes accountability and liability
  • Significant use cases across administrative workflows (scheduling, billing, prior authorization) and clinical settings (medical imaging, coding, adverse event monitoring)
  • The state-by-state regulatory landscape — from Colorado’s risk-based AI Act to California’s disclosure mandates, Texas Medical Board authority, and new laws in Illinois, Nevada, Utah, and New York
  • Federal preemption: what’s actually realistic — including why a 99-to-1 Senate vote signals a durable state-level patchwork, not an imminent national standard
  • Where enforcement is heading — Texas AG settlements, California’s 2025 AI liability advisory, and the expanding role of state medical boards and civil litigation
  • The core elements of a flexible AI governance framework built to adapt as laws evolve and enforcement accelerates

Why Healthcare CISOs and Compliance Leaders Need to Act Now

Agentic AI systems don’t simply respond to prompts; they pursue goals, make sequential decisions, and take independent actions across connected systems with limited human intervention. That autonomy is what makes them powerful — and precisely what makes them legally complex.

When an agentic system autonomously compares payer requirements, prompts for missing documentation, and submits a prior authorization without a human driving each step, the question of who is accountable when something goes wrong becomes genuinely difficult to answer under existing law.

State legislatures are not waiting for that question to be resolved. Colorado, California, Texas, Illinois, Nevada, Utah, and New York have all enacted or proposed AI-specific healthcare laws — each with different jurisdictional thresholds, obligations, and enforcement mechanisms. For health systems operating across multiple states, this is a real and growing compliance challenge. Organizations that are waiting for federal preemption to solve the problem are likely to find themselves exposed.

“Organizations waiting to build compliance infrastructure until preemption solves the problem are likely to find themselves dissatisfied with that approach.”

— David Peloquin, Partner, Ropes & Gray LLP

About the Speakers

David Peloquin is a Partner in Ropes & Gray’s healthcare practice in Boston. He advises academic medical centers, health systems, pharmaceutical and medical device manufacturers, digital health companies, and research institutions on matters related to data privacy, clinical research, digital health, and AI governance. He recently co-authored “Paging Doctor Algorithm: Navigating the Regulatory Landscape for Agentic AI in the Healthcare Industry” in AHLA Health Law Weekly.

Andrew Mahler is Vice President of Privacy and Compliance Services at Clearwater. He works with healthcare organizations across the country to design and mature risk-based compliance programs — including governance frameworks that address the full complexity of today’s AI regulatory environment.

Clearwater Can Help You Build an AI Governance Framework

Clearwater helps healthcare organizations translate complex, shifting AI regulations into practical, defensible compliance programs. Whether you’re evaluating your first agentic AI deployment, auditing an existing governance structure, or trying to make sense of a growing web of state laws, our team brings the regulatory fluency, technical depth, and healthcare-specific expertise to help you move forward with confidence.

👉 Contact Us to Get Started

About Clearwater

Clearwater is the leading provider of cybersecurity and compliance solutions for the healthcare industry, helping organizations align privacy, security, and business objectives to achieve resilience and trust.
