Mental Health AI Regulatory Tracker

Comprehensive tracking of federal, state, clinical, and international regulatory developments affecting mental health AI systems. Updated continuously as new regulations emerge.

Last Updated: November 2025

November 2025
Clinical Standards | High Impact

APA Issues Health Advisory Warning Against AI Chatbots and Wellness Apps for Mental Health

American Psychological Association releases a health advisory stating that AI chatbots and wellness applications lack the scientific evidence and regulatory oversight needed to ensure user safety. Warns that these technologies often do not include adequate safety protocols, have not received regulatory approval, and that even well-developed generative AI tools lack evidence of effectiveness or safety. Calls for randomized clinical trials, longitudinal outcome studies, and federal regulatory action, and warns against using these tools as substitutes for qualified mental health professionals.

Major professional organization declares current AI mental health tools unvalidated and unsafe

November 2025
Federal | High Impact

FDA Advisory Committee Recommends Stricter Approval Standards for Generative AI Mental Health Devices

FDA's Digital Health Advisory Committee issued formal recommendations that all generative AI-enabled mental health devices go through De Novo classification or premarket approval (PMA), explicitly rejecting the 510(k) substantial equivalence pathway. The committee also recommended randomized controlled trials to validate safety and effectiveness, and human oversight for all AI therapy applications.

Fundamentally changes approval pathway for generative AI mental health tools

October 2025
State | High Impact

California Enacts First-in-Nation AI Companion Chatbot Safeguards (SB 243)

Governor Newsom signs SB 243 requiring companion chatbot operators to implement critical safeguards, including protocols for addressing suicidal ideation and self-harm, protections against exposing minors to sexual content, and clear notifications that interactions are with an AI. Operators must provide crisis service referrals and submit annual reports on connections between chatbot use and suicidal ideation. Creates a private right of action for injuries resulting from noncompliance. Effective January 1, 2026.

First state law mandating suicide prevention protocols for AI chatbots
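To make the operational requirements concrete, here is a minimal sketch of how a chatbot operator might layer an AI disclosure and a crisis referral onto outgoing replies. The keyword list, function names, and referral wording are illustrative assumptions, not statutory language, and keyword matching is no substitute for clinically validated risk detection or legal review.

```python
# Minimal sketch of SB 243-style runtime safeguards for a companion chatbot.
# Names, keyword list, and referral wording are illustrative assumptions only.

SELF_HARM_TERMS = {"suicide", "kill myself", "end my life", "self-harm", "hurt myself"}

AI_DISCLOSURE = "You are chatting with an AI, not a human or a licensed professional."
CRISIS_REFERRAL = (
    "If you are thinking about harming yourself, please call or text 988 "
    "(Suicide & Crisis Lifeline) or contact local emergency services."
)

def flags_self_harm(message: str) -> bool:
    """Naive keyword screen; a real system would use clinically validated detection."""
    text = message.lower()
    return any(term in text for term in SELF_HARM_TERMS)

def safeguarded_reply(user_message: str, model_reply: str) -> str:
    """Prepend the AI disclosure and, when triggered, a crisis referral to the reply."""
    parts = [AI_DISCLOSURE]
    if flags_self_harm(user_message):
        parts.append(CRISIS_REFERRAL)
    parts.append(model_reply)
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(safeguarded_reply("I want to end my life", "I'm really sorry you're feeling this way."))
```

A production system would also need the logging behind SB 243's annual reporting requirement; this sketch covers only the user-facing notification and referral steps.
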
October 2025
State

California Bans AI from Misrepresenting Healthcare Credentials (AB 489)

California AB 489, signed alongside SB 243, prohibits developers and deployers of AI tools from indicating or implying that the AI possesses a license or certificate to practice a healthcare profession. Additionally bans advertisements suggesting that AI-provided care comes from a licensed or certified human healthcare professional. Establishes consumer protection standards for transparency about AI use in healthcare settings. Effective January 1, 2026.
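As a rough illustration of the kind of review AB 489 encourages, the sketch below scans user-facing copy for phrases that could imply a healthcare license. The pattern list and function name are assumptions for illustration; this is not a legal test, and actual compliance review would involve counsel.

```python
# Rough lint over user-facing marketing copy for phrases that could imply a
# healthcare license, in the spirit of AB 489. Patterns are illustrative only.
import re

CREDENTIAL_PATTERNS = [
    r"\blicensed (therapist|psychologist|counselor|psychiatrist)\b",
    r"\bboard[- ]certified\b",
    r"\byour (therapist|psychologist|psychiatrist)\b",
]

def flag_credential_claims(copy_text: str) -> list[str]:
    """Return every phrase in the copy that matches a credential-implying pattern."""
    hits: list[str] = []
    for pattern in CREDENTIAL_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, copy_text, flags=re.IGNORECASE))
    return hits

print(flag_credential_claims("Chat any time with a licensed therapist, powered by AI."))
# ['licensed therapist']
```
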

September 2025
Clinical Standards

Joint Commission and Coalition for Health AI Release First-of-Its-Kind Guidance on Responsible AI Use in Healthcare

Joint Commission (TJC), in collaboration with the Coalition for Health AI (CHAI), released its Guidance on the Responsible Use of Artificial Intelligence in Healthcare (RUAIH). This marks the first formal framework from a U.S. accrediting body aimed at helping healthcare organizations safely, effectively, and ethically integrate AI technologies into clinical and operational practice.

September 2025
State | High Impact

California Enacts First-in-Nation Frontier AI Regulation (SB 53)

Governor Newsom signs SB 53, the Transparency in Frontier Artificial Intelligence Act, establishing oversight and accountability requirements for developers of advanced AI models trained with more than 10^26 floating-point operations. Requires public disclosure of safety standards, establishes formal safety incident reporting mechanisms, protects whistleblowers raising AI safety concerns, and mandates annual legislative updates. Affects mental health AI systems built on frontier models like GPT-4 or Claude. California becomes first state to directly regulate frontier foundation model developers. Effective January 1, 2026.

First state regulation of frontier AI models used in mental health applications
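For a sense of scale, the sketch below applies the common rule of thumb that dense transformer training costs roughly 6 × parameters × training tokens in floating-point operations, and compares the estimate to SB 53's 10^26 threshold. The approximation and the example figures are assumptions for illustration; the statute's own definitions govern which models are covered.

```python
# Back-of-the-envelope check against SB 53's 10^26 floating-point-operation
# threshold, using the common ~6 * parameters * training-tokens approximation
# for dense transformer training compute. The statute's counting rules govern.

SB53_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens

def exceeds_sb53_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) > SB53_THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 20 trillion tokens
# comes out to roughly 1.2e26 FLOPs, above the threshold.
print(exceeds_sb53_threshold(1e12, 2e13))  # True
```
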
August 2025
State | High Impact

Illinois Enacts First-in-Nation Ban on AI-Only Mental Health Therapy

Illinois HB 1806 (Wellness and Oversight for Psychological Resources Act) prohibits AI systems from independently performing therapy, counseling, or psychotherapy without direct oversight by a licensed mental health professional. The law, which passed unanimously and was signed by Gov. Pritzker, represents the first state ban on autonomous AI therapy and sets a precedent for other states considering similar restrictions.

Bans autonomous AI therapy; requires licensed professional oversight
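One way a product team might operationalize the oversight requirement is to hold AI-drafted therapeutic content until a licensed clinician signs off, as in the minimal sketch below. The class and field names are hypothetical, and whether any given workflow satisfies HB 1806 is a legal question, not a code-level one.

```python
# Minimal sketch of a human-oversight gate in the spirit of Illinois HB 1806:
# AI-drafted therapeutic content is held until a licensed clinician approves it.
# Class and field names are illustrative, not terms defined by the statute.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftTherapyContent:
    text: str
    reviewed_by: Optional[str] = None  # license identifier of the approving clinician
    approved: bool = False

def approve(draft: DraftTherapyContent, clinician_license_id: str) -> DraftTherapyContent:
    """Record a licensed professional's sign-off on AI-drafted content."""
    draft.reviewed_by = clinician_license_id
    draft.approved = True
    return draft

def deliver(draft: DraftTherapyContent) -> str:
    """Refuse to deliver content that has not been reviewed by a licensed professional."""
    if not draft.approved:
        raise PermissionError("AI-drafted therapy content requires licensed review before delivery.")
    return draft.text
```
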
June 2025
State

Nevada Regulates AI Chatbots in Mental Healthcare Settings

Nevada AB 406, signed by Gov. Lombardo, establishes disclosure requirements and regulatory oversight for AI chatbot use in mental and behavioral healthcare contexts. The law requires clear notification to users when interacting with AI systems and mandates data privacy protections specific to mental health applications.

May 2025
State

Utah Establishes Disclosure Requirements for Mental Health AI Chatbots

Utah HB 452, signed by Gov. Cox and effective May 7, 2025, requires suppliers of AI mental health chatbots to provide clear disclosures about AI capabilities and limitations. The law establishes consumer protection standards and requires transparency about data usage and algorithmic decision-making in mental health contexts.
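A supplier might represent such a disclosure as a single structured payload surfaced before the first exchange, along the lines of the hypothetical sketch below. The keys and wording are illustrative assumptions, not language from HB 452.

```python
# Hypothetical disclosure payload in the spirit of Utah HB 452: a plain-language
# statement of what the chatbot can and cannot do and how data is handled.
# Keys and wording are illustrative assumptions, not statutory text.
MENTAL_HEALTH_CHATBOT_DISCLOSURE = {
    "is_ai": True,
    "capabilities": [
        "General wellness conversation and psychoeducation",
    ],
    "limitations": [
        "Provide care from a licensed mental health professional",
        "Diagnose conditions or respond to emergencies",
    ],
    "data_usage": "Conversations may be stored and reviewed to improve the service; see the privacy policy.",
    "crisis_resources": "In the U.S., call or text 988 to reach the Suicide & Crisis Lifeline.",
}

def render_disclosure(disclosure: dict) -> str:
    """Format the payload as the notice shown before the first message."""
    lines = ["This service is an AI chatbot." if disclosure["is_ai"] else ""]
    lines += [f"It can: {item}" for item in disclosure["capabilities"]]
    lines += [f"It cannot: {item}" for item in disclosure["limitations"]]
    lines += [disclosure["data_usage"], disclosure["crisis_resources"]]
    return "\n".join(line for line in lines if line)
```
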

January 2025
Federal

FDA Issues Draft Guidance on Lifecycle Management of AI-Based Medical Device Software

FDA released comprehensive draft guidance outlining expectations for transparency, clinical validation, algorithm updates, and post-market monitoring of AI-enabled medical devices. The guidance applies to mental health AI systems classified as medical devices and emphasizes continuous monitoring requirements throughout the product lifecycle.
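As an illustration of the record-keeping such lifecycle expectations imply, the sketch below tracks model versions, their validation evidence, and post-market adverse events in one registry. Field names and structure are assumptions for illustration, not fields required by the draft guidance.

```python
# Illustrative record layout for lifecycle documentation of an AI-enabled device,
# loosely following the draft guidance's themes of versioning, validation evidence,
# and post-market monitoring. Field names are assumptions, not FDA-required fields.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AdverseEvent:
    occurred_on: date
    description: str
    reported: bool = False  # whether the event has been escalated/reported

@dataclass
class ModelVersionRecord:
    version: str
    released_on: date
    validation_study_ref: str  # e.g. an internal study ID or publication reference
    change_summary: str
    adverse_events: list[AdverseEvent] = field(default_factory=list)

registry: dict[str, ModelVersionRecord] = {}

def register_version(record: ModelVersionRecord) -> None:
    """Add a new model version and its validation evidence to the registry."""
    registry[record.version] = record

def log_adverse_event(version: str, event: AdverseEvent) -> None:
    """Attach a post-market adverse event to the version that produced it."""
    registry[version].adverse_events.append(event)
```
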

December 2024
Federal | High Impact

FDA Issues Draft Guidance on Clinical Decision Support Software

FDA clarifies which clinical decision support (CDS) software functions are considered medical devices requiring premarket review. Mental health AI systems making diagnostic or treatment recommendations fall under increased scrutiny, particularly those assessing suicide risk or recommending involuntary commitment.

May require premarket submission for mental health AI systems

July 2024
Federal

CMS Announces Reimbursement Rules for Digital Mental Health Treatment

Centers for Medicare & Medicaid Services establishes billing codes for AI-assisted mental health screening but requires documentation of clinical oversight, validation studies, and adverse event reporting. Telehealth mental health AI must meet the same standards as in-person care.

June 2024
International

EU AI Act Classifies Mental Health AI as "High-Risk"

European Union's AI Act officially designates mental health AI systems—particularly those used for diagnosis, treatment planning, or crisis assessment—as high-risk applications requiring conformity assessment, transparency requirements, and human oversight. Enforcement begins in 2026.

Need Help Navigating Regulatory Compliance?

Our evaluation frameworks help mental health AI systems meet evolving regulatory requirements across federal, state, and clinical standards.

Request Risk Assessment