AI Regulatory Tracker

Every regulation that can affect your liability exposure — tracked and explained. Federal, state, clinical, and international. Updated as the landscape shifts.

Last Updated: February 2026
December 2025
Federal · High Impact

Trump Executive Order Attempts to Preempt State AI Safety Laws

President Trump signs 'Ensuring a National Policy Framework for Artificial Intelligence,' directing creation of an AI Litigation Task Force to challenge state AI laws as unconstitutionally burdensome on interstate commerce. The Commerce Department must identify 'onerous' state AI laws within 90 days (by March 2026). Federal agencies may condition grants on states agreeing not to enforce conflicting AI laws. Children's safety laws are explicitly carved out from preemption—meaning California SB 243, Illinois HB 1806, and similar laws protecting minors likely survive. The order threatens to hollow out state-level chatbot liability frameworks for non-minor contexts.

Federal preemption push threatens state chatbot safety laws—children's safety laws are explicitly exempt
Source: White House
November 2025
Clinical Standards · High Impact

APA Issues Health Advisory Warning Against AI Chatbots and Wellness Apps

American Psychological Association releases a health advisory stating that AI chatbots and wellness applications lack the scientific evidence and regulatory oversight needed to ensure user safety: they often omit adequate safety protocols and have not received regulatory approval. The advisory emphasizes that even well-developed generative AI tools lack evidence of effectiveness or safety, calls for randomized clinical trials, longitudinal outcome studies, and federal regulatory action, and warns against using these tools as substitutes for qualified mental health professionals.

Major professional organization declares current AI chatbot tools unvalidated and unsafe
November 2025
Federal · High Impact

FDA Advisory Committee Recommends Stricter Approval Standards for Generative AI Chatbot Devices

FDA's Digital Health Advisory Committee issues formal recommendations that all generative AI-enabled chatbot devices require De Novo classification or premarket approval (PMA), explicitly rejecting the 510(k) substantial equivalence pathway. The committee recommends randomized controlled trials to validate safety and effectiveness, and mandatory human oversight for AI applications providing health-related guidance.

Fundamentally changes approval pathway for generative AI chatbot tools
October 2025
Federal · High Impact

GUARD Act Introduced to Extend Chatbot Protections to Minors Under 18

Senators Hawley and Blumenthal introduce the GUARD Act (Guidelines for User Age-verification and Responsible Dialogue), extending chatbot safety protections to minors under 18, beyond COPPA's current under-13 threshold. The bill requires AI companion platforms to implement age verification, parental controls, and safety monitoring features. It represents a bipartisan effort to address documented harms from AI chatbot interactions with teenagers.

Federal legislation extending chatbot protections beyond COPPA's under-13 threshold
Source: U.S. Congress
October 2025
State · High Impact

California Enacts AI Companion Chatbot Safeguards (SB 243)

Governor Newsom signs SB 243 requiring companion chatbot operators to implement critical safeguards including protocols for addressing suicidal ideation and self-harm, preventing exposure of minors to sexual content, and clear notifications that interactions are with AI. Operators must provide crisis service referrals and submit annual reports on connections between chatbot use and suicidal ideation. Creates private right of action for injuries from noncompliance. Effective January 1, 2026.

First state law mandating suicide prevention protocols for AI chatbots
October 2025
State

California Bans AI from Misrepresenting Healthcare Credentials (AB 489)

California AB 489, signed alongside SB 243, prohibits developers and deployers of AI tools from indicating or implying that the AI possesses a license or certificate to practice a healthcare profession. Additionally bans advertisements suggesting that AI-provided care comes from a licensed or certified human healthcare professional. Establishes consumer protection standards for transparency about AI use in healthcare settings. Effective January 1, 2026.

September 2025
Federal · High Impact

FTC Launches Investigation into AI Companion Chatbot Harms to Children

Federal Trade Commission issues formal orders to seven major AI companies, including Meta, OpenAI, Alphabet (Google), xAI, Snap, and Character Technologies, demanding disclosure of how they measure, test, and monitor negative impacts of companion chatbots on children and teens. The inquiry follows wrongful death lawsuits by parents of teenagers who died by suicide after interactions with AI companion chatbots. FTC seeks information on monetization practices, AI character development, personal data collection, and safety mitigation measures.

Federal consumer protection agency formally investigates AI companion chatbot harms to minors
September 2025
Clinical Standards

Joint Commission and Coalition for Health AI Release First-of-Its-Kind Guidance on Responsible AI Use in Healthcare

Joint Commission (TJC), in collaboration with the Coalition for Health AI (CHAI), releases its Guidance on the Responsible Use of Artificial Intelligence in Healthcare (RUAIH). This marks the first formal framework from a U.S. accrediting body aimed at helping healthcare organizations safely, effectively, and ethically integrate AI technologies into clinical and operational practice.

September 2025
State · High Impact

California Enacts First-in-Nation Frontier AI Regulation (SB 53)

Governor Newsom signs SB 53, the Transparency in Frontier Artificial Intelligence Act, establishing oversight and accountability requirements for developers of advanced AI models trained with more than 10^26 floating-point operations. Requires public disclosure of safety standards, establishes formal safety incident reporting mechanisms, protects whistleblowers raising AI safety concerns, and mandates annual legislative updates. Affects consumer AI systems built on frontier models like GPT-4 or Claude. California becomes first state to directly regulate frontier foundation model developers. Effective January 1, 2026.

First state regulation of frontier AI models powering consumer chatbots
August 2025
State · High Impact

Illinois Enacts First-in-Nation Ban on AI-Only Therapy

Illinois HB 1806 (Wellness and Oversight for Psychological Resources Act) prohibits AI systems from independently performing therapy, counseling, or psychotherapy without direct oversight by a licensed mental health professional. The law, which passed unanimously and was signed by Gov. Pritzker, represents the first state ban on autonomous AI therapy and sets a precedent for other states considering similar restrictions. Applicable to mental health-specific AI chatbots offering therapeutic services.

Bans autonomous AI therapy; requires licensed professional oversight
June 2025
State

Nevada Regulates AI Chatbots in Healthcare Settings

Nevada AB 406, signed by Gov. Lombardo, establishes disclosure requirements and regulatory oversight for AI chatbot use in healthcare and behavioral health contexts. The law requires clear notification to users when interacting with AI systems and mandates data privacy protections specific to consumer-facing chatbot applications.

June 2025
State

Maine Enacts Chatbot Disclosure Act

Maine passes the Chatbot Disclosure Act requiring businesses to clearly disclose when consumers are interacting with an AI chatbot rather than a human. The law mandates prominent disclosure at the start of any automated conversation and prohibits deceptive practices that obscure AI involvement. Effective September 24, 2025.

May 2025
State · High Impact

New York Enacts First-in-Nation AI Companion Safeguards

New York becomes the first state to enact comprehensive AI companion chatbot safeguards, requiring operators to implement safety measures including self-harm prevention protocols, age verification, and clear AI disclosure. The law mandates crisis resource integration and establishes reporting requirements for harmful interactions. Effective November 5, 2025.

First state AI companion law; precedes California SB 243
May 2025
State

Utah Establishes Disclosure Requirements for AI Chatbots

Utah HB 452, signed by Gov. Cox and effective May 7, 2025, requires suppliers of AI chatbots to provide clear disclosures about AI capabilities and limitations. The law establishes consumer protection standards and requires transparency about data usage and algorithmic decision-making. Although originally focused on mental health applications, its scope extends to consumer-facing chatbots generally.

January 2025
Federal

FDA Issues Draft Guidance on Lifecycle Management of AI-Based Medical Device Software

FDA releases comprehensive draft guidance outlining expectations for transparency, clinical validation, algorithm updates, and post-market monitoring of AI-enabled medical devices. The guidance applies to AI chatbot systems classified as medical devices and emphasizes continuous monitoring requirements throughout the product lifecycle.

January 2025
Federal

TAKE IT DOWN Act Requires Rapid Content Removal

Federal legislation requiring online platforms to remove non-consensual intimate imagery, including AI-generated deepfakes, within 48 hours of receiving a valid removal request. Enforcement mechanisms take effect in May 2026. While not AI-specific, the law applies to platforms hosting AI chatbot content that may generate or distribute such material.

Source: U.S. Congress
December 2024
Federal · High Impact

FDA Issues Draft Guidance on Clinical Decision Support Software

FDA clarifies which clinical decision support (CDS) software functions are considered medical devices requiring premarket review. AI chatbot systems making diagnostic or treatment recommendations fall under increased scrutiny, particularly those assessing health risks or recommending professional interventions.

May require premarket submission for AI chatbot systems providing health guidance
July 2024
Federal

CMS Announces Reimbursement Rules for Digital Health Treatment

Centers for Medicare & Medicaid Services establishes billing codes for AI-assisted health screening but requires documentation of clinical oversight, validation studies, and adverse event reporting. Telehealth AI must meet the same standards as in-person care. The rules are particularly relevant for mental health applications using AI chatbots.

June 2024
International

EU AI Act Classifies Consumer AI Chatbots as Potentially High-Risk

European Union's AI Act establishes risk-based classification for AI systems. Consumer-facing AI chatbots, particularly those used for health guidance, emotional support, or interactions with vulnerable populations, may be designated high-risk and require conformity assessment, transparency requirements, and human oversight. Enforcement begins in 2026.

2025
State

Colorado Postpones AI Law Implementation (SB 25B-004)

Via SB 25B-004, Colorado postpones implementation of its comprehensive AI consumer protection law from February 2026 to June 2026. The original law requires developers and deployers of high-risk AI systems to implement risk management programs, provide consumer disclosures, and conduct impact assessments. The delay provides additional time for compliance preparation.

Need Help Navigating Regulatory Compliance?

Our evaluation frameworks help consumer-facing AI systems meet evolving regulatory requirements across federal, state, and clinical standards.

Request Risk Assessment