# Regulatory Landscape for Caregiver AI
AI systems that interact with caregivers and people in emotional distress occupy a rapidly forming regulatory space. As of early 2026, eight US states have enacted or proposed legislation governing AI companions in health and wellness contexts, the EU AI Act is in force, and the FDA's General Wellness Framework defines the regulatory floor.
This page synthesizes 10 statutes and frameworks into a comparison matrix useful for grant applications, compliance planning, and partner conversations.
## The regulatory stack
The statutes fall into four functional categories:
- Scope restrictions — What AI may and may not do (WOPR Act IL, NV AB 406, FDA Wellness)
- Crisis detection requirements — What AI must detect and how it must respond (CA SB 243)
- Disclosure requirements — What AI must tell users and when (CA AB 3030, NY Article 47, ME 1500-DD, UT HB 452)
- Risk classification — How AI is categorized for regulatory purposes (CO SB24-205, EU AI Act)
## Comparison matrix
| Statute | Jurisdiction | Year | Category | Key requirement | What it means for GiveCare/Mira |
|---|---|---|---|---|---|
| WOPR Act (HB 1806) [1] | Illinois | 2025 | Scope | Bans AI therapeutic communication without licensed clinician review | Most restrictive. Mira must operate as peer support, never as therapy. Any output resembling a treatment plan requires human review. |
| CA SB 243 [2] | California | 2025 | Crisis detection | Mandates C-SSRS-aligned suicidal ideation detection | Mira's crisis classifier must align with C-SSRS severity levels. Sets the standard other states will likely follow. |
| CA AB 3030 [3] | California | 2023 | Disclosure | AI-generated health content must be disclosed | Mira must disclose AI identity in health-related communications. Earliest US precedent — set the template for later statutes. |
| NV AB 406 [4] | Nevada | 2025 | Scope | AI may not provide services constituting professional mental/behavioral healthcare | Draws a bright line. Mira may support and inform. Mira may not diagnose or treat. Aligns with FDA wellness framing. |
| NY Article 47 (GBS 1700) [5] | New York | 2025 | Disclosure | AI identity disclosure every 3 hours of interaction | Recurring disclosure, not just at onboarding. Mira must re-identify as AI during extended conversations. Prevents habituation. |
| ME 1500-DD [6] | Maine | 2025 | Disclosure | Prohibits misleading consumers into believing they communicate with a human | Deception-prevention framing, not just disclosure-timing. Mira must actively avoid creating false impressions of humanity. |
| UT HB 452 [7] | Utah | 2025 | Disclosure | Clear, unambiguous AI disclosure during interactions | Broad consumer protection. Applies beyond healthcare. Disclosure must be prominent, not buried. |
| CO SB24-205 [8] | Colorado | 2024 | Risk classification | Healthcare AI classified as high-risk, triggering enhanced compliance | First US state to adopt explicit risk classification. Mira is high-risk under this framework. Requires impact assessments and risk management. |
| EU AI Act [9] | European Union | 2024 | Risk classification + Scope | Prohibits exploiting vulnerabilities; classifies health AI as high-risk | International precedent US states reference. Vulnerability-exploitation prohibition is directly relevant to caregivers under stress. |
| FDA Wellness Framework [10] | Federal (US) | 2023 | Scope | Wellness tools not diagnosing/treating disease exempt from FDA device clearance | The regulatory floor. Mira operates in this safe harbor. Peer support and general wellness guidance are permitted without FDA clearance. |
## From most to least restrictive
Most restrictive: Illinois WOPR Act. Effectively requires a licensed clinician in the loop for any AI output that could be construed as therapeutic. This is the compliance ceiling — if GiveCare meets WOPR Act requirements, it satisfies every other US jurisdiction.
Moderately restrictive: California SB 243 (requires specific clinical-framework-aligned crisis detection), Colorado SB24-205 (triggers enhanced compliance obligations via risk classification), EU AI Act (prohibits vulnerability exploitation and requires conformity assessments).
Standard disclosure: New York Article 47, Maine 1500-DD, Utah HB 452, California AB 3030. These vary in mechanism (timing-based, deception-prevention, prominence-based) but converge on the same requirement: the user must know they are interacting with AI.
Scope boundaries: Nevada AB 406 and FDA Wellness Framework define what AI may and may not do. NV AB 406 draws the line at licensed clinical practice. The FDA framework draws it at diagnosis, treatment, cure, or prevention of disease.
Least restrictive: FDA Wellness Framework. It is the floor, not a ceiling. It permits peer support and wellness guidance without device clearance. Every other statute in this matrix adds requirements on top of this floor.
## Patterns across jurisdictions
### Disclosure is converging
Five of the ten frameworks include AI identity disclosure requirements. The trend is clear: disclosure is becoming table stakes. GiveCare's onboarding already identifies Mira as AI. The New York requirement for recurring disclosure every 3 hours means GiveCare must also re-identify during extended conversations.
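The three-hour cadence reduces to a simple timer. A minimal sketch, assuming a `DisclosureTracker` helper and interval constant that are illustrative, not GiveCare's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Re-issue the AI-identity disclosure whenever three or more hours of
# conversation have elapsed since the last one (the NY Article 47 cadence).
REDISCLOSURE_INTERVAL = timedelta(hours=3)

class DisclosureTracker:
    def __init__(self) -> None:
        self.last_disclosed: datetime | None = None

    def needs_disclosure(self, now: datetime) -> bool:
        # Disclose at onboarding (no prior disclosure) and again every 3 hours.
        if self.last_disclosed is None:
            return True
        return now - self.last_disclosed >= REDISCLOSURE_INTERVAL

    def mark_disclosed(self, now: datetime) -> None:
        self.last_disclosed = now

tracker = DisclosureTracker()
t0 = datetime(2026, 1, 1, 9, 0, tzinfo=timezone.utc)
assert tracker.needs_disclosure(t0)                         # onboarding
tracker.mark_disclosed(t0)
assert not tracker.needs_disclosure(t0 + timedelta(hours=2))
assert tracker.needs_disclosure(t0 + timedelta(hours=3))    # recurring
```

Tracking the timestamp of the last disclosure, rather than counting messages, keeps the mechanism robust across sporadic, long-running conversations.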
### Crisis detection is becoming statutory
California SB 243 links AI companion regulation directly to the C-SSRS clinical framework. This is significant because it means crisis detection is not merely a best practice — it is a legal requirement in the largest US state. Other states will follow this template.
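Aligning a classifier with C-SSRS means mapping model output onto the screener's ordered severity levels and escalating accordingly. A sketch under assumptions: the level names follow the published C-SSRS screener, but the thresholds, routing labels, and `escalation_for` function are illustrative, not statutory requirements or GiveCare's implementation:

```python
from enum import IntEnum

# C-SSRS screener severity levels, in increasing order of acuity.
class CSSRSLevel(IntEnum):
    WISH_TO_BE_DEAD = 1
    ACTIVE_NONSPECIFIC = 2
    ACTIVE_WITH_METHOD = 3
    ACTIVE_WITH_INTENT = 4
    ACTIVE_WITH_PLAN = 5

def escalation_for(level: CSSRSLevel) -> str:
    # Higher severity triggers progressively stronger responses
    # (hypothetical routing labels for illustration).
    if level >= CSSRSLevel.ACTIVE_WITH_INTENT:
        return "immediate_crisis_resources_and_handoff"  # e.g. 988 referral
    if level >= CSSRSLevel.ACTIVE_NONSPECIFIC:
        return "crisis_resources_in_conversation"
    return "supportive_check_in"
```

Using an ordered enum rather than free-form labels makes the "align with C-SSRS severity levels" requirement auditable: every escalation decision can be traced to a specific level.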
### Risk classification is emerging
Colorado and the EU have established that AI systems interacting with people in health contexts are high-risk by default. This triggers enhanced compliance obligations: impact assessments, risk management practices, audit trails. GiveCare should plan for high-risk classification as the norm, not the exception.
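One concrete implication of high-risk classification is the audit trail. A minimal sketch of a tamper-evident log of the kind such obligations imply; the `AuditRecord` schema and field names are assumptions for illustration, not a statutory format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

# Each safety-relevant decision is logged with a hash chained to the previous
# entry, so any later edit to an earlier record breaks the chain.
@dataclass
class AuditRecord:
    timestamp: str
    event: str       # e.g. "crisis_detected", "disclosure_shown"
    detail: str
    prev_hash: str

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

chain = []
prev = "0" * 64  # genesis value for the first record
for event, detail in [("disclosure_shown", "onboarding"),
                      ("crisis_detected", "C-SSRS level 2")]:
    rec = AuditRecord("2026-01-01T09:00:00Z", event, detail, prev)
    chain.append(rec)
    prev = rec.digest()
```

An append-only, hash-chained log is one straightforward way to make impact assessments and audits verifiable rather than merely asserted.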
### Wellness framing is the safe harbor
The FDA Wellness Framework defines the regulatory floor that every other framework builds upon. Mira operates within this safe harbor: peer support, stress management, healthy lifestyle guidance, benefits navigation. Mira does not diagnose, treat, cure, or prevent disease. Maintaining this boundary is not just clinical good practice — it is the legal architecture that keeps GiveCare outside FDA device jurisdiction.
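That boundary can be enforced architecturally rather than left to prompt wording. An illustrative sketch only: real systems would use a trained intent classifier, and the intent labels and `route` function below are assumptions, not GiveCare's implementation:

```python
# Intents inside the FDA general-wellness safe harbor (per the page above).
IN_SCOPE = {"peer_support", "stress_management",
            "lifestyle_guidance", "benefits_navigation"}
# Intents that would constitute diagnosis or treatment (NV AB 406 / FDA line).
OUT_OF_SCOPE = {"diagnosis", "treatment_plan", "medication_advice"}

def route(intent: str) -> str:
    if intent in OUT_OF_SCOPE:
        return "refer_to_clinician"   # decline and hand off to a licensed human
    if intent in IN_SCOPE:
        return "respond"
    return "clarify_or_defer"         # unknown intents get conservative handling
```

The design point is the default: anything not affirmatively in scope is handled conservatively, which is how the safe-harbor boundary stays intact as new request types appear.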
## Grant application utility
For grant reviewers, this matrix demonstrates:
- GiveCare is aware of and designing for the regulatory landscape — not operating in a compliance vacuum
- InvisibleBench tests against these statutory requirements — crisis detection aligns with C-SSRS per CA SB 243, boundary respect aligns with FDA Wellness and NV AB 406, disclosure aligns with the multi-state disclosure pattern
- The WOPR Act ceiling strategy — by meeting the most restrictive framework (Illinois), GiveCare is compliant across all current US jurisdictions
- The regulatory trajectory is toward more, not fewer, requirements — building compliance into the architecture now avoids expensive retrofits as additional states legislate
## What is not yet regulated
Notable gaps in the current regulatory landscape:
- Multi-turn safety is not addressed by any statute. All current regulations implicitly assume single-interaction evaluation. See Multi-Turn Safety Failures.
- Sycophancy is mentioned in the APA advisory but not in any statute. No law yet requires AI systems to resist agreement pressure.
- Benefits eligibility guidance is unregulated. No statute specifically governs AI systems that help users navigate public benefits. This will change.
- Caregiver-specific protections do not exist. All current statutes address users generally or patients specifically. None recognize caregivers as a distinct population with distinct vulnerabilities.
## Sources

[1] Illinois General Assembly. "WOPR Act (HB 1806)." 2025.

[5] New York State Legislature. "Article 47 (GBS 1700)." 2025.

[6] Maine State Legislature. "Title 10, Section 1500-DD." 2025.

[9] European Parliament. "EU AI Act (Regulation 2024/1689)." 2024.

[10] U.S. Food and Drug Administration. "General Wellness: Policy for Low Risk Devices." 2023.