Lawmakers advance new limits on mental health chatbots
Several more states are poised to consider bans or restrictions on mental health chatbots next year, after Illinois,
Nevada and Utah became the first to regulate their use.
Bills have already been introduced in Florida, Massachusetts, New Hampshire, New York, Ohio and Pennsylvania. More are
likely to follow elsewhere, including in Virginia, where Del. Michelle Maldonado is preparing legislation.
The legislative push comes as mental health care emerges as a popular use case for artificial intelligence applications. The
trend has raised concerns about bots presenting themselves as licensed therapists and what that means for patients.
Maldonado said her bill, modeled after the Illinois law, would ensure that only “the people who are licensed, skilled and
have the ability, capacity and training” can provide therapy in Virginia, not machines. A second bill would prohibit
therapy bots from getting licensed in the state.
“We have to get it right because the consequences are so great,” Maldonado said, pointing to recent suicides by
teenagers who engaged with nontherapeutic chatbots.
Illinois and Nevada this year enacted the nation’s first bans on therapy bots. Utah legislators took a lighter touch in
passing a law that permits chatbots to provide therapeutic services with guardrails, such as a requirement that users be
told they are interacting with a non-human.
A new California law expands a ban on impersonating a health care professional to include AI technology. AI mental
health bills were also introduced this year in New Jersey, North Carolina and Texas.
“Mental health and AI is emerging as one of the most active (and bipartisan!) areas in the AI policy landscape,” Justine
Gluck, a policy analyst at the Future of Privacy Forum, said in an email.
Mental health chatbot bills are part of a broader effort this year to regulate how AI is deployed in health care, one that has
spawned more than 250 bills in 46 states. Key areas of focus are transparency, ensuring AI systems don’t discriminate,
setting rules for how insurance companies deploy AI, and regulating how AI is used in the clinical context.
Advocates of using AI for mental health services say it can help fill gaps in the provider workforce. A recent clinical trial involving 106
patients who used a smartphone app called Therabot produced promising results.
But critics worry that bots operating as therapists will do more harm than good and aren’t up to the task of nuanced
psychological care.
Stanford researchers warned this year that AI therapy chatbots pose “significant risks.” A Brown University study
released in October found that general-purpose chatbots prompted to act as AI therapists routinely violated ethical best
practices.
The American Psychological Association has highlighted the danger of unregulated bots while also adopting the view
that AI may have a role to play in delivering therapeutic services.
“AI should be seen as a tool to augment, not replace, the clinical judgment and therapeutic relationship that are the
bedrock of quality health care,” Vaile Wright, APA’s senior director for health care innovation, told a congressional panel
in September.
State lawmakers are mostly taking a wary view of therapy bots, favoring bans over guardrails.
The bills in Massachusetts, New Hampshire, New York and Pennsylvania largely mirror Illinois’s law, which requires that
therapy be conducted by a licensed professional and bars companies from advertising or offering standalone AI-based
therapy.
Therapists are still permitted to use AI to assist with certain tasks, such as managing appointments, processing billing and,
with patient consent, taking notes.
The Florida and Ohio measures stop short of an outright ban on therapy bots. Instead, they spell out which uses of AI by therapists are allowed and which are not.
The Florida bill, from Democratic Rep. Christine Hunschofsky, would prohibit licensed professionals from using AI in the
practice of counseling, psychology, social work, or marriage and family therapy.
Practitioners could use AI to assist with administrative tasks such as appointment reminders and managing patient
records. They could also use an AI tool to record and transcribe therapy sessions if the patient gave informed, written
consent.
The Ohio measure, which is sponsored by Reps. Christine Cockley, a Democrat, and Ty Mathews, a Republican, similarly
would allow AI to do administrative but not therapeutic tasks. A clinician could use AI to generate treatment plans but
would have to review and approve them.
“As AI continues to expand into the health-related fields, we must protect the human connection at the heart of therapy
— that is why I introduced HB 525,” Cockley said in a statement to Pluribus News.
“I understand the struggles of depression and the need for people to have resources at their fingertips,” Cockley said.
“Even then, we must maintain the standard that humans are essential for anyone seeking care and support, especially in
a clinical setting.”
Polling shows the public is also skeptical of AI therapy. Just 11% of respondents to a recent YouGov survey said they were
open to using AI for mental health support.
AI evangelists say states are closing the door to a promising use case.
Taylor Barkley, director of public policy at the Abundance Institute, writing on Substack in August, said Illinois’s law
amounts to a “near lockout” that prevents “incremental, responsible deployments that could build public trust.”
By contrast, Barkley praised Utah’s “flexible approach” as one that “leaves room for experimentation, innovation, and
the possibility that AI could actually improve access to mental health support — something in critically short supply.”