How states are policing AI in health care
While President Trump demands a single national framework on AI policy, states like Ohio are moving ahead with their own guardrails for how the technology is used in health care.
Why it matters: That could set up a clash over who determines how AI models and systems can be deployed in insurer reviews, mental health treatment and chatbots that interact with patients.
By the numbers: More than 250 AI bills affecting health care were introduced in 47 states as of mid-October, according to a tracker from Manatt, Phelps & Phillips. Thirty-three of those bills in 21 states became law.
A half-dozen states have enacted laws focused on the use of AI-enabled chatbots, including Illinois' new law banning apps or services from providing mental health therapy or making therapeutic decisions.
The intrigue: "There is a lot of bipartisan alignment on the topic. Red states are mirroring provisions of laws introduced in blue states and vice versa," said Randi Seigel, a partner at Manatt.
Zoom in: In Ohio, Senate Bill 164 (and House companion HB 579), sponsored by Sen. Al Cutrona (R-Canfield), would prohibit health insurers from basing coverage decisions "solely" on the use of AI.
And it would require insurers to submit an annual report to the state about whether and how they use AI algorithms.
House Bill 524, sponsored by Reps. Christine Cockley (D-Columbus) and Ty Mathews (R-Findlay), would impose penalties for developing or deploying AI models that "encourage" self-harm or harm to others.
The bipartisan bill was motivated by the case of a 16-year-old California boy who died by suicide and whose parents are suing OpenAI.
House Bill 525, also sponsored by Cockley and Mathews, would prohibit licensed therapists from using AI to make therapeutic decisions or directly interact with clients in "therapeutic communication."
Friction point: State efforts could bump up against Trump's push to establish a federal framework for AI and preempt state laws.
- Trump signed an executive order last week that requires the attorney general to establish a task force to challenge burdensome state AI regulations.
- It also draws Congress into the fight by calling for a legislative recommendation for a federal AI framework.
Yes, but: Beyond the question of who has jurisdiction, future standard-setting could be complicated by the breadth of tasks AI can perform and by criteria such as whether an algorithm is involved in making a "consequential decision."
"There are state-by-state restrictions that can be limiting," Rajaie Batniji, CEO of Medicaid health tech company Waymark Care, said at a recent Axios event, noting that some differentiate between "machine learning" and "artificial intelligence."