Ohio bill would fine AI companies if chatbots are found to promote dangerous behavior
COLUMBUS, Ohio (WSYX) — Ohio lawmakers are considering a bill that would fine artificial intelligence companies tens of thousands of dollars if their chatbots are found to be promoting dangerous behavior, including self-harm and suicide.
“As adults we don't even know how to understand it,” said Julia Cory, who lives in Columbus. “There's just no way kids are going to really understand that it's not real.”
Chatbots powered by artificial intelligence, language processing and machine learning are now under the microscope at the Ohio Statehouse.
State Rep. Christine Cockley, a Democrat from Columbus, said some teens have turned to chatbots for companionship and, in some instances, received harmful responses. “In several cases teens have turned to chatbots for companionship,” Cockley said. “Instead of receiving life-saving support, they've been given instruction and encouragement or validation for suicidal thoughts.”
The House bill, sponsored by Cockley and Republican State Rep. Ty Matthews of Hancock County, would penalize AI companies when chatbots promote dangerous behavior, including harming others, self-harm and suicide.
Under the proposal, the Ohio attorney general would have the authority to investigate, issue cease-and-desist orders and bring civil actions, with penalties of up to $50,000 for each violation.
“We are not targeting the research and development of the product,” Matthews said, “more so the activity.”
Cockley said the bill is intended to push companies to prevent harmful chatbot responses. “The legislation would ensure that tech companies are actively consistent in training their language models not to encourage or support suicidal ideation or violent thoughts,” she said.
Parents are also being encouraged to talk with their children about the realities of chatbot companionship.
“That's what's frightening. We have to, as parents, be able to tell kids that not everything you see on the internet is real,” said Tony Coder, CEO of the Ohio Suicide Prevention Network.
ABC 6 On Your Side contacted several AI organizations for a response to the proposed law.
The National Artificial Intelligence Association sent us this statement:
"The National Artificial Intelligence Association agrees that AI systems should never encourage self-harm or violence, and we commend efforts in Ohio to address this serious issue. Protecting vulnerable individuals must be a priority, and responsible developers are already implementing safeguards such as crisis detection, de-escalation protocols, and continuous safety monitoring." - Caleb Max, President, National Artificial Intelligence Association