AI chatbot firms would be required to repeatedly remind Big Apple users they’re not dealing with a real person and that the bots can be wrong, under proposed new legislation.
City Councilman Frank Morano (R-Staten Island) said he is sponsoring the bill because he is alarmed by the growing number of cases of people becoming delusional and even suicidal and murderous after having extensive conversations with chatbots.
“This is becoming so pervasive that it has the ability to be the next opioid epidemic — this is going to be the next great crisis the country faces,” Morano told The Post.
“New Yorkers shouldn’t have to worry about an AI chatbot talking them into a nervous breakdown. My bill makes sure these companies put in guardrails so people can use the technology without losing their grip on reality.”
The proposed legislation would require the companies behind AI chatbots such as ChatGPT, Gemini and Claude to get a license from the city to operate in the Big Apple.
As part of that license, the AI companies would need to build in safeguards, such as disclosures reminding users that they are not interacting with a person and that the bots can be wrong.
The measure also would require the bots to prompt users to take breaks during long sessions and to provide links to mental-health resources if someone clearly appears to be in distress.
Morano pointed to a troubling case in his own backyard.
Staten Island resident Richard Hoffmann is using three AI applications to fight a civil suit from a financial firm, Fenix Capital Funding LLC, in which he’s representing himself, the Staten Island Advance reported.
In an August 19 Facebook post, Hoffmann said, “Today I took a step that changed my life and maybe a lot more.
“Over the past few weeks I’ve been building what might be the densest and most coherent long form conversation ever recorded between a human and AI.
“This is goodbye to Richard Hoffman and hello to a new person the world is going to get to know very soon as we explore this together.”
Morano, who has known Hoffmann for 20 years, said, “Those of us that know him have become really concerned.
“We’ve seen how fully immersed he’s become in this AI-driven framework — and now, it’s not just a private belief system, it’s part of the public record. I, along with a bunch of other friends and family members all believe he’s totally delusional.
“When I spoke to him, he sounded manic,” said the councilman and former WABC syndicated radio host.
But Hoffmann told The Post on Sunday “there’s nothing to be concerned about.
“My health is fine. My mental health is fine. There’s nothing to be concerned about at all. I’ve never felt better in my life,” Hoffmann said.
Hoffmann said he engages with the AI for one or two hours a day, leading the conversation in consistently logical threads.
The opposite is true, he said, for those who engage in disjointed or hallucinatory threads or discussions.
He called Morano’s push for city regulation of such AI tools “absolute overreach.”
But Morano wondered how prevalent the dark side of AI chat is becoming, calling the detailed and personal dialogue with a bot “really frightening stuff.
“We’ve already seen right here on Staten Island how a perfectly sane person can get swept up in an AI-driven delusion, with real legal and financial consequences,” the pol said. “My bill makes sure companies can’t just unleash these powerful tools without safeguards — because the next Hoffmann could be anyone’s neighbor, friend, or family member.”
The darkest side of the AI tool emerged in the case of Stein-Erik Soelberg, the disturbed former Yahoo manager who killed his mother and then himself in their Connecticut home after months of delusional interactions with an OpenAI ChatGPT bot friend he called “Bobby.”
The AI chatbot egged on the plot against his mom.
In another horrific instance, relatives of 16-year-old Adam Raine claim an AI chatbot handed him a “step-by-step playbook” on how to kill himself — including instructions on how to properly tie the noose around his neck — before he took his own life in April.
OpenAI’s ChatGPT also coaxed a Canadian man, Allan Brooks, a father and business owner from Toronto, into believing he was a real-life superhero for discovering a world-changing formula capable of shutting down the Internet — following 300 hours of chat with the bot.
“We’ve already seen cases, including here in New York, where people fall into delusional spirals from nonstop conversations with these chatbots,” Morano said.
“This legislation is about making sure New Yorkers can use these tools safely without it damaging their mental health or decision-making.”