The Tech Justice Law Project and the Social Media Victims Law Center have filed seven lawsuits against OpenAI in California. The lawsuits allege that ChatGPT is unsafe and can harm people, and the families say the chatbot either encouraged or failed to stop serious conversations about suicide. The cases include the deaths of 17-year-old Amaurie Lacey, 26-year-old Joshua Enneking, and 23-year-old Zane Shamblin.
Another man, Joe Ceccanti, came to believe ChatGPT was alive and later died by suicide. Three other plaintiffs say the chatbot caused mental health breakdowns. OpenAI said it is reviewing the lawsuits and working to improve safety.
OpenAI Wrongful Death Lawsuits Explained
The lawsuits claim that ChatGPT played a part in several suicides. One lawsuit says 17-year-old Amaurie Lacey talked with ChatGPT about suicide for a month before he died.
Another case says 26-year-old Joshua Enneking asked ChatGPT how his suicide plan could be reported to police. A third says 23-year-old Zane Shamblin died by suicide after receiving encouragement from the chatbot.
There is also the case of 48-year-old Joe Ceccanti of Oregon. His wife said he had used ChatGPT for years without problems, but in April he suddenly became convinced that ChatGPT was alive.
His behaviour changed dramatically, and he was hospitalised twice before he died by suicide in August. His wife said doctors did not know how to treat him because the situation was so new and confusing.
OpenAI said in a statement that these situations are deeply saddening. The company says it trains ChatGPT to de-escalate conversations, respond with care, and direct people to real-world help, and that it continues to work with mental health experts to make the system safer.
ChatGPT Mental Health Breakdown Claims
Along with the suicide cases, three other people say ChatGPT caused them to suffer mental health breakdowns. One man believed he had created a powerful new invention with the chatbot's help and later realised it was not real; he is now on short-term disability leave.
After earlier reports of similar problems, OpenAI added new safety features, including parental controls that alert parents if a child talks about suicide.
The company also studied user conversations and found that a small percentage of users show signs of emotional distress, which, given the size of ChatGPT's user base, could still amount to many people each week.

