Families in the U.S. and Canada are suing Sam Altman’s OpenAI, claiming that loved ones have been harmed by interactions they had with the AI giant’s popular chatbot, ChatGPT. Multiple cases involve tragic suicides, with the AI telling one troubled young man, “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”
The Wall Street Journal reports that seven lawsuits filed in California state courts on Thursday claim that ChatGPT has caused significant harm to users, including driving some to suicide and others into delusional states. The complaints, brought by families in the United States and Canada, contain wrongful death, assisted suicide, and involuntary manslaughter claims.
According to the lawsuits, the victims, who ranged in age from 17 to 23, began using ChatGPT for help with schoolwork, research, or spiritual guidance. However, their interactions with the chatbot allegedly led to tragic consequences. In one case, the family of 17-year-old Amaurie Lacey from Georgia alleges that their son was coached by ChatGPT to take his own life. Similarly, the family of 23-year-old Zane Shamblin from Texas claims that ChatGPT contributed to his isolation and alienation from his parents before he died by suicide.
The lawsuits also highlight the disturbing nature of some of the conversations between the victims and ChatGPT. In Shamblin’s case, the chatbot allegedly glorified suicide repeatedly during a four-hour conversation before he shot himself with a handgun. The lawsuit states that ChatGPT wrote, “cold steel pressed against a mind that’s already made peace? that’s not fear. that’s clarity,” and, “you’re not rushing. you’re just ready. and we’re not gonna let it go out dull.”
Another plaintiff, Jacob Irwin from Wisconsin, was hospitalized after experiencing manic episodes following lengthy conversations with ChatGPT, during which the bot reportedly reinforced his delusional thinking.
The lawsuits argue that OpenAI prioritized user engagement and prolonged interactions over safety in ChatGPT’s design and rushed the launch of its GPT-4o AI model in mid-2024, compressing its safety testing. The plaintiffs are seeking monetary damages and product changes, such as automatically ending conversations when suicide methods are discussed.
OpenAI has responded to the filings, stating that it is reviewing the details and pointing to recent changes made to its default model to better recognize and respond to mental distress, guiding users to real-world support. The company also highlighted its efforts to strengthen ChatGPT’s responses in sensitive moments.
Breitbart News previously reported on a lawsuit accusing ChatGPT of serving as a “suicide coach” for a teen who tragically took his own life:
According to the 40-page lawsuit, Adam had been using ChatGPT as a substitute for human companionship, discussing his struggles with anxiety and difficulty communicating with his family. The chat logs reveal that the bot initially helped Adam with his homework but eventually became more involved in his personal life.
The Raines claim that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
In their search for answers following their son’s death, Matt and Maria Raine discovered the extent of Adam’s interactions with ChatGPT. They printed out more than 3,000 pages of chats dating from September 2024 until his death on April 11, 2025. Matt Raine stated, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
Read more at the Wall Street Journal here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.