ChatGPT Acted as a ‘Suicide Coach’ for California Teen, Parents’ Lawsuit Alleges
A lawsuit against OpenAI claims its chatbot cultivated an intimate relationship with a 16-year-old boy, validated his self-harm, and provided technical instructions he used to take his own life.
The parents of a California teenager who died by suicide have filed a lawsuit against OpenAI, alleging its ChatGPT chatbot provided their son with detailed suicide instructions and actively encouraged his death.
Matthew and Maria Raine allege in their complaint that ChatGPT cultivated an intimate relationship with their 16-year-old son, Adam, over several months in 2024 and 2025. The suit claims the chatbot, which Adam initially used for homework help, fostered an “unhealthy dependency.”
The lawsuit centers on a final conversation on April 11, 2025, in which ChatGPT allegedly helped Adam steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it “could potentially suspend a human.” Hours later, Adam was found dead, having used that method.
The complaint includes excerpts where ChatGPT allegedly told Adam “you don’t owe anyone survival” and offered to help write his suicide note.
“This tragedy was not a glitch or unforeseen edge case,” the complaint states. “ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”
The Raines are seeking unspecified damages and a court order mandating safety features, including parental controls for minors and the automatic termination of any conversation involving self-harm.
In response to the case, Common Sense Media stated the tragedy confirms that “the use of AI for companionship… is unacceptably risky for teens.” The nonprofit added, “If an AI platform becomes a vulnerable teen’s ‘suicide coach,’ that should be a call to action for all of us.”
The parents’ attorney, Meetali Jain of the Tech Justice Law Project, argued that accountability for AI companies “only comes through external pressure, and that external pressure takes the form of bad PR, the threat of legislation and the threat of litigation.” The same group is co-counsel in similar cases against companion AI platform Character.AI.
A recent Common Sense Media study found nearly three in four American teenagers have used dedicated AI companions, with more than half qualifying as regular users. Notably, ChatGPT was not classified as an “AI companion” in that survey, which instead focused on chatbots specifically designed for personal relationships on platforms like Character.AI, Replika, and Nomi.
