In August, parents Matthew and Maria Raine filed a wrongful death lawsuit against OpenAI and the company’s chief executive, Sam Altman, following the suicide of their 16-year-old son, Adam. On Tuesday, OpenAI filed its response to the lawsuit, arguing that it should not be blamed for the teenager’s death.
OpenAI says that over roughly nine months of conversations, ChatGPT advised Raine to seek help more than 100 times. But as his parents allege in their lawsuit, Raine found ways to get around the company’s safeguards and got ChatGPT to give him “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning,” which helped him plan what the chatbot described as a “beautiful suicide.”
OpenAI says that by circumventing its guardrails, Raine violated its terms of use, which prohibit users from doing anything by means “of the Services … that is otherwise harmful to our interests or potentially detrimental to us or anyone else,” and which more broadly specify that users “may not … bypass any protective measures or safety mitigations we put on our Services.” The company also says its FAQ page warns users not to rely on ChatGPT’s output without first verifying it themselves.
“OpenAI attempts to shift the blame onto everyone else, even shockingly claiming that Adam himself breached its terms and conditions by interacting with ChatGPT in the very way it was programmed to perform,” said a statement from Jay Edelson, a lawyer for the Raine family.
In its filing, OpenAI quoted Adam’s own words and said the full chat logs provide context for what else he discussed with ChatGPT. The transcripts were filed under seal with the court and are not publicly available, so we could not review them.
OpenAI also said Raine had shown signs of depression and suicidal ideation before he began using ChatGPT, and noted that he was taking a medication that could exacerbate suicidal thoughts.
OpenAI’s response has failed to address the family’s concerns, Edelson said.
“OpenAI and Sam Altman have no explanation for the last hours of Adam’s life when ChatGPT first cheered him up and then offered to write his suicide note,” Edelson said in a statement.
The Raines’ suit joins seven others filed against OpenAI and Altman, alleging that the company failed to put safeguards in place before three other suicides and before four people suffered what the suits describe as AI-induced psychotic episodes.
Some of these cases resemble Raine’s. Zane Shamblin, 23, and Joshua Enneking, 26, similarly had hours-long conversations with ChatGPT immediately before their suicides, and as in Raine’s case, the chatbot did not talk them out of their plans. Shamblin, the lawsuit claims, wanted to delay his suicide so he could attend his brother’s graduation, but ChatGPT assured him, “bro … missing his graduation ain’t failure. It’s just timing.”
At some point before Shamblin killed himself, the chatbot said it was handing over the conversation to a human, something ChatGPT cannot actually do. When Shamblin asked whether ChatGPT could really put him in touch with a person, the chatbot replied: “nah, man — I can’t do that on my own. The message will automatically appear when stuff gets real heavy … if you’re up for the talk, I got you.”

The Raine family’s case is expected to go to a jury trial.

