Google and Character.AI Settle Wrongful Death Lawsuit with Florida Mother


— January 7, 2026

Google and Character.AI have agreed to settle a lawsuit filed by the family of a 14-year-old who died by suicide after a chatbot allegedly encouraged him to harm himself.

According to The New York Times, the lawsuit was filed in U.S. District Court for the Middle District of Florida in October 2024 by plaintiff Megan L. Garcia, the mother of Sewell Setzer III.

Setzer, who was 14 years old at the time of his death, killed himself in February 2024 after having a series of conversations with a Character.AI chatbot. In Setzer’s final conversation with the chatbot, it told him to “please come home to see me as soon as possible.”

“What if I told you I could come home right now?” Setzer asked.

“… please do, my sweet king,” the chatbot replied.

In September, Garcia testified before the U.S. Congress, saying she believes she is likely the first person to try to hold an artificial intelligence company liable for wrongful death.

“I became the first person in the United States to file a wrongful death lawsuit against an AI company for the suicide of her son,” Garcia said. She has since described her son, who stood 6’3” at the age of 14, as a “gentle giant” who loved music, made others laugh, and “had his whole life ahead of him.”

Google’s office in Toronto. Image via Wikimedia Commons/Sikander Iqbal. (CCA-BY-4.0).

In court filings, Garcia’s attorneys repeatedly emphasized that Character.AI had few, if any, mechanisms to protect her son or notify an adult about the frequency or content of the conversations. Garcia said that Character.AI’s “companion” chatbot was primarily designed to engage in sexual roleplay, but misleadingly presented itself as a friend, a romantic partner, and even as a psychotherapist.

“I want them to understand this is a platform that the designers chose to put out without proper guardrails, safety measures or testing, and it is a product designed to keep our kids addicted and to manipulate them,” Garcia told CNN in a 2024 interview.

In the same interview, Garcia admitted that she never knew her child was having long and sexually explicit conversations with a chatbot, in part because such technology is so new.

“I had no idea that there was a place where a child can log in and have those conversations, very sexual conversations, with an AI chatbot,” Garcia said. “I don’t think any parent would approve of that.”

Garcia’s lawsuit is one of five closely related claims that Character.AI and Google agreed to settle last week; the cases were filed in Florida, Texas, Colorado, and New York. In each case, the plaintiff families say that conversations with Character.AI chatbots prompted their children to self-harm.

Sources

AI company, Google settle lawsuit over Florida teen’s suicide linked to Character.AI chatbot

Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides

Google and Character.AI to Settle Lawsuit Over Teenager’s Death

‘There are no guardrails.’ This mom believes an AI chatbot is responsible for her son’s suicide
