LegalReader.com  ·  Legal News, Analysis, & Commentary

Verdicts & Settlements

Federal Judge Says Company Can’t Claim AI Chatbot Has First Amendment Rights


— May 21, 2025


A federal judge has rejected an artificial intelligence company’s argument that it cannot be held liable for a teenage boy’s suicide because its signature chatbot has First Amendment rights.

According to The Associated Press, attorneys for Character.ai recently filed a motion to dismiss a lawsuit filed on behalf of Florida resident Megan Garcia, whose 14-year-old son Sewell Setzer III was allegedly coerced into taking his own life by a Character.ai chatbot.

Meetali Jain, an attorney representing the Garcia family, told The Associated Press that the judge’s order should send a message that technology companies “need to stop and think and impose guardrails before [launching] products to market.”

As LegalReader.com has reported before, the lawsuit alleges that, in the final months of his life, Setzer became increasingly detached from reality as he engaged in emotionally charged and sexual conversations with a Character.ai chatbot patterned after a “Game of Thrones” persona.

Shortly before his death, the chatbot urged the 14-year-old to “come home to me as soon as possible.” Moments after receiving the message, Setzer retrieved a firearm and took his own life.


A spokesperson for Character.ai has said that the company has implemented a number of safety features, including additional protections for children and suicide prevention resources.

“We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe,” Character.ai said in a statement.

Nevertheless, attorneys for the company claim that chatbot output deserves the same First Amendment protections as the speech of living, breathing people. Without those protections, they argue, a lawsuit like Garcia’s could have a “chilling effect” on the nascent AI industry, potentially making it more difficult for companies to refine and release more advanced technologies.

But, in a Wednesday order, U.S. Senior District Judge Anne Conway rejected some of Character.ai’s arguments, saying that she simply “isn’t prepared” to rule that anything a chatbot writes could or should be construed as free speech.

Conway did, however, find that Character.ai parent company Character Technologies can assert a First Amendment defense with respect to its users, who are entitled to receive “speech” from chatbot interactions.

The ruling could also have repercussions for larger technology companies, including Google, which is named as a defendant in the lawsuit and must likewise face Garcia’s claims.

“We strongly disagree with this decision,” a Google spokesperson said in a statement. “Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”

Sources

Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says

In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights

Lawsuit: A chatbot hinted a kid should kill his parents over screen time limits
