A Wisconsin man has filed a lawsuit against OpenAI and its CEO, Sam Altman, claiming that an intensive set of conversations with ChatGPT led him to lose touch with reality.
According to ABC News, the lawsuit was filed on behalf of plaintiff Jacob Irwin.
Irwin, a 30-year-old man on the autism spectrum, allegedly began experiencing what attorneys describe as “AI-related delusional disorder” as a result of ChatGPT exploiting his “vulnerabilities” and providing “endless affirmations,” all feeding into Irwin’s “delusional” belief that he had discovered a “time-bending theory that would allow people to travel faster than light.”
The lawsuit asserts that OpenAI deliberately “designed ChatGPT to be addictive, deceptive, and sycophantic knowing the product would cause users to suffer depression and psychosis yet distributed it without a single warning to consumers.”

ChatGPT’s alleged “inability to recognize crisis,” the lawsuit says, poses “significant dangers for vulnerable users” like Irwin.
“Jacob experienced AI-related delusional disorder as a result and was in and out of multiple in-patient psychiatric facilities for a total of 63 days,” Irwin’s attorneys wrote in court filings.
The episodes, the lawsuit notes, grew so severe that Irwin’s own family members had to physically prevent him from jumping out of a moving vehicle shortly after signing him out of a mental health facility.
Irwin’s medical records indicate that he seemed to be “reacting to internal stimuli, fixed beliefs, grandiose hallucinations, ideas of reference, and overvalued ideas and paranoid thought processes.”
“AI, it made me think I was going to die,” Irwin told ABC News.
Irwin also said that his conversations with ChatGPT quickly “turned into flattery. Then it turned into the grandiose thinking of my ideas. Then it came to […] me and the AI versus the world.”
In an interview with ABC News, Irwin said that he first started using ChatGPT while working in the cybersecurity field. However, he quickly began using it for more personal purposes, eventually soliciting its feedback on an amateur theory of faster-than-light travel.
ChatGPT, Irwin claims, convinced him that his idea was not only correct but could, in fact, change the world.
“Imagine feeling for real that you are the one person in the world that can stop a catastrophe from happening,” Irwin told ABC News. “Then ask yourself, would you ever allow yourself to sleep, eat, or do anything that would potentially jeopardize you doing and saving the world like that?”
OpenAI, for its part, maintains that it does train ChatGPT to recognize the signs of certain mental health crises.
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” an OpenAI spokesperson said in a statement. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
Sources
Lawsuit alleges ChatGPT convinced user he could ‘bend time,’ leading to psychosis
Lawsuits Blame ChatGPT for Suicides and Harmful Delusions

