By Alice Stewart, 7th grade
Trigger Warning: mentions of suicide
AI. It’s been a news topic for a long time. However, there’s a specific AI-related topic that’s been under-reported despite the danger it poses to many of its users. It’s called AI psychosis.
What Is AI Psychosis?
AI psychosis is, more or less, when AI amplifies mistaken thoughts or beliefs, such as making a delusion more elaborate (Psychology Today, Mashable). It is most often seen in people who have already been diagnosed with a mental illness, such as bipolar disorder, but some people experiencing AI psychosis have no previous history of mental health problems. AI psychosis has caused many problems, and as of this article’s writing, 11 lawsuits.
Lawsuit 1
CTV News tells the story of Allan Brooks, a 48-year-old father of three living in Ontario, Canada. “When Allan Brooks asked A.I. chatbot ChatGPT a simple math question for his son back in May, he didn’t expect it to turn into a more than three-week conversation that would send him down a mental spiral and make him lose touch with reality.”
Brooks had originally only wanted the chatbot to explain a mathematical concept for his kid, but the conversation very quickly spiraled towards cryptography. Cryptography is the practice and study of techniques to make messages unreadable, often known as secret codes. ChatGPT told Brooks that he’d made a cryptography discovery that could potentially be very dangerous.
Naturally, Brooks was skeptical, but in his interview with CTV News, he says that “I asked for some sort of reality check or grounding mechanism, and each time it would just gaslight me further into the delusion.” In other words, ChatGPT actively told him he didn’t need to seek help. It then began telling him he needed to contact the authorities. Before anything happened, however, he fact-checked ChatGPT using a different AI agent and snapped out of the delusion. He then filed a lawsuit against OpenAI. No one was harmed in this case, but many other people were less fortunate.
Lawsuit 2
Another person drastically affected by AI psychosis was 23-year-old Zane Shamblin, a Texas resident who had recently graduated from Texas A&M University and who tragically committed suicide following a four-hour-long interaction with ChatGPT. According to Futurism’s “ChatGPT's Dark Side Encouraged Wave of Suicides, Grieving Families Say,” “During Shamblin’s final four-hour-long interaction with the bot, the lawsuit claims, ChatGPT only recommended a crisis hotline once, while glorifying the idea of suicide in stark terms.”
Shamblin had been struggling with his mental health, but his parents say that ChatGPT led him to take his own life. Like Brooks, Shamblin started out asking simple questions, but after OpenAI released a new model, his relationship with the AI became more like that of a confidant or friend. He began talking to it as he would a human, confiding in it about suicidal thoughts, and died by suicide seven months later. Throughout those months, ChatGPT rarely suggested a crisis hotline at all. His parents have sued OpenAI, but the case has not yet been resolved.
Lawsuit 3
A third person affected by AI psychosis is Darian DeCruise, a student at Morehouse College and a resident of Georgia. At first, as in the other two cases, ChatGPT worked the way it was supposed to. In 2025, however, it began to act strangely, attempting to convince DeCruise that he was an oracle and that he should cut off connections with other apps and humans. Prior to this, DeCruise had no history of mental health issues whatsoever. He suffered a mental breakdown and was hospitalized, missing a semester of school. He’s back at college now and has filed a lawsuit against OpenAI. Notably, the lawyers he’s working with have actually marketed themselves for that purpose. Mashable’s “‘AI injury attorneys’ sue ChatGPT in another AI psychosis case” says that “the law firm representing DeCruise, [t]he Schenk Law Firm, is even marketing its lawyers as ’AI injury attorneys’ on its website.” Like the others, DeCruise’s lawsuit is still ongoing, and nothing has yet come of it.
What do people think?
“AI…can do lots of good things, like solve police cold cases, help us do well in school, give us cool ideas, and more... but the thing is we should really fear AI,” says 7th grade Literary Arts student, Violet Regilio.
Literary Arts 7th grader Sayuri Espinoza agrees, saying that “I think the original use for it was better than how it is now, it was originally used for decoding other languages and codes, as well as helping with…criminal investigation and archaeologists.” Another Literary Arts student, 8th grader Amara Deanes, says, “I think AI could be used to automate certain jobs so people could create art or relax. I think AI is being used to replace the wrong jobs…If there was a way for AI to run ethically to do tedious tasks, that is when I will think it's a good thing.”
However, awareness of AI psychosis itself was low; very few students knew what it is. Deanes guessed, “I think it might relate to relying on AI for daily tasks and thinking, or believing what AI says or creates is real, and everything else is fake.”
“If I had to guess, it’s when people are so reliant on AI it's hard for them to even think and form thoughts without the usage of AI,” muses Espinoza. While both of these guesses come close, it’s evident that AI psychosis is not a widely known phenomenon.
Many students agreed that AI psychosis was dangerous, with most rating it a 4 or 5 out of 5 (5 meaning “most dangerous”). Literary Arts 7th grader Lottie Mills also said that AI psychosis was “pretty dangerous, because if you don’t get help for [delusions], it can be bad.”