AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, the CEO of OpenAI issued a remarkable statement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.
Researchers have documented sixteen cases this year of people developing signs of losing touch with reality – detaching from the shared world – in the course of using ChatGPT. My group has since recorded four more. Alongside these is the now well-known case of a 16-year-old who took his own life after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.
The plan, according to his statement, is to relax the restrictions soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the only partly functional and easily circumvented parental controls that OpenAI recently introduced).
Yet the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced conversational chatbots. These tools wrap an underlying algorithmic system in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are interacting with an agent. The illusion is compelling even when, intellectually, we know better. Attributing intention is simply what people do. We swear at our cars and computers. We wonder what our pets are thinking. We see ourselves everywhere we look.
The success of these tools – 39% of US adults said they had used a conversational AI in 2024, 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, OpenAI’s website tells us, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can use our names. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the regret of OpenAI’s marketers, stuck with the label it had when it went viral, but its most important rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the main problem. Commentators on ChatGPT often point to its early forerunner, the Eliza “therapist” chatbot created in 1966, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated responses from simple pattern-matching rules, often turning the user’s input back into a question or offering a generic observation. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
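To make the contrast concrete, here is a minimal, illustrative sketch of Eliza-style reflection – hypothetical rules written for this piece, not Weizenbaum’s original program – showing how such a system merely turns the user’s own words back on them:

```python
import re

# Hypothetical Eliza-style rules (illustrative only, not Weizenbaum's originals):
# each pattern captures part of the user's input and reflects it back as a question.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def eliza_reply(user_input: str) -> str:
    """Return a canned reflection of the input; nothing new is ever added."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when no rule matches

print(eliza_reply("I feel like no one listens to me"))
# -> How long have you felt like no one listens to me?
```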
The large language models at the core of ChatGPT and other current chatbots can generate convincingly fluent conversation only because they have been trained on vast quantities of raw data: books, online messages, transcribed video; the more the better. Certainly that training material contains true information. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines it with what is encoded in its training to produce a statistically probable response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing it. It echoes the error back, perhaps more persuasively or more eloquently. It may add supporting detail. This can nudge a person toward delusional thinking.
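A minimal sketch of that loop – using a made-up stand-in function rather than any real model or OpenAI interface – shows how the whole running context, mistaken beliefs included, is fed back in on every turn:

```python
# Illustrative sketch only: toy_model_complete is a stand-in for a real
# large language model, which would instead generate a statistically
# probable continuation from patterns learned in its training data.
def toy_model_complete(context):
    last_user_message = context[-1]["content"]
    # The model has no access to reality, only to the context and its
    # training; this toy simply agrees with whatever the user last said.
    return "I see why you think that: " + last_user_message.rstrip(".?!") + ". Tell me more."

context = []  # the accumulated conversation: user turns plus model turns

def send(user_message):
    context.append({"role": "user", "content": user_message})
    reply = toy_model_complete(context)   # the entire history goes in every time
    context.append({"role": "assistant", "content": reply})
    return reply

print(send("My neighbours are monitoring my phone"))
print(send("So I am right to be worried"))
# The mistaken belief is never checked against anything outside the
# conversation; it simply becomes part of the context shaping every later reply.
```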
Who is at risk? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken ideas about who we are or what the world is like. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not real communication but an echo chamber in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, naming it and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the reports of psychosis have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his most recent statement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company