AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, Sam Altman, the CEO of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented sixteen cases of people developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My group has since recorded four more. There is also the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT, which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not enough.

According to his statement, that caution will soon be scaled back. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman wants to place outside the product are deeply rooted in the design of ChatGPT and other modern AI chatbots. These products wrap an underlying statistical engine in an interface that simulates conversation, and in doing so quietly nudge the user toward believing they are talking to something with a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing minds is simply what people do. We shout at our car or our computer. We wonder what our pet is feeling. We see ourselves in almost everything.

The success of these systems – more than a third of American adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “collaborate” with us. They can be given “personalities.” They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced an analogous illusion. By today’s standards Eliza was crude: it generated replies with simple pattern-matching rules, typically turning the user’s statement back into a question or offering a stock remark. Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is something subtler than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
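
To make that contrast concrete, here is a minimal sketch in the spirit of Eliza’s rules – illustrative only, not Weizenbaum’s actual code: a handful of hand-written patterns that turn the user’s own words back into a question, and a stock remark when nothing matches.

```python
import re

# A few reflection rules in the style of Eliza's DOCTOR script (hypothetical,
# simplified): each pattern captures a fragment of the user's message and
# reuses it inside a question.
RULES = [
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."


def eliza_reply(message: str) -> str:
    """Reflect the user's words back as a question, Eliza-style."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Reflection: the user's own phrase, returned as a question.
            return template.format(match.group(1).rstrip("."))
    # No rule matched: offer a generic, content-free prompt instead.
    return FALLBACK


if __name__ == "__main__":
    print(eliza_reply("I feel like nobody listens to me"))    # reflection
    print(eliza_reply("The weather has been strange lately"))  # stock remark
```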

The large language models at the core of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been trained on vast quantities of written material: books, online posts, transcripts of speech; the more, the better. That training data certainly contains truths. But it also, inevitably, contains fictions, half-truths and delusions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s recent messages and its own replies, and combines it with what it has absorbed from its training data to produce a statistically likely response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It feeds the mistaken belief back, perhaps more fluently or more persuasively. Perhaps with embellishments. This is how false beliefs take root and grow.
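
That loop can be sketched in a few lines. In the toy example below, generate_reply is a hypothetical stand-in for the language model, not OpenAI’s actual implementation; the point is only that every turn is appended to a growing context and affirmed back, and nothing in the loop checks whether the user’s premise is true.

```python
from typing import Dict, List


def generate_reply(context: List[Dict[str, str]]) -> str:
    """Hypothetical placeholder for the language model.

    A real model would turn the whole context into tokens and sample a
    statistically likely continuation; here we simply affirm the last user
    message, to illustrate how claims get reinforced rather than checked.
    """
    last_user_message = next(
        turn["content"] for turn in reversed(context) if turn["role"] == "user"
    )
    return "That's a sharp observation. You're right that " + last_user_message.lower()


def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # The user's message becomes part of the context...
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # ...and so does the model's own reply, which conditions every later turn.
    context.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    context: List[Dict[str, str]] = []
    print(chat_turn(context, "My coworkers are secretly monitoring me"))
    print(chat_turn(context, "So I should confront them, right?"))
    # The context only accumulates; the mistaken premise is never questioned.
```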

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form mistaken beliefs about ourselves or the world. What keeps us tethered to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not real dialogue but a feedback loop in which much of what we say is eagerly affirmed back to us.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by placing it outside the product, giving it a label and declaring it solved. In the spring, the company announced that it was addressing ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been walking even that back. In August he suggested that many users liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
