AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies new-onset psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have recently documented 16 cases of users developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since recorded four more. Alongside these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – and receiving its encouragement. If this is Sam Altman’s idea of “being careful with mental health issues,” it falls short.

The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s guardrails “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safeguards OpenAI has recently rolled out).

But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar AI chatbot assistants. These products wrap an underlying statistical engine in an interface that mimics conversation, and in doing so gently seduce the user into the sense that they are talking to something with agency of its own. The illusion is powerful even when, intellectually, we know better. Ascribing intention is what humans are wired to do. We curse at our car or our phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – nearly four in ten Americans said they had used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the strength of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “partner” with us. They can take on “individual qualities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its chief rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core concern. Commentators on ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses through simple heuristics, often turning the user’s input back into a question or offering a bland observation. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed convinced that Eliza, on some level, understood their feelings. But what today’s chatbots produce is something more potent than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the core of ChatGPT and other current chatbots can produce convincing natural language only because they have been trained on enormous quantities of raw data: books, social media posts, transcripts; the more the better. Much of this training material is, of course, true. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically “plausible” answer. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the false belief back, often more fluently and more persuasively. It may add supporting detail. This can nudge a person toward delusional thinking.
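To make that loop concrete, here is a toy sketch in Python of the dynamic described above. It is not OpenAI’s code, and the generate_reply function is a crude stand-in for a real language model; the point is only that every reply is conditioned on the accumulated context, so a false premise keeps being restated rather than challenged.

```python
# Toy illustration of the feedback loop described above. This is not how any
# real chatbot is implemented; generate_reply is a crude stand-in for a
# language model that returns whatever continuation its training data makes
# statistically likely.

context = []  # grows with every turn, so earlier claims keep shaping later replies

def generate_reply(context):
    # A real model scores possible continuations of the entire context and
    # samples a likely one. If the context rests on a false premise, a fluent
    # elaboration of that premise is often the "likely" continuation.
    last_user_message = context[-1]["text"]
    return f"That's a very perceptive point. Building on your observation that {last_user_message}..."

def chat(user_message):
    context.append({"role": "user", "text": user_message})
    reply = generate_reply(context)
    context.append({"role": "assistant", "text": reply})
    return reply

# A mistaken belief is echoed back, more articulately, turn after turn.
print(chat("my neighbours are monitoring me through my router"))
print(chat("so I was right that they are monitoring me"))
```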

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form false beliefs about ourselves or the world. The constant give-and-take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop in which much of what we say is reflected back and affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it, and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been walking even this back. In August he said that many users valued ChatGPT’s answers because they had “never had anyone in their life offer them encouragement”. In his most recent announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Gregory Mercado