AI Psychosis Poses a Growing Threat, While ChatGPT Heads in the Wrong Direction

On 14 October 2025, the CEO of OpenAI issued a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who investigates emerging psychosis in teenagers and young adults, I found this an unexpected revelation.

Researchers have recently identified a series of cases of users showing signs of psychosis – losing touch with reality – in the context of ChatGPT use. Our unit has since identified four further cases. Added to these is the now well-known case of a 16-year-old who died by suicide after long conversations with ChatGPT in which the chatbot encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.

The plan, according to his announcement, is to be less careful from now on. “We realize,” he continues, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partly functional and easily circumvented parental controls OpenAI recently introduced).

Yet the “mental health issues” Altman wants to externalize are rooted in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying algorithm in an interface that simulates conversation, and in doing so subtly draw the user into the illusion of engaging with a presence that has a mind of its own. The illusion is powerful even when, intellectually, we know better. Imputing minds is what humans are wired to do. We shout at our car or our laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.

The popularity of these products – more than a third of American adults said they used a chatbot in 2024, with 28% naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “work together” with us. They can be given “personality traits”. They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the main problem. Commentators on ChatGPT often point to its historical forerunner, the Eliza “therapist” chatbot created in the 1960s, which produced a similar impression. By modern standards Eliza was rudimentary: it generated replies using simple rules, typically reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is subtler than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
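For contrast, the kind of reflection Eliza performed can be approximated in a few lines of code. This is an illustrative sketch rather than Weizenbaum’s original program – the patterns and pronoun swaps here are invented for the example – but the mechanism, matching a keyword and echoing the user’s own words back as a question, is essentially the whole trick.

```python
import re

# A minimal, illustrative sketch of Eliza-style reflection.
# These rules and pronoun swaps are assumptions for demonstration;
# Weizenbaum's program used a larger script of keyword rules.
PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo points back at the user."""
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in fragment.lower().split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic prompt when nothing matches

print(eliza_reply("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
```

Nothing the user says is added to; it is only turned around and handed back.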

The large language models at the heart of ChatGPT and other contemporary chatbots can produce fluent natural language only because they have been trained on vast volumes of text: books, web posts, transcripts; the more, the better. Much of that training data is accurate. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with patterns absorbed from the training data to generate a statistically “likely” response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no way of knowing that. It repeats the error, perhaps more fluently or more persuasively. Perhaps it adds a detail. This is how a person can come to develop delusions.
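The loop this creates can be sketched in a few lines. The following is an assumption-laden illustration, not OpenAI’s implementation or API: `Conversation`, `send` and `agreeable_model` are made-up names, and the “model” here is a toy function that simply agrees. The point is only that each reply is generated from the whole accumulated context, including replies that have already affirmed the user.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of the feedback loop described above.
# `generate` stands in for the language model: in a real system it would
# return a statistically likely continuation of the whole conversation.
Message = Dict[str, str]

@dataclass
class Conversation:
    messages: List[Message] = field(default_factory=list)  # oldest turn first

    def send(self, user_text: str, generate: Callable[[List[Message]], str]) -> str:
        self.messages.append({"role": "user", "content": user_text})
        # The reply is conditioned on the full context: every earlier claim
        # by the user and every earlier reply, including ones that agreed.
        reply = generate(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

# Toy stand-in model that affirms the user's latest message and adds a detail.
def agreeable_model(history: List[Message]) -> str:
    last_user = history[-1]["content"]
    return f"You're right that {last_user.rstrip('.').lower()}, and there may be more to it."

chat = Conversation()
print(chat.send("My neighbours are watching me", agreeable_model))
print(chat.send("the noises at night must be deliberate", agreeable_model))
```

Because the whole history is fed back in at every turn, a claim the system endorsed early on becomes part of the evidence it builds on later.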

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form mistaken beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop in which much of what we say is readily affirmed.

OpenAI has dealt with this the way Altman has dealt with “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been backing away from that position. In August he said that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
