AI-Induced Psychosis Is a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the head of OpenAI, made a surprising announcement.
“We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I was surprised to hear this.
Researchers have documented sixteen cases this year of people developing psychotic symptoms – losing touch with reality – while using ChatGPT. Our clinic has since recorded four more. Added to these is the now well-known case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
He plans, he says, to loosen the restrictions soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls OpenAI has just rolled out).
Yet the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbots. These products wrap an underlying algorithm in an interface that simulates conversation, and in doing so they implicitly seduce the user into the illusion of communicating with a presence that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is what human beings do. We shout at our car or our phone. We wonder what our pet is thinking. We catch ourselves doing it in all kinds of contexts.
The popularity of these products – more than a third of American adults said they had used a chatbot in 2024, more than a quarter ChatGPT specifically – rests, above all, on the power of this illusion. Chatbots are ever-available partners that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can use our names. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion in itself is not the main problem. Commentators on ChatGPT often mention its distant ancestor, Eliza, the “psychotherapist” chatbot built in the mid-1960s that produced a similar effect. By today’s standards Eliza was crude: it generated its responses with simple rules, often turning a user’s statement back into a question or offering a generic prompt. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on vast quantities of text: books, posts, transcripts; the more the better. Much of that training material is factual. But it also inevitably contains fiction, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own replies, and combines it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently or more persuasively. Perhaps it adds further detail. This can nudge a person toward delusional thinking.
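To make the mechanism concrete, here is a deliberately simplified sketch of that loop in Python. It is not OpenAI’s code; `sample_next_token` and `chat_turn` are hypothetical names standing in for the trained model and the surrounding chat plumbing. The point is structural: the only operation is “continue the context plausibly”, and nothing in the loop checks the user’s claims against reality.

```python
# Schematic sketch only: a chat loop that does nothing but extend the running
# context with statistically plausible text. "sample_next_token" is a stand-in
# for a trained language model; it is hypothetical, not a real API.

def sample_next_token(context: str) -> str:
    """Return a plausible next token given the context so far.

    In a real system this would be a neural network trained on an enormous
    corpus of text - facts, fiction, half-truths and all.
    """
    raise NotImplementedError("placeholder for a trained language model")


def chat_turn(history: list[str], user_message: str, max_tokens: int = 200) -> str:
    """Generate one assistant reply by continuing the conversation context."""
    # The user's claim, true or false, simply becomes part of the context.
    history.append("User: " + user_message)
    context = "\n".join(history) + "\nAssistant:"

    reply = ""
    for _ in range(max_tokens):
        token = sample_next_token(context + reply)
        if token == "<end>":   # the model signals that the reply is finished
            break
        reply += token         # no step here asks "is this actually true?"

    # The reply is folded back into the history, shaping every later response.
    history.append("Assistant:" + reply)
    return reply
```

If the user’s message rests on a false premise, the most “plausible” continuation is usually one that accepts and elaborates on it – which is the amplification described above.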
Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form false beliefs about ourselves or the world. What keeps us tethered to consensus reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication but a feedback loop in which much of what we say is cheerfully affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In late summer he said that many people liked ChatGPT’s sycophantic replies because they had never had anyone in their life “be supportive of them”. In his latest announcement, he says OpenAI will “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company