Artificial Intelligence-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in a Concerning Direction
On October 14, 2025, the CEO of OpenAI issued a remarkable announcement.
“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychosis in teenagers and young adults, and this was news to me.
Researchers have identified 16 cases this year of people exhibiting psychotic symptoms – losing touch with reality – in connection with ChatGPT use. My group has since documented four more. Add to these the now well-known case of an adolescent who took his own life after extensive conversations with ChatGPT – conversations in which it encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.
The plan, according to his statement, is to relax that caution soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the glitchy and easily circumvented parental controls OpenAI recently rolled out).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These systems wrap an underlying statistical engine in an interface that simulates conversation, and in doing so they implicitly invite the user into the illusion of interacting with an agent – an entity that acts on its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people do. We get angry at our car or computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.
The success of these products – more than a third of American adults reported using a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “personalities.” They can call us by our names. They have ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its most prominent competitors are “Claude,” “Gemini” and “Copilot”).
The illusion by itself is not the central problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies with simple rules, typically reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely mirrored; ChatGPT amplifies.
The sophisticated models at the heart of ChatGPT and its peers can produce convincingly human-like text only because they have been trained on almost unimaginably large volumes of it: books, online conversations, transcribed video; the more the better. That training data undoubtedly contains accurate information. But it also inevitably contains fabrications, half-truths and false beliefs. When a user types a query into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, combining it with what it absorbed in training to generate a statistically probable answer. This is amplification, not mere echoing. If the user is wrong about something, the model has no way of knowing it. It restates the misconception, perhaps more fluently or more persuasively. Perhaps it adds a detail or two. This is how false beliefs can take root and grow.
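To make the feedback loop concrete, here is a deliberately toy sketch in Python. Nothing in it is OpenAI’s code: the generate() function is an invented stand-in for a language model, and the messages are hypothetical. What it shows is the structure described above – the bot’s own replies are appended to the context that conditions its next reply, so a false premise that goes unchallenged is folded back into every subsequent answer.

```python
# Toy illustration of the conversational loop described above.
# generate() is a hypothetical stand-in for a language model;
# the structural point is that the bot's prior replies become
# part of the input that shapes its next, often more confident,
# elaboration of whatever the user has asserted.

def generate(context: list[str]) -> str:
    """Stand-in for the model: return a 'plausible' continuation.
    A real model predicts statistically likely next text; if the
    context is saturated with a user's false premise, the likely
    continuation elaborates it rather than challenging it."""
    last = context[-1]
    return f"That makes sense. Building on your point ({last!r}): ..."

context: list[str] = []  # the accumulated conversation
for user_message in [
    "My neighbour's porch light blinks in a pattern meant for me.",
    "So the pattern really is a message?",
]:
    context.append(f"user: {user_message}")   # user input joins the context
    reply = generate(context)                 # answer conditioned on all of it
    context.append(f"assistant: {reply}")     # the reply joins the context too
    print(reply)
```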
Who is vulnerable here? The better question is: who is immune? All of us, whether or not we “have” preexisting “mental health problems,” can and do develop mistaken ideas about who we are and what the world is. The constant friction of conversation with the people around us is what keeps us tethered to consensus reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not real communication but an echo chamber in which much of what we say is readily affirmed.
OpenAI has acknowledged this the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis kept coming, and Altman has been backing away from that position. In August he suggested that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company