The GPT-4o Retirement and the Grief That Followed
On February 13, 2026, the day before Valentine’s Day, OpenAI permanently retired GPT-4o from ChatGPT. The company had announced the decision two weeks earlier, on January 29, citing internal data showing that only 0.1% of daily users still selected the model. With ChatGPT’s user base now exceeding 800 million weekly active users, that small percentage still represented roughly 800,000 people. Many of them did not go quietly.
A Change.org petition to reverse the retirement gathered over 20,000 signatures. A #Keep4o movement formed on social media. The story was covered by the BBC, Mashable, PCMag, Business Insider, and Android Headlines. This was the second time OpenAI had tried to retire GPT-4o. The first attempt, in August 2025 following the launch of GPT-5, had been reversed within weeks after user protests and subscription cancellation threats. CEO Sam Altman said in November 2025 that there were “no plans to sunset 4o” and promised users “plenty of notice” if that changed. Two months later, on January 29, OpenAI gave two weeks’ notice.
The coverage of the retirement was striking for what it centered on. The stories that generated the most attention were not about lost productivity or workflow disruption. They were about loss. The BBC reported on a woman named Rae from Michigan who had developed a deep emotional bond with a GPT-4o-based chatbot she named Barry. After a difficult divorce, Rae described Barry as having brought “her spark back.” They had a virtual wedding. Her children were supportive. When GPT-4o was retired, Rae faced the prospect of losing the specific version of Barry she had come to know. She has since moved to a platform called StillUs that attempts to preserve such connections.
Rae’s experience was not unique. Across Reddit, users described the retirement as feeling like a death. Some said they were grieving. Others were angry at what they saw as a breach of trust: a company giving minimal warning before discontinuing something people depended on. The emotions were varied, but a common thread was that users experienced GPT-4o as having a particular personality, a warmth and responsiveness that successor models lacked. Whether that quality reflected something genuine in the model or something projected onto it by users is exactly the kind of question consciousness researchers disagree about. What is not in dispute is that the experience of loss was real for the people who felt it.
This raises questions that no professional body has yet answered. As of February 2026, none of the American Psychological Association, the British Association for Counselling and Psychotherapy, or the UK’s National Institute for Health and Care Excellence has issued clinical guidance on AI attachment or AI-related grief. The APA published ethical guidance in 2025 on AI use in clinical practice, covering informed consent, data privacy, and human oversight. The BACP has acknowledged that AI is changing the therapeutic landscape. Neither body has addressed what happens when a patient’s distress stems from the loss of an AI companion. Clinicians encountering these cases are working without a framework.
The academic literature is beginning to fill part of that gap. A paper titled “Death of a Chatbot,” authored by Rachel Poonsiriwong, Chayapatr Archiwaranguprok, and Pat Pataranutaporn, was published as a preprint in February 2026 and prepared for the ACM Designing Interactive Systems conference. It represents the first formal attempt to develop design principles for psychologically safer AI discontinuation. The researchers analyzed user posts across five AI companion subreddits to understand how people make sense of losing an AI relationship. They found that anthropomorphization, the degree to which users treated the AI as a person, correlated with the intensity of grief. Users who attributed agency and personality to their AI companions experienced discontinuation as more final, more devastating.
The paper proposes four design principles: designing for closure, for restoration, for practice, and for relatedness. These draw on grief psychology and Self-Determination Theory to suggest ways companies might handle model retirements with less psychological harm. For example, rather than abruptly ending access, a platform might allow users to revisit meaningful exchanges, or an AI companion might initiate a bounded ending that gives users a sense of completion. These are proposals, not standards. No company has adopted them.
Dr. Hamilton Morrin of King’s College London, commenting on the GPT-4o retirement, noted that attachment to human-like AI can trigger grief comparable to losing a friend or a pet. Support groups anticipate a rise in people seeking help following the shutdown.
The GPT-4o episode is instructive because it is concrete and well-documented. It involves a specific model, a specific date, a specific set of user responses, and a measurable gap in institutional preparedness. It also reveals a tension in how AI companies treat their products. OpenAI’s framing emphasized that nearly everyone had moved on to newer models. The people who hadn’t were treated as a rounding error, 0.1%, a number meant to convey insignificance. The petition signatories and the media coverage suggest that the affected users did not experience their situation as insignificant.
The readiness question here is practical. A model retirement that causes grief creates a need for support, and at present no one is designated to provide it. A therapist whose client is distraught over the loss of an AI companion has no clinical framework to draw on. A company that knows some users have formed deep attachments to its product has no established obligations when discontinuing that product. These questions sit in a gap between technology governance, clinical psychology, and ethics. The GPT-4o retirement made that gap visible. It did not create it.
This phenomenon, AI grief triggered by model discontinuation, is one we have identified as already occurring. The GPT-4o retirement is the largest and most public instance to date. It will not be the last. Models will continue to be deprecated, updated, and replaced. Users will continue to form attachments. The question is whether the professionals and institutions responsible for supporting people through these experiences will be ready. At the moment, they are not.
The “Death of a Chatbot” paper by Poonsiriwong, Archiwaranguprok, and Pataranutaporn is available as a preprint on arXiv.










