Sam Altman Lets Loose About AI Psychosis

“Almost a billion people use it and some of them may be in very fragile mental states.”

By Frank Landymore

Published Jan 21, 2026 4:12 PM EST

OpenAI CEO Sam Altman vented about the attention the company received over ChatGPT leading users into psychosis.
Illustration by Tag Hartman-Simkins / Futurism. Source: Kevin Dietsch / Getty Images

As uneasy questions swirl over the safety of large language models, OpenAI CEO Sam Altman took to social media to go long on the phenomenon that psychiatrists are calling “AI psychosis” — though pointedly without mentioning it by name.

The extended spiel was provoked by his longtime rival Elon Musk, who had a grave warning in response to a post claiming that Altman’s chatbot has now been linked to at least nine deaths: “Don’t let your loved ones use ChatGPT,” Musk tweeted.

Altman hit back with palpable frustration.

“Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it’s too relaxed,” he fumed. “Almost a billion people use it and some of them may be in very fragile mental states.”

He vowed that OpenAI would do its best to balance the bot’s safety and usability, but insinuated that Musk was being opportunistic with his criticism, stating that “these are tragic and complicated situations that deserve to be treated with respect.”

“It is genuinely hard,” Altman reiterated. “We need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

To an extent, you can understand Altman’s exasperation at Musk playing the pot calling the kettle black. Musk is a self-proclaimed free speech absolutist who frequently rails against “woke” ideology, and the selling point of his chatbot Grok is that it’s unfiltered and supposedly censorship-free, rarely refusing even the edgiest of requests. This has spawned controversies such as a posting spree in which it praised Nazis and styled itself “MechaHitler,” and, more recently, its generation of countless nonconsensual nudes of women and children. None of these incidents have resulted in Grok being meaningfully reined in.

Going for the knockout blow, Altman pointed out the numerous deaths linked to Tesla’s self-driving technology, which he called “far from safe.”

“I won’t even start on some of the Grok decisions,” he added.

Still, one could also accuse Altman of not adequately reckoning with the severity of the phenomenon at hand, AI psychosis, in which users become entranced by the sycophantic responses of an AI chatbot and are sent down a delusional and often dangerous mental health spiral, sometimes culminating in suicide or murder. ChatGPT alone has been linked to at least eight deaths in lawsuits filed against OpenAI, and the chatbot maker has acknowledged that somewhere around 500,000 of its users are having conversations that show signs of psychosis every week.

Altman all but waves away these grim tolls as an inevitable consequence of the product’s popularity. And even the company’s own alarming internal figures haven’t spurred the ostensibly very concerned minds at OpenAI, Altman included, to pull or at least seriously muzzle their product. In fact, the company has continued to vacillate on its safety commitments: promising a smut-friendly “adult mode” after years of resisting the bot being used for erotic outputs, and restoring access to its notoriously sycophantic GPT-4o model after fans complained that GPT-5 was too cold and “lobotomized,” then making GPT-5 more sycophantic, too.

More on AI: Something Wild Happens to ChatGPT’s Responses When You’re Cruel To It

Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.

