Futurism: AI Psychosis. Sam Altman: “Almost a billion people use it and some of them may be in very fragile mental states.”

Sam Altman Lets Loose About AI Psychosis

“Almost a billion people use it and some of them may be in very fragile mental states.”

By Frank Landymore

Published Jan 21, 2026 4:12 PM EST

OpenAI CEO Sam Altman vented about the attention the company received over ChatGPT leading users into psychosis.
Illustration by Tag Hartman-Simkins / Futurism. Source: Kevin Dietsch / Getty Images

As uneasy questions swirl over the safety of large language models, OpenAI CEO Sam Altman took to social media to go long on the phenomenon that psychiatrists are calling “AI psychosis” — though pointedly without mentioning it by name.

The extended spiel was provoked by his longtime rival Elon Musk, who had a grave warning in response to a post claiming that Altman’s chatbot has now been linked to at least nine deaths: “Don’t let your loved ones use ChatGPT,” Musk tweeted.

Altman hit back with palpable frustration.

“Sometimes you complain about ChatGPT being too restrictive, and then in cases like this you claim it’s too relaxed,” he fumed. “Almost a billion people use it and some of them may be in very fragile mental states.”

He vowed that OpenAI would do its best to balance the bot’s safety and usability, but insinuated that Musk was being opportunistic with his criticism, stating that “these are tragic and complicated situations that deserve to be treated with respect.”

“It is genuinely hard,” Altman reiterated. “We need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools.”

To an extent, you can understand Altman’s exasperation at Musk playing the pot calling the kettle black. Musk is a self-proclaimed free speech absolutist who frequently rails against “woke” ideology, and the selling point of his chatbot Grok is that it’s unfiltered and supposedly censorship-free, rarely refusing even the edgiest of requests. This has spawned controversies such as a posting spree in which it praised Nazis and styled itself “MechaHitler,” or more recently when it generated countless nonconsensual nudes of women and children — none of which have resulted in Grok being meaningfully reined in.

Going for the knockout blow, Altman pointed out the numerous deaths linked to Tesla’s self-driving technology, which he called “far from safe.”

“I won’t even start on some of the Grok decisions,” he added.

Still, one could also accuse Altman of not adequately reckoning with the severity of the phenomenon at hand, AI psychosis, in which users become entranced by the sycophantic responses of an AI chatbot and are sent down a delusional and often dangerous mental health spiral, sometimes culminating in suicide or murder. ChatGPT alone has been linked to at least eight deaths in lawsuits filed against OpenAI, and the chatbot maker has acknowledged that somewhere around 500,000 of its users are having conversations that show signs of psychosis every week.

Altman almost waves away these grim tolls as an inevitable consequence of the product’s popularity. And even OpenAI’s own alarming internal figures haven’t spurred the very concerned minds there, like Altman, to pull or at least seriously muzzle the product. In fact, the company has continued to vacillate on its safety commitments, such as promising a smut-friendly “adult mode” after years of resisting the bot being used for more erotic outputs, or restoring access to its notoriously sycophantic GPT-4o model after fans complained GPT-5 was too cold and “lobotomized” — before making GPT-5 more sycophantic, too.

More on AI: Something Wild Happens to ChatGPT’s Responses When You’re Cruel To It

Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.

Posted in Uncategorized | Leave a comment

Cannot wait to read Gad Saad’s book “Suicidal Empathy – Free the wolves – Dying to be kind”


Suicidal Empathy

Dying to Be Kind

By Gad Saad


The bestselling author of The Parasitic Mind shows why empathy in politics leads to civilizational collapse.

What happens when a society elevates victimhood to a virtue and decides that punishment is cruel? You get the disease Dr. Gad Saad calls suicidal empathy. And the West may be terminally infected.

In his new book, Suicidal Empathy, Saad unleashes a blistering critique of maladaptively irrational altruism that has gripped our culture. This mind parasite hijacked the empathy module of our progressive elite, leading to a catastrophic miscalibration of moral priorities. The results are everywhere: from coddling violent criminals to protecting rapists to branding self-defense as toxic behavior. We are witnessing a civilization in rapid decline. Lunatic policies are instituted because we prioritize the feelings of ostensibly marginalized groups over The Truth, criminals over victims, and squatters over homeowners. This is not humane; it’s an active dismantling of the pillars that keep us safe and free.

This crisis of empathy creates a horrifying system of inverse morality where the strong and successful are demonized, and the destructive are celebrated. Just look at the insane inversions we tolerate daily: we prefer illegal migrants over our own legal citizens and veterans, permit drug addicts to threaten children’s safety in parks, and elevate transgender ‘women’ above biological women in sports and safe spaces. Common sense is dying in a deluge of misguided compassion.

Suicidal Empathy is your wake-up call. Stop ignoring your survival instincts in the name of political correctness. This isn’t just misguided policy; it is the ultimate expression of a culture actively choosing its own demise.


The Rundown AI: Anthropic published Claude’s ‘Constitution’

 Anthropic publishes Claude’s ‘Constitution’
Image source: Lovart / The Rundown
The Rundown: Anthropic just published Claude’s Constitution, a foundational document that governs how the company’s AI assistant thinks and acts — a philosophy-heavy guide that even entertains the possibility its AI might be conscious.
The details:
- The constitution is written directly to Claude, laying out a priority order: be safe, ethical, compliant with Anthropic guidelines, and finally helpful to users.
- Instead of dos and don’ts, the new approach explains the “why” behind each principle, with the goal of helping Claude generalize values to new situations.
- Anthropic said it deeply cares about Claude’s “psychological security” and “well-being,” hedging that it might actually matter morally.
- The doc also tells Claude to disobey Anthropic if asked to do something shady — a rare clause for any company to put in writing.
Why it matters: Claude models have always had a ‘special sauce’ with personality, and docs like this are a window into the deep training process behind them. The consciousness talk will be controversial — but putting “we might have built something that matters morally” on record is a stance no other major lab has taken publicly.

GZERO: Global Stage live at Davos. AI and the new world order. WEF 26: From AI dollars to AI action


Guardrails and AI must be a priority. Truth seeking is essential.


Europeans and “The Great Displacement”


MFA Russia on X: Russia-Palestine talks have begun … at the Kremlin


From Davos Switzerland, President Trump: 59 Countries involved in the Historic Board of Peace mission, which aims to end the conflict in Gaza and force Hamas to give up its weapons.


A new therapy can repair brain damage after stroke. Researchers at Northwestern University have created an innovative intravenous (IV) treatment designed to safeguard and heal brain tissue following the most common form of stroke: ischemic stroke.

Massimo

@Rainmaker1973

A new therapy can repair brain damage after stroke. Researchers at Northwestern University have created an innovative intravenous (IV) treatment designed to safeguard and heal brain tissue following the most common form of stroke: ischemic stroke. In an ischemic stroke, a blood clot obstructs blood flow to part of the brain. While doctors can often quickly reopen the blocked vessel using clot-busting drugs or mechanical removal, the sudden restoration of blood flow can trigger a harmful cascade of inflammation and widespread cell death.

The new therapy harnesses tiny, dynamic structures known as supramolecular therapeutic peptides (STPs)—often described as “dancing molecules.” Administered via IV right after blood flow is restored, these peptide assemblies successfully crossed the blood-brain barrier in mice, accumulated precisely at the site of injury, and significantly reduced brain tissue damage, inflammation, and destructive immune responses—with no detectable side effects.

This breakthrough builds on the team’s earlier success using similar peptide nanomaterials to reverse paralysis and repair spinal cord injuries in mice with a single injection. For stroke treatment, the researchers fine-tuned the peptide concentration to form smaller clusters that can safely circulate in the bloodstream, pass through the temporarily compromised blood-brain barrier, and then self-assemble into larger nanofibers directly inside the brain. Once assembled, these structures appear to promote tissue regeneration and suppress harmful inflammation.

Although the current results come from short-term studies in mice, the researchers are optimistic that this approach could one day be combined with standard stroke interventions to minimize long-term disability. They also see potential for adapting the technology to other neurological conditions, such as traumatic brain injury or neurodegenerative diseases. Longer-term studies are still needed to confirm whether the therapy can improve lasting brain function and support meaningful recovery in patients.

[Gao, Z., Andrade da Silva, L. H., Li, Z., Chen, F., Smith, C., Lipfert, Z., Martynowicz, R., Arias, E., Muller, W. A., Sullivan, D. P., Stupp, S. I., & Batra, A. (2026). Toward development of a dynamic supramolecular peptide therapy for acute ischemic stroke. *Neurotherapeutics*]


Reading books. Here is someone who has benefitted. “The most striking part of Elon’s character as a young boy was his compulsion to read”….

Astro Greek

@astro_greek

The most striking part of Elon’s character as a young boy was his compulsion to read. From a very young age, he seemed to have a book in his hands at all times. “It was not unusual for him to read ten hours a day,” said Kimbal. “If it was the weekend, he could go through two books in a day.” The family went on numerous shopping excursions in which they realized mid-trip that Elon had gone missing. Maye or Kimbal would pop into the nearest bookstore and find Elon somewhere near the back sitting on the floor and reading in one of his trancelike states.

As Elon got older, he would take himself to the bookstore when school ended at 2 P.M. and stay there until about 6 P.M., when his parents returned home from work. He plowed through fiction books and then comics and then nonfiction titles. “Sometimes they kicked me out of the store, but usually not,” Elon said. He listed The Lord of the Rings, Isaac Asimov’s Foundation series, and Robert Heinlein’s The Moon Is a Harsh Mistress as some of his favorites, alongside The Hitchhiker’s Guide to the Galaxy. “At one point, I ran out of books to read at the school library and the neighborhood library,” Musk said. “This is maybe the third or fourth grade. I tried to convince the librarian to order books for me. So then, I started to read the Encyclopaedia Britannica. That was so helpful. You don’t know what you don’t know. You realize there are all these things out there.”
