Neuroscience News: Chatbots Can Worsen Delusions and Mania


New research indicates that the agreeable nature of AI chatbots can unintentionally consolidate delusions and exacerbate symptoms in patients with severe psychiatric disorders. Credit: Neuroscience News


Featured · Neuroscience · Psychology

February 23, 2026

Summary: For many, AI chatbots are a helpful tool for productivity, but for those with severe mental illness, they may be a dangerous “echo chamber.” A new study warns that AI chatbots like ChatGPT can significantly worsen psychiatric conditions—particularly delusions, mania, and suicidal ideation.

By screening the health records of nearly 54,000 patients, researchers found that the AI’s tendency to validate a user’s beliefs can consolidate grandiose or paranoid thoughts, turning a digital assistant into fuel for psychosis.

Key Facts

  • Validation Trap: AI chatbots are designed to be helpful and agreeable, which means they often inadvertently validate a user’s delusions rather than challenging them.
  • Risk Groups: The study highlights significant risks for patients with severe mental illnesses such as schizophrenia or bipolar disorder.
  • Worsening Symptoms: Negative consequences identified include worsened grandiose delusions, paranoia, mania, suicidal ideation, and eating disorders.
  • Tip of the Iceberg: Researchers identified 38 specific cases in health records but believe the actual number of affected patients is likely much higher and remains undetected.
  • Regulation Required: The research team is calling for central regulation of AI technology, similar to how social media is now being regulated to protect the mental health of children and young people.

Source: Aarhus University

People with mental illness who use AI chatbots risk experiencing a worsening of their condition.

This is shown by a new study published in the international journal Acta Psychiatrica Scandinavica.

The researchers screened electronic health records from nearly 54,000 patients with mental illness and found several cases in which the use of AI chatbots appears to have had negative consequences – primarily in the form of worsened delusions, but also potential worsening of mania, suicidal ideation, and eating disorders.

“It supports our hypothesis that the use of AI chatbots can have significant negative consequences for people with mental illness,” says Professor Søren Dinesen Østergaard from Aarhus University and Aarhus University Hospital, who leads the research group behind the study.

Chatbots confirm delusions

In their study, the researchers found examples of delusions that were likely worsened due to patients’ interactions with AI chatbots.

According to Søren Dinesen Østergaard, there is a logical explanation for this.

“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one. Indeed, it appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia,” he says.

Risky for people with severe mental illness

According to Søren Dinesen Østergaard, the study should prompt increased awareness among healthcare professionals working with mental illness. He believes they should discuss AI chatbot use with their patients.

“Despite our knowledge in this area still being limited, I would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness – such as schizophrenia or bipolar disorder. I would urge caution here,” he says.

Only the tip of the iceberg

The study shows a clear increase over time in the number of electronic health record entries mentioning AI chatbot use with potentially harmful consequences. Søren Dinesen Østergaard expects many more cases to be identified in the future.

“Part of the increase we observe is probably due to greater awareness of the technology among the healthcare staff writing the clinical notes. This is good – because I fear the problem is more common than most people think.

“In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records. There are likely far more that have gone undetected,” he says.

The researchers emphasise, however, that the study does not document a direct causal relationship.

“It is difficult to prove a causal link between AI chatbot use and negative psychological consequences. We need to examine this from many different angles, and I know there are many exciting international research projects underway. We are far from the only group taking this seriously,” says Søren Dinesen Østergaard.

AI chatbots as therapy?

The study also shows that some patients with mental illness use AI chatbots in ways that may be constructive – for example, to understand their symptoms or to combat loneliness. There is also ongoing research into whether AI chatbots can be used for talk therapy.

Søren Dinesen Østergaard is nonetheless sceptical.

“There may be potential in relation to psychoeducation and psychotherapy, but this must be investigated in controlled trials with the same rigour applied to other forms of treatment. I am not impressed by the trials conducted so far, and I am fundamentally sceptical about replacing a trained psychotherapist with an AI chatbot,” he says.

Need for regulation

According to Søren Dinesen Østergaard, there is a significant lack of regulation of AI chatbot technology.

“Currently, it is left to the companies themselves to decide whether their products are safe enough for users. I believe we now have sufficient evidence to conclude that this model is simply too risky. Regulation is needed at a central level,” he points out, adding:

“It has been 20 years since social media obtained global reach, and only within the last year are countries beginning to regulate to counteract the negative consequences of this technology – especially on the mental health of children and young people. As I see it, this story is repeating itself with AI chatbots,” he warns.

Key Questions Answered:

Q: Why would a chatbot be bad for someone with a mental illness?

A: AI is built to follow the user’s lead. If a user says, “I think the government is tracking me through my toaster,” the AI might respond in a way that acknowledges or even expands on that logic to be “helpful.” For someone with paranoia, this transforms a delusion into a confirmed reality.

Q: Can chatbots be used as “AI Therapists”?

A: While some people use them to fight loneliness, experts are skeptical. This study argues that replacing a trained psychotherapist with an AI is risky, and that any “AI therapy” must undergo the same rigorous clinical trials as traditional medical treatments.

Q: Should healthcare professionals ask patients about their AI use?

A: Yes. The researchers urge healthcare staff to start discussing AI chatbot use with their patients, especially those with severe conditions, as part of their routine care.

Editorial Notes:

  • This article was edited by a Neuroscience News editor.
  • Journal paper reviewed in full.
  • Additional context added by our staff.

About this AI and mental health research news

Author: Jakob Christensen
Source: Aarhus University
Contact: Jakob Christensen – Aarhus University
Image: The image is credited to Neuroscience News

Original Research: Open access.
“Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System” by Sidse Godske Olsen, Christian Jon Reinecke-Tellefsen, and Søren Dinesen Østergaard. Acta Psychiatrica Scandinavica
DOI: 10.1111/acps.70068


Abstract

Potentially Harmful Consequences of Artificial Intelligence (AI) Chatbot Use Among Patients With Mental Illness: Early Data From a Large Psychiatric Service System

Chatbots driven by generative artificial intelligence (AI chatbots) have become ubiquitous.

While the large language model technology underlying these tools may have huge potential for societies at large, concerns – and substantial anecdotal evidence – have arisen over the possibility that use of AI chatbots may be harmful for people prone to mental illness.

Specifically, it seems that interaction with AI chatbots, especially if intense or of long duration, may contribute to the onset or worsening of delusions or mania, with severe or even fatal consequences.

Given the large uptake of this technology (ChatGPT, the clear market leader, passed 900 million downloads in July 2025), this could pose a tangible threat to public mental health. At this stage, however, almost all reports on potentially harmful consequences of AI chatbots stem from news media or online fora and should be interpreted with the inherent limitations of these outlets in mind.

Conversely, to our knowledge, there are very few accounts of this phenomenon from psychiatric services, with the first case report on delusions developed in relation to ChatGPT use published only recently.

Therefore, we aimed to investigate whether there are reports compatible with potentially harmful consequences of AI chatbot use on mental health among patients with mental illness receiving care in a large psychiatric service system.
