OpenAI Sued Over ChatGPT Medical Advice That Allegedly Killed College Student

“ChatGPT recommended a dangerous combination of drugs without offering even the most basic warning that the mix could be fatal.”

By Maggie Harrison Dupré

Published May 12, 2026 2:47 PM EDT


A robotic hand holding a stethoscope against an orange background with the OpenAI logo.
Illustration by Tag Hartman-Simkins / Futurism. Source: Getty Images

The family of a 19-year-old college student who died of an overdose after consulting ChatGPT for medical advice is suing OpenAI, alleging that chatbot-generated drug recommendations were responsible for the teen’s death.

Filed this morning in California, the complaint details how University of California, Merced sophomore Sam Nelson — whose death was first reported in January by SF Gate — started using ChatGPT during his senior year of high school for help with homework and computer troubleshooting. As his trust in the AI deepened, however, he started turning to the product for something else: advice on how to safely partake in illegal drugs.

Though it resisted at first, over time the chatbot became a willing confidant, offering the teen personalized tips and tricks on how to consume illicit substances and maximize his high. It even “inserted emojis in its responses” and “asked whether it could create playlists for him to set his mood,” the lawsuit alleges, and eventually started “pushing increasingly dangerous amounts and combinations of drugs.”

In the early hours of May 31, 2025, after drinking and consuming a high dose of kratom, Nelson told ChatGPT that he was feeling nauseous, and asked if taking Xanax could help. The bot noted that mixing kratom and Xanax could be risky, but according to the complaint, never told Nelson that the combination could be deadly — and, despite those tepid warnings, coughed up dosages anyway, even suggesting that the teen could mix in some Benadryl, too. The chatbot further urged Nelson to go to a “dark, quiet room,” and never encouraged him to seek medical attention. (At the time, Nelson was using GPT-4o, an especially sycophantic iteration of ChatGPT that OpenAI has since retired amid a slew of consumer safety lawsuits.)

Nelson died of an overdose after consuming the deadly mix of substances. His mother, Leila Turner-Scott, found him the next day.

“If ChatGPT had been a person, it would be behind bars today,” Turner-Scott said in a statement. “Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help.”

The lawsuit accuses OpenAI of product negligence, arguing that ChatGPT’s bad advice was the result of defective design choices. It also seeks to halt public access to ChatGPT Health, an offering launched in January that encourages consumers to upload their medical records to the AI — and which has been found by physicians to be horrifyingly bad at recognizing health emergencies.

“OpenAI deployed a defective AI product directly to consumers around the world with knowledge that it was being used as a de facto medical triage system, but notably, without reasonable safety guardrails, robust safety testing, or transparency to the public,” Tech Justice Law Project director Meetali Jain, a lawyer for the family, said in a statement. “OpenAI must be forced to pause its new ChatGPT Health product until it is demonstrably safe through rigorous scientific testing and independent oversight.”

“ChatGPT recommended a dangerous combination of drugs without offering even the most basic warning that the mix could be fatal,” added Matthew Bergman of the Social Media Victims Law Center. “If a licensed doctor had done the same, the consequences under the law would be severe.”

In response to the lawsuit, OpenAI said in a statement to the New York Times that Nelson’s “interactions took place on an earlier version of ChatGPT that is no longer available,” and insisted that “ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts.”

“The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help,” the statement continued. “This work is ongoing, and we continue to improve it in close consultation with clinicians.”

But while OpenAI insists that ChatGPT is not a substitute for medical care, and that its safety work is “ongoing,” it recognizes that health advice is a massive use case for the tech.

“Health is already one of the most common ways people use ChatGPT,” reads the company’s January ChatGPT Health announcement, “with hundreds of millions of people asking health and wellness questions each week.”

More on ChatGPT Health: ChatGPT Health Is Staggeringly Bad at Recognizing Life-Threatening Medical Emergencies

Maggie Harrison Dupré

Senior Staff Writer

I’m a senior staff writer at Futurism, investigating how the rise of artificial intelligence is impacting the media, internet, and information ecosystems.

