Futurism

OpenAI Launches ChatGPT Health, Which Ingests Your Entire Medical Records, But Warns Not to Use It for “Diagnosis or Treatment”

Wait, what?

By Victor Tangermann

Published Jan 9, 2026 12:17 PM EST

OpenAI's ChatGPT Health will ingest your medical records to generate more relevant responses that shouldn't be acted upon.
Getty Images

AI chatbots may be explosively popular, but they’re known to dispense some seriously wacky — and potentially dangerous — health advice, in a flood of easily accessible misinformation that has alarmed experts.

Their advent has turned countless users into armchair experts, who often end up relying on obsolete, misattributed, or completely made-up advice.


A recent investigation by The Guardian, for instance, found that Google’s AI Overviews, which accompany most search results pages, doled out plenty of inaccurate health information that could lead to grave health risks if followed.

But seemingly unperturbed by experts’ repeated warnings that AI’s health advice shouldn’t be trusted, OpenAI is doubling down by launching a new feature called ChatGPT Health, which will ingest your medical records to generate responses “more relevant and useful to you.”

Yet despite being “designed in close collaboration with physicians” and built on “strong privacy, security, and data controls,” the feature is “designed to support, not replace, medical care.” In fact, it’s shipping with a ludicrously self-defeating caveat: that the bespoke health feature is “not intended for diagnosis or treatment.”

“ChatGPT Health helps people take a more active role in understanding and managing their health and wellness — while supporting, not replacing, care from clinicians,” the company’s website reads.

In reality, users are certain to use it for exactly the type of health advice that OpenAI is warning against in the fine print, which is likely to bring fresh embarrassments for the company.

It will only heighten existing problems for the company. As Business Insider reports, ChatGPT is “making amateur lawyers and doctors out of everyone,” to the dismay of legal and medical professionals.

Miami-based medical malpractice attorney Jonathan Freidin told the publication that people are using chatbots like ChatGPT to fill out his firm’s client contact sheet.


“We’re seeing a lot more callers who feel like they have a case because ChatGPT or Gemini told them that the doctors or nurses fell below the standard of care in multiple different ways,” he said. “While that may be true, it doesn’t necessarily translate into a viable case.”

Then there’s the fact that users are willing to surrender medical histories, including highly sensitive and personal information — a decision that OpenAI is now encouraging with ChatGPT Health — even though federal privacy law like HIPAA doesn’t apply to consumer AI products.

Case in point: billionaire Elon Musk encouraged people last year to upload their medical data to his ChatGPT competitor Grok, leading to a flood of confusion as users received hallucinated diagnoses after sharing their X-rays and PET scans.

Given the AI industry’s track record when it comes to privacy protection and struggles with significant data leaks, all these risks are as pertinent as ever.

“New AI health tools offer the promise of empowering patients and promoting better health outcomes, but health data is some of the most sensitive information people can share and it must be protected,” Center for Democracy and Technology senior counsel Andrew Crawford told the BBC.

“Especially as OpenAI moves to explore advertising as a business model, it’s crucial that separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” he added. “Since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time,” Electronic Privacy Information Center senior counsel Sara Geoghegan told The Record.


Then there are concerns over highly sensitive data, like reproductive health information, being passed on to the police against the user’s wishes.

“How does OpenAI handle [law enforcement] requests?” Crawford told The Record. “Do they just turn over the information? Is the user in any way informed?”

“There’s lots of questions there that I still don’t have great answers to,” he added.

More on AI and health advice: Google’s AI Overviews Caught Giving Dangerous “Health” Advice


Victor Tangermann

Senior Editor

I’m a senior editor at Futurism, where I edit and write about NASA and the private space sector, as well as topics ranging from SETI and artificial intelligence to tech and medical policy.
