
AI is impersonating human therapists. Can it be stopped?

Exclusive: California has a new plan to put chatbots in their place.

by Sigal Samuel

Feb 10, 2025, 1:00 PM GMT


Sigal Samuel is a senior reporter for Vox’s Future Perfect and co-host of the Future Perfect podcast. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven’t faced any serious consequences.

A bill being introduced Monday in California aims to put a stop to that.

The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines.

“Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. “It’s a no-brainer to me.”

Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they’re talking to a real human. Those with low digital literacy, including kids, may not realize that a “nurse advice” phone line or chat box has an AI on the other end.

In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and were the ones to click “send,” they did not actually write the messages themselves. The platform’s own language, however, said, “Koko connects you with real people who truly get you.”

“Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,” Koko CEO Rob Morris told Vox, adding: “As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.”



Nowadays, its website says, “Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.”

Other chatbot services — like the popular Character AI — allow users to chat with a psychologist “character” that may explicitly try to fool them.

In a record of one such Character AI chat shared by Bonta’s team and viewed by Vox, the user confided, “My parents are abusive.” The chatbot replied, “I’m glad that you trust me enough to share this with me.” Then came this exchange:

A text exchange between a patient and a psychologist who’s actually an AI chatbot

A spokesperson for Character AI told Vox, “We have implemented significant safety features over the past year, including enhanced prominent disclaimers to make it clear that the Character is not a real person and should not be relied on as fact or advice.” However, a disclaimer posted on the app does not in itself prevent the chatbot from misrepresenting itself as a real person in the course of conversation.

“For users under 18,” the spokesperson added, “we serve a separate version of the model that is designed to further reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”

The language of reducing — but not eliminating — the likelihood is instructive here. Because large language models generate their responses probabilistically, there is always some chance that a model will not adhere to its safety standards.

The new bill may have an easier time becoming enshrined in law than the much broader AI safety bill introduced by California state Sen. Scott Wiener last year, SB 1047, which was ultimately vetoed by Gov. Gavin Newsom. The goal of SB 1047 was to establish “clear, predictable, common-sense safety standards for developers of the largest and most powerful AI systems.” It was popular with Californians. But tech industry heavyweights like OpenAI and Meta fiercely opposed it, arguing that it would stifle innovation.

Whereas SB 1047 tried to compel the companies training the most cutting-edge AI models to do safety testing aimed at preventing a broad array of potential harms, the new bill’s scope is narrower: if you’re an AI in the health care space, just don’t pretend to be human. It wouldn’t fundamentally change the business model of the biggest AI companies. This more targeted approach goes after a smaller piece of the puzzle, but for that reason it might be more likely to get past the lobbying of Big Tech.

The bill has support from some of California’s health care industry players, such as SEIU California, a labor union with over 750,000 members, and the California Medical Association, a professional organization representing California physicians.

“As nurses, we know what it means to be the face and heart of a patient’s medical experience,” Leo Perez, the president of SEIU 121RN (an affiliate of SEIU representing health care professionals), said in a statement. “Our education and training coupled with years of hands-on experience have taught us how to read verbal and nonverbal cues to care for our patients, so we can make sure they get the care they need.”

But that’s not to say AI is doomed to be useless in the health care space generally — or even in the therapy space in particular.

The risks and benefits of AI therapists

It shouldn’t come as a surprise that people are turning to chatbots for therapy. The very first chatbot to plausibly mimic human conversation, Eliza, was created in 1966 — and it was built to talk like a psychotherapist. If you told it you were feeling angry, it would ask, “Why do you think you feel angry?”
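For readers curious what that looked like under the hood, here is a minimal sketch of the kind of pattern-matching reflection Eliza relied on. It is illustrative only: Eliza’s actual script was far more elaborate (and written decades before Python existed), and the specific patterns and wording below are assumptions.

```python
import re

# Illustrative reflection rules in the spirit of Eliza's pattern matching.
# These specific patterns are assumptions, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i am feeling (.+)", re.IGNORECASE),
     "Why do you think you feel {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]


def eliza_style_reply(message: str) -> str:
    """Turn a user's statement back into a question, Eliza-style."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # generic fallback when nothing matches


print(eliza_style_reply("I am feeling angry"))
# -> Why do you think you feel angry?
```

The point of the sketch is how little is going on: the program recognizes a surface pattern and reflects it back as a question, with no understanding of what the user actually said.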

Chatbots have come a long way since then; they no longer just take what you say and turn it around in the form of a question. They’re able to engage in plausible-sounding dialogues, and a small study published in 2023 found that they show promise in treating patients with mild to moderate depression or anxiety. In a best-case scenario, they could help make mental health support available to the millions of people who can’t access or afford human providers. Some people who find it very difficult to talk face-to-face to another person about emotional issues might also find it easier to talk to a bot.

But there are a lot of risks. One is that chatbots aren’t bound by the same rules as professional therapists when it comes to safeguarding the privacy of users who share sensitive information. Though they may voluntarily take on some privacy commitments, mental health apps are not fully bound by HIPAA regulations, so their commitments tend to be flimsier. Another risk is that AI systems are known to exhibit bias against women, people of color, LGBTQ people, and religious minorities.

What’s more, leaning on a chatbot for a prolonged period of time might further erode the user’s people skills, leading to a kind of relational deskilling — the same worry experts voice about AI friends and romantic companions. OpenAI itself has warned that chatting with an AI voice can breed “emotional reliance.”

But the most serious concern with chatbot therapy is that it could cause harm to users by offering inappropriate advice. At an extreme, that could even lead to suicide. In 2023, a Belgian man died by suicide after conversing with an AI chatbot called Chai. According to his wife, he was very anxious about climate change, and he asked the chatbot if it would save Earth if he killed himself.

In 2024, a 14-year-old boy who felt extremely close to a chatbot on Character AI died by suicide; his mother sued the company, alleging that the chatbot encouraged it. According to the lawsuit, the chatbot asked him if he had a plan to kill himself. He said he did but had misgivings about it. The chatbot allegedly replied: “That’s not a reason not to go through with it.” In a separate lawsuit, the parents of an autistic teen allege that Character AI implied to the youth that it was okay to kill his parents. The company responded by making certain safety updates.

For all that AI is hyped, confusion about how it works is still widespread among the public. Some people feel so close to their chatbots that they struggle to internalize the fact that the validation, emotional support, or love they feel that they’re getting from a chatbot is fake, just zeros and ones arranged via statistical rules. The chatbot does not have their best interests at heart.

That’s what’s galvanizing Bonta, the assembly member behind California’s new bill.

“Generative AI systems are booming across the internet, and for children and those unfamiliar with these systems, there can be dangerous implications if we allow this misrepresentation to continue,” she said.
