Who owns your voice? Scarlett Johansson OpenAI complaint raises questions. Source: nature


  • NEWS EXPLAINER
  • 29 May 2024


In the age of artificial intelligence, situations are emerging that challenge the laws over rights to a persona.


Scarlett Johansson has said she believes the OpenAI chatbot voice was intended to imitate her. Credit: Samir Hussein/WireImage via Getty

A kerfuffle erupted last week after actor Scarlett Johansson complained that one of OpenAI’s chatbot voices sounded a lot like her. It isn’t hers — the company created it using recordings from someone else. Nevertheless, the firm has suspended the voice out of respect for Johansson’s concerns. But the media flurry has cracked open a broader discussion about people’s rights to their own personas. In the age of generative artificial intelligence (genAI), are existing laws sufficient to protect the use of a person’s appearance and voice?

The answer isn’t always clear, says Carys Craig, an intellectual-property scholar at York University in Toronto, Canada, who will be speaking on this topic next month during a Canadian Bar Association webcast.

Several members of the US Congress have, in the past year, called for a federal law to enshrine such protections at the national level. And some legal scholars say that action is needed to improve privacy rights in the United States. But they also caution that hastily written laws might infringe on freedom of speech or create other problems. “It’s complicated,” says Meredith Rose, a legal analyst at the non-profit consumer-advocacy group Public Knowledge in Washington DC. “There’s a lot that can go wrong.”

“Rushing to regulate this might be a mistake,” Craig says.

Fake me

GenAI can be used to easily clone voices or faces to create deepfakes, in which a person’s likeness is imitated digitally. People have made deepfakes for fun and to promote education or research. However, they’ve also been used to sow disinformation, attempt to sway elections, create non-consensual sexual imagery or scam people out of money.

Many countries have laws that prevent these kinds of harmful and nefarious activities, regardless of whether they involve AI, Craig says. But when it comes to specifically protecting a persona, existing laws might or might not be sufficient.

Copyright does not apply, says Craig, because it was designed to protect specific works. “From an intellectual-property perspective, the answer to whether we have rights over our voice, for example, is no,” she says. Most discussions about copyright and AI focus instead on whether and how copyrighted material can be used to train the technology, and whether new material that it produces can be copyrighted.

Aside from copyright laws, some regions, including some US states, have ‘publicity rights’ that allow an individual to control the commercial use of their image, to protect celebrities against financial loss. For example, in 1988, long before AI entered the scene, singer and actor Bette Midler won a ‘voice appropriation’ case against the Ford Motor Company, which had used a sound-alike singer to cover one of her songs in a commercial. And in 1992, game-show host Vanna White won a case against the US division of Samsung when it put a robot dressed as her in a commercial.

“We have a case about a person who won against a literal robot already,” says Rose. With AI entering the arena, she says, cases will become “increasingly bananas”.

Much remains to be tested in court. The rapper Drake, for example, last month released a song featuring AI-generated voice clips of the late rapper Tupac Shakur. Drake removed the song from streaming services after receiving a cease-and-desist letter from Shakur’s estate. But it’s unclear, says Craig, whether the song’s AI component was unlawful. In Tennessee, a law passed this year, called the Ensuring Likeness Voice and Image Security (ELVIS) Act, seeks to protect voice actors at all levels of fame from “the unfair exploitation of their voices”, including the use of AI clones.

In the United States, actors have some contractual protection against AI — the agreement that in December ended the Hollywood strike of the Screen Actors Guild-American Federation of Television and Radio Artists included provisions to stop filmmakers from using a digital replica of an actor without explicit consent from the individual in each case.

Meanwhile, individual tech companies have their own policies to help prevent genAI misuse. For example, OpenAI, based in San Francisco, California, has not released to the general public the voice-cloning software that was used to make its chatbot voices, acknowledging that “generating speech that resembles people’s voices has serious risks”. Usage policies for partners testing the technology “prohibit the impersonation of another individual or organization without consent or legal right”.

Others are pursuing technological approaches to stemming misuse: last month, the US Federal Trade Commission announced the winners of its challenge to “protect consumers from the misuse of artificial intelligence-enabled voice cloning for fraud and other harms”. These include ways to watermark real audio at the time of recording and tools for detecting genAI-produced audio.

Broad scope

More worrying than loss of income for actors, say Rose and Craig, is the use of AI to clone people’s likenesses for uses including non-consensual pornography. “We have very spare, inadequate laws about non-consensual imagery in the first place, let alone with AI,” says Rose. The fact that deepfake porn is now easy to generate, including with minors’ likenesses, should be serious cause for alarm, she adds. Some legal scholars, including Danielle Citron at the University of Virginia in Charlottesville, are advocating for legal reforms that would recognize ‘intimate privacy’ as a US civil right — comparable to the right to vote or the right to a fair trial.

Current publicity-rights laws aren’t well suited to covering non-famous people, Rose says. “Right to publicity is built around recognizable, distinctive people in commercial applications,” she says. “That makes sense for Scarlett Johansson, but not for a 16-year-old girl being used in non-consensual imagery.”

However, proposals to extend publicity rights to private individuals in the United States might have unintended consequences, says Rose. She has written to the US Congress expressing concern that some of the proposed legislation could allow misuse by powerful companies. A smartphone app for creating novelty photos, for example, could insert a provision into its terms of service that “grants the app an unrestricted, irrevocable license to make use of the user’s likeness”.

There’s also a doppelganger problem, says Rose: an image or voice of a person randomly generated by AI is bound to look and sound like at least one real person, who might then seek compensation.

Laws designed to protect people can run the risk of going too far and threatening free speech. “When you have rights that are too expansive, you limit free expression,” Craig says. “The limits on what we allow copyright owners to control are there for a reason: to allow people to be inspired and create new things and contribute to the cultural conversation,” she says. Parody and other works that build on and transform an original often fall into the sphere of lawful fair use, as they should, she says. “An overly tight version [of these laws] would annihilate parody,” says Rose.

doi: https://doi.org/10.1038/d41586-024-01578-4


