Introduction
With the proliferation of artificial intelligence (AI) technology, how developers use data for training purposes has come under scrutiny. Recently, voice-over actors filed a proposed class action in the Southern District of New York alleging that LOVO, Inc. – a technology company that enables customers to create and edit voice-over narrations adapted from real actors through generative AI – stole their voices and identities without permission or compensation. In their suit, the actors accuse LOVO of violating state and federal law by purchasing their voice clips without disclosing how their voices would be used. Specifically, the actors allege that without their permission, LOVO used their voices in both its AI text-to-speech software and subscription services. While courts are reviewing a number of cases involving the use of data in training generative AI (GenAI) tools, this is one of the first grounded in rights of publicity.
Below we break down the plaintiffs’ key allegations and the lawsuit’s potential impact in several areas, ranging from intellectual property to insurance recovery.
Background
Plaintiffs allege that LOVO violated New York state civil and publicity laws, as well as the federal Lanham Act, through false advertising, unfair competition and unjust enrichment, among other acts, when it failed to disclose that plaintiffs’ voices would be used to train AI technologies and subsequently used their voices to produce AI-generated voice-overs for commercial purposes without permission or consent. The proposed class includes other actors whose voices were allegedly misused by LOVO. Plaintiffs seek compensatory and punitive damages exceeding $5 million, as well as injunctive relief to prevent further misuse of their voices and recovery of LOVO’s profits from the alleged scheme.
Potential impacts
While this case is one of the first in which voice actors accuse a GenAI developer of voice misappropriation, we expect to see more lawsuits and regulations involving AI-generated soundalikes and images. Many states have proposed laws to address these activities in varying capacities, with Tennessee being the first to update its right of publicity law (the Ensuring Likeness, Voice and Image Security (ELVIS) Act) to protect artists from GenAI imitations and deepfakes, which we discuss in a May 8th client alert. Notably, this case raises a number of considerations, including how rights of publicity and IP protections may apply to the use of voice in AI training, and it serves as a reminder of the importance of proper licensing terms and permissions. Businesses that develop or deploy AI should explore how to work with such technologies in ways that avoid or limit exposure, including through media liability, cyber and general liability insurance coverage.
I. Rights of privacy, publicity and intellectual property
The production of human-like voices by GenAI raises significant concerns for privacy and intellectual property rights. While many of the voice samples used to train GenAI systems supposedly originate from “publicly available” sources, serious questions continue to be raised about adequate consent and the misuse of personal data. In LOVO, the plaintiffs assert that their voices were stolen outright without permission, threatening their livelihoods, privacy and identities.
While claims of digital appropriation of celebrity likenesses are not new, as GenAI evolves and becomes ever more prevalent, the protection of the privacy and intellectual property rights of natural persons will need to be further addressed. In addition to Tennessee’s ELVIS Act, a number of states have enacted or are reviewing legislation targeting the use of biometric data, including voiceprints, in AI systems. The California Consumer Privacy Act grants individuals significant rights over the use and sale of their personal data, such as the rights to access, request deletion of and limit the processing of their information. Illinois’s Biometric Information Privacy Act regulates the collection, use, handling, storage, retention and destruction of biometric identifiers and information. These statutes have featured prominently in recent litigation over the unauthorized collection, retention and use of voiceprints.
State consumer protection laws, on which the LOVO plaintiffs rely extensively in their lawsuit, will also continue to play a prominent role in combating the unauthorized use of GenAI-produced voices, including through prohibitions on false advertising and misrepresentation where companies do not own the voice data used to train these systems or mislead consumers about the source of the voices.
Ultimately, the federal government is likely to take a broader role in regulating the production of synthesized likenesses and voices by GenAI. To date, aggrieved parties in AI misappropriation litigation have generally relied on the Lanham Act and its protection against false advertising. As noted in our May 8th client alert on the ELVIS Act, Congress recently held hearings on the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which is aimed at protecting individuals’ voices, images and likenesses from deepfakes and other digital replicas. The Federal Trade Commission has also proposed revisions to its Government and Business Impersonation Rule that would expand the rule to prohibit the impersonation of individuals and declare it an unfair and deceptive practice for companies, including GenAI platforms, to provide goods or services they know or have reason to know will be used for such impersonation.
Artists and individuals will also continue to rely on rights of publicity to protect against the unauthorized replication of their voices. While there is no federal right of publicity in the United States, various state statutes and the common law may provide some protection in this area, as illustrated by the Ninth Circuit’s ruling in Waits v. Frito-Lay, Inc. In that case, the court confirmed that “when voice is a sufficient indicia of a celebrity’s identity, the right of publicity protects against its imitation for commercial purposes without the celebrity’s consent,” and clarified the common law rule that, for a voice to be misappropriated, it must be distinctive, widely known and deliberately imitated for commercial use. See Reed Smith’s Entertainment and Media Guide to AI, specifically the “Rights of publicity” section.
The status of post-mortem rights of publicity varies widely. States like California and Tennessee offer strong post-mortem publicity rights, while others provide limited or no protection, though recent legislation in states like New York is beginning to address this. Depending on an artist’s domicile at the time of death, there may be no post-mortem right of publicity at all, which could prevent the estate from contesting the commercial use of GenAI likenesses of the artist’s voice.
II. Permissions language, licensing terms and disclosure
The LOVO complaint states that plaintiffs were originally paid to provide voice-over work for specific uses, and that defendant’s employees represented that the submitted voice-overs would be used solely for the disclosed purposes – research purposes or tests for radio advertisements, in the case of the two named plaintiffs. Further, it was never disclosed that the voice-overs might be used for other purposes, such as to generate simulated voice-overs. The alleged misrepresentation highlights a risk associated with any training data: whether AI training was contemplated in the authorizations or permissions under which the data was collected. This principle certainly arises in the context of the collection and use of personal information, but it also echoes prior lawsuits and complaints concerning the use of name, image and likeness outside the scope of granted rights. Particularly in the AI context, companies granting rights to information and data should carefully investigate what may be captured by the use of data or content for “internal” purposes or to “improve services, technology, and offerings.” Similarly, when procuring AI tools, it is prudent to ensure that models have been trained and fine-tuned with data for which applicable rights were specifically granted for such purposes, including to create and use outputs for the intended purpose.
III. Insurance coverage
In the face of novel issues and claims like those presented in this lawsuit, it is important that businesses that develop or deploy AI explore how to work with such technologies in a manner that avoids or limits their exposure.
It is critical that companies evaluate and consider their insurance coverage programs when planning for and responding to these and other AI risks. Although it is important to evaluate the entirety of the company’s insurance program for potential sources of coverage, this article focuses on a few in particular: media liability, cyber and general liability insurance.
a. Media liability insurance
Media liability insurance provides professional liability coverage geared toward companies operating in the media and advertising spaces. Although this type of insurance historically was developed with more “traditional” media companies in mind – publishers, broadcasters, advertising agencies and the like – in a digital era where virtually every company collects and disseminates information and engages with consumers and the broader public, it has become a staple for businesses of all sizes and industries.
Although terms can differ across policies, generally speaking, media liability insurance provides coverage for lawsuits or other third-party proceedings against a company related to its “errors and omissions” in the course of the “recording, editing, publication, dissemination, exhibition, broadcast or release” of content, productions or communications. The allegations in the LOVO lawsuit, which assert, among other things, that plaintiffs were harmed by LOVO's publication of media based on the use of their voices without consent, conceivably fall within these definitions, such that media liability insurance might respond.
b. Cyber insurance
Cyber insurance is a critical line of defense against cyberattacks, data breaches and other online and digital risks for businesses. Like media liability, cyber insurance is not a standardized product, so the terms can vary greatly across policies. Moreover, not all cyber policies provide both first-party and third-party coverages, so it is important to closely examine the specific terms and conditions.
As relevant here, cyber policies may provide coverage for lawsuits alleging “multimedia wrongful acts,” including infringement, misappropriation or similar conduct through the production, publication or dissemination of “sounds, images, or advertisements.” As with media liability insurance, companies should look to coverage provisions like these when faced with allegations of injury related to the publication and dissemination of media, such as the injuries alleged in the LOVO lawsuit.
c. Commercial general liability insurance
Commercial general liability (CGL) insurance is a pillar of virtually every company’s insurance program. It provides broad liability coverage for third-party bodily injury, property damage and personal and advertising injury resulting from the insured’s business operations. Notably, CGL policies are written on standardized forms, so many core terms and conditions are the same across policies.
CGL policies might cover claims like those alleged in the LOVO lawsuit under the “personal and advertising injury” coverage section, which often extends to lawsuits involving “wrongful acts” committed by a company with respect to an “oral or written publication” or “advertisement” that, for example, slanders or disparages a person or infringes upon another’s intellectual property rights. That said, due in part to the rise of insurance products specifically tailored to these types of injuries (e.g., media liability), businesses should look closely at the exclusions and endorsements in their CGL policies to confirm that coverages that might otherwise apply to such lawsuits have not been modified or excluded.
d. Exclusions
It is bedrock insurance law that exclusionary language in an insurance policy is construed narrowly against the insurer, and it is the insurer’s burden to prove that an exclusion applies. Nonetheless, it is important for businesses to closely review and understand their policy exclusions, particularly in the face of new and untested claims such as these.
For example, many insurance policies include exclusions for fraud, knowing or willful violations of the law, “expected or intended” injury, and the like. The allegations in the LOVO lawsuit include “malicious intent,” theft, intentional acts of deception and a cause of action for fraud. Although the application of these exclusions generally is not ripe until the underlying issues have been borne out through discovery or resolved by judicial determination, insurers will look to assert such exclusions, among others, as LOVO progresses.
Overall, the rapid evolution of AI technology, particularly in the production of audio/visual content, will continue to present legal and regulatory challenges across several areas. The LOVO lawsuit offers key insights for businesses venturing into GenAI development and deployment, such as the importance of respecting individuals’ rights of publicity, which protect against the unauthorized replication of their voices, and of securing proper licensing when using AI training data. To limit exposure to risks like those presented in the LOVO lawsuit, businesses can look to media liability, cyber and general liability insurance coverage.