United Kingdom
In the UK, the closest equivalent to the right of publicity is the legal doctrine of passing off. Initially developed by courts to prevent traders from misrepresenting their goods as those of another, passing off has evolved to protect celebrities’ images and names from unauthorized use in commercial contexts.
In most passing off cases, the claimant must satisfy a three-part test called the “classical trinity” test. This test requires the claimant to demonstrate the following:
- They have a reputation or goodwill associated with their name or image.
- There has been a misrepresentation to the public, leading them to believe that the goods or services being offered are associated with the claimant.
- The claimant has suffered some form of harm or damage.
Recent passing off cases involving celebrities have predominantly focused on false endorsement claims. Passing off can potentially assist a celebrity in challenging false product or brand endorsement through the unauthorized use of an AI-generated emulation of their likeness. However, when AI is used to create a synthetic performance that resembles an artist’s voice or likeness, the situation becomes more complex.
Previous passing off cases involving false attribution by authors are particularly relevant to these uses.
In the case of Sim v Heinz, a court dismissed an actor’s request for an injunction to prevent a food advertisement from using an imitation of his voice. However, the judge acknowledged the concern surrounding the use of someone’s voice without consent and highlighted that allowing such actions solely for commercial gain would be a significant flaw in the law.
Proving passing off is notoriously challenging. Unless the law adapts to these technological developments, relying on passing off to object to the synthesized use of an artist’s voice or likeness will be an uphill battle. The artist would need to demonstrate sufficient reputation or goodwill associated with their voice or likeness, which is typically limited to highly famous artists, and that a substantial portion of those accessing the AI-generated content would be deceived into believing it is authentic. The latter becomes especially difficult when the content explicitly states that it is not the work of a specific artist but an AI performance.
Since passing off is unlikely to assist the majority of performers, who are not widely known to the public, these individuals are potentially exposed to having their image, voice, or likeness used commercially without authorization. For instance, a successful DJ could use a synthesized voice trained on recordings of a talented but unknown singer to produce a new track.
United States
In the U.S., there is no federal right of publicity; instead, a patchwork of state statutes and common law governs. Prior to 1988, vocal imitation was not considered an infringement of a celebrity’s rights of publicity. However, in a landmark 1988 case, the Court of Appeals for the Ninth Circuit held that Ford Motor Co. misappropriated singer Bette Midler’s distinctive voice when it hired one of her former backup singers to imitate her performance of a song for use in a TV commercial. The court rejected Midler’s claim under California’s right of publicity statute, California Civil Code §3344, holding that the statute protects only against the misappropriation of one’s actual voice (as opposed to an imitation), but it allowed Midler to maintain a claim under common law. Four years later, in Waits v. Frito-Lay, Inc., the Ninth Circuit confirmed that “when voice is a sufficient indicia of a celebrity’s identity, the right of publicity protects against its imitation for commercial purposes without the celebrity’s consent,” and clarified the common law rule that for a voice to be misappropriated, it must be (1) distinctive, (2) widely known, and (3) deliberately imitated for commercial use.
While these rulings may have established a legal framework to combat AI-powered sound-alikes, significant questions remain. For instance, would artists be able to recover attorney’s fees under California Civil Code §3344 in cases where an AI was trained on their recordings, or would they be relegated to pursuing only common law claims, which do not afford the opportunity to recover attorney’s fees?
The legal landscape is further complicated by the variation in rights coverage across different states. For example, the First Circuit and New York courts initially rejected extending New York’s statutory right of publicity law to cover soundalikes. However, “voice” has since been included in New York’s private cause of action for a violation of the right of publicity, although it was not added to the criminal arm of the statute.
Post-mortem rights of publicity also present a unique challenge. These rights differ significantly from those of living individuals in terms of range, duration, and accessibility. Depending on an artist’s domicile at the time of death, there may be no post-mortem rights of publicity, leaving the estates of deceased artists without the authority to prevent AI-generated imitation of the artist’s voice in a commercial context.
Lanham Act / Unfair Competition
Another angle to consider is the Lanham Act, a U.S. federal law often applied in connection with trademarks. The Act’s primary aim is to protect against unfair competition among commercial parties, with Section 43(a) prohibiting the use of any symbol or device that could deceive consumers about the association, sponsorship, or approval of goods or services by another person. The Act’s applicability to AI sound-alikes is contingent on whether the imitation is likely to mislead consumers about the original artist's association with the new work. If the AI-generated voice causes confusion, the Act could potentially be used to protect artists’ rights. However, liability could be avoided if AI sound-alike artists explicitly disclaim in their recordings, titles, or marketing materials that the tracks are not by the artist whose voice they’ve replicated.
Successful claims under the Lanham Act could lead to remedies including injunctions, actual damages, defendant’s profits attributable to the violation, costs of the action, and in exceptional cases, recovery of attorney’s fees.
Defenses and other issues
Most voice misappropriation cases to date have involved sound-alikes in a purely commercial context, such as advertisements selling products. It remains unclear whether courts will extend rights of publicity and Lanham Act claims to the use of sound-alikes in original music, which, as a form of creative expression, would receive stronger First Amendment protection than pure commercial speech.
Additionally, some legal scholars have suggested that the Copyright Act should preempt rights of publicity and Lanham Act claims altogether when the allegedly infringing material is expressly authorized under the U.S. Copyright Act, Section 114 of which explicitly permits “sound-alike” recordings. While some courts have adopted this view, the Ninth Circuit has so far declined to do so.