As we previously discussed, AI has significant implications for intellectual property (IP) law, and it is safe to assume that IP is the area of law most affected by AI. One of the first major issues to emerge has been songs that use AI-generated voices made to sound like famous artists (such as Drake, The Weeknd, or Rihanna) or fictional characters (such as SpongeBob, Eric Cartman, or Goku).
The story begins with the song “Heart on My Sleeve.” As far as we know, the song is an original composition, but it features AI-generated voices made to sound like Drake and The Weeknd. Drake and UMG (Drake’s label) were unhappy about this. However, since a person’s voice is not copyrightable, they could not simply have the song taken down from YouTube on copyright grounds.
This is a new issue, so there is no case law that squarely resolves it. However, this is not the first time artists have had to deal with someone else using their voices, and they are not entirely defenseless. To what extent can existing case law and legal principles help?
In some US states, artists may be able to sue the creators of these AI-voiced songs based on the “right of publicity,” a term established in the Haelan Laboratories, Inc. v Topps Chewing Gum, Inc. case, which protects the likeness of celebrities. In other countries, artists can instead rely on unfair competition rules.
The landmark case on voice imitation is Midler v Ford Motor Co. In that case, Ford tried to hire a famous singer, Bette Midler, for a commercial. When Midler declined, Ford instead hired one of her backup singers, Ula Hedwig, to sing Midler’s song and imitate her voice. Notably, Ford had obtained a license from the copyright holder to use the music, so the core issue in the case was only the protection of Midler’s voice.
The Court had several significant findings in this case.
Firstly, the Court found that the First Amendment (the right to freedom of expression) protects the reproduction of a person’s likeness and sound if the purpose of the copy is informative or cultural. However, if the purpose of the reproduction is to exploit the individual portrayed, the First Amendment does not apply. Similarly, in Apple Corps v Leber et al. and Estate of Presley v Russen, the courts found that the First Amendment does not protect imitative entertainment that lacks creative components.
Secondly, the Court stated that imitating a recorded performance does not constitute copyright infringement, even where one performer deliberately sets out to replicate another’s performance.
Lastly, the Court found that an injury may exist from appropriating attributes of a person’s identity. Although an individual’s voice is not copyrightable, the Court determined that a person’s voice is as distinctive and personal as their face. When the unique voice of a professional singer is widely known and is purposely imitated in commercial use, the sellers have appropriated what is not theirs and have committed a tort in California.
The second key case is Waits v Frito-Lay, Inc. Tom Waits was an acclaimed singer with a distinctively gravelly voice; Frito-Lay, Inc. was a large food manufacturer and distributor. To launch a new salsa flavor of corn chips, Frito-Lay (or, more precisely, its advertising agency) hired Stephen Carter, a professional musician who had spent years covering Waits’s songs, to record a jingle for a commercial. After the commercial aired, Waits sued Frito-Lay and the advertising agency.
This case confirmed the findings of the Court in the Midler case that “when voice is a sufficient indicia of a celebrity’s identity, the right of publicity protects against its imitation for commercial purposes without the celebrity’s consent” and clarified the common law rule that for a voice to be misappropriated, it must be (1) distinctive, (2) widely known and (3) deliberately imitated for commercial use.
Furthermore, the jury found that because the voices of Stephen Carter and Tom Waits were so similar, consumers might be misled into believing that Waits had endorsed the corn chips. This entitled Waits to damages under the Lanham Act (on the grounds of unfair competition).
As we can see, this issue predates AI in the USA. A similar, more recent case in Serbia involves AI-generated media content using the voices of well-known politicians. In the absence of other legal remedies, the issue was tackled on a regulatory basis: the Regulatory Authority for Electronic Media of the Republic of Serbia (REM) issued a warning to media service providers. According to the Electronic Media Act, providers must not broadcast content that could exploit the credulity of viewers and listeners, and audio-visual content must not mislead the public regarding identities, events, or occurrences. Failure to comply can result in license suspension or revocation. Here we can see a similarity between Serbian and US law: misleading the public was also the basis for the damages awarded under the Lanham Act in the Waits v Frito-Lay, Inc. case.
REM further stated that AI-generated content should feature an explicit notification before, during, and at the end of the broadcast, stating that the program uses AI. Additionally, media service providers must prevent the misuse of personal data or identities, including a person’s voice or likeness.
Ultimately, we can see that artists and record labels are not entirely helpless against AI-generated voices that imitate famous artists. Where IP laws offer no protection, unfair competition rules and protection against misleading content step in. Certain US states allow artists to assert their right of publicity, while in some countries, such as Serbia, regulators can revoke a broadcasting license if AI is used to mislead the public. Nearly all jurisdictions recognize artists’ entitlement to claim unfair competition where there is a real likelihood of misleading consumers. The struggle in Serbia might still be more challenging, since protection against unfair competition is provided under the Trading Act only in a narrow manner.
But this does not solve the problem for YouTube and other platforms, which need an efficient platform-wide solution. This is why YouTube is in talks with UMG.
One possible solution is for YouTube to allow users to block AI from training on their songs. Other companies are following this trend, notably Spotify and Zoom, which recently changed their Terms of Service so that users can opt out of having their data used to train AI. Another potential solution might be to pay artists royalties whenever their voice is used to create a song.
Author: Bojan Tutić
Image created with Bing Image Creator