Celebrities like Taylor Swift are setting the guardrails for the AI age 


Taylor Swift recently filed a series of trademark applications designed to protect the star from AI-enabled impersonations. Swift already holds a wide array of trademarks, but these latest filings, at least one intellectual property firm suggests, serve a new purpose: protecting the timbre and character of her voice itself through what is known as a “sound mark.”

In two recent filings, posted April 24 by Swift’s company, the celebrity applied to trademark two recordings. In one, she says, “Hey, it’s Taylor,” and in the other, “Hey, it’s Taylor Swift.” The recordings themselves are not particularly novel, but that is likely beside the point.

“The concept of protecting sound as a trademark is not new, though it remains relatively rare,” wrote Josh Gerben, the Gerben IP attorney who spotted the trademarks, in a post on the law firm’s website. “Historically, singers relied on copyright law to protect their recorded music. But AI technologies now allow users to generate entirely new content that mimics an artist’s voice without copying an existing recording, creating a gap that trademarks may help fill.”

Gerben added that, in theory, if an AI-generated imitation of Swift’s voice became the subject of litigation, she could argue that uses resembling her registered vocal trademarks infringe on her intellectual property rights.

Gerben surmises that the goal is to protect the sound of Taylor Swift’s voice much like NBC protects its signature chimes. The strategy, which Matthew McConaughey has also pursued, reflects a novel approach for the AI age, though it remains untested in court.

Celebrities are among those most vulnerable to AI-enabled impersonations and broader unauthorized uses of their likenesses. While top artists and actors already face an enduring, whack-a-mole-style battle against fakes, the latest generation of AI models has made producing these imitations unnervingly easy and scalable.

For similar reasons, celebrities, particularly women, are frequently targeted by deepfake operations that use their faces and bodies in nonconsensual pornographic imagery. Swift herself has been subjected to such campaigns, including in early 2024, when illicit AI-generated images of her spread widely on platforms like 4chan.

In response, and for better or for worse, celebrities are racing to install guardrails for the AI age—or at least, trying to figure out how to build them.

Swift’s attempt to protect herself from AI via sound marks is only the latest example. In 2024, OpenAI paused the rollout of a ChatGPT voice that closely resembled Scarlett Johansson’s—and, in an especially recursive twist, her performance as the chatbot in Her—after Johansson publicly criticized the company for allegedly imitating her voice. (OpenAI has said it used a different actor for the feature.)

In another example, the family of Martin Luther King Jr. pressured OpenAI to remove likenesses of the civil rights leader from its video generation platform, Sora, before it was shut down.

And, no doubt under pressure from talent agencies, YouTube recently said that it would expand its deepfake detection service to Hollywood; celebrities will now have the option to request that certain videos featuring AI generations of them be removed.

“With support from leading talent agencies and management companies, including CAA, UTA, WME, and Untitled Management, we’ve worked to refine how likeness detection can best serve talent,” the platform said in a statement. “We’re excited that celebrities and entertainers are now eligible to access this tool, regardless of whether they have a YouTube channel.”

In a market where appearance and likeness are everything, AI presents, at minimum, a new annoyance for artists seeking control, including financial control, over how their face and voice are used. That tension will likely continue to frustrate celebrities. Last year, more than 400 Hollywood leaders wrote to OpenAI and Google opposing the use of copyrighted work to train models without permission.

It’s notable that celebrities are pushing for protections against some of AI’s most noxious abuses. What remains unclear is whether those protections will extend to the rest of us, who also face the growing risk of digital impersonation, or simply allow the Hollywood elite to opt out of a new internet increasingly stuffed with endless uncanny mimicry.
