Taylor Swift has spent the last decade rewriting the rules of the music industry. She just filed the paperwork to do it again.
At the end of April 2026, Taylor Swift’s company TAS Rights Management filed three trademark applications with the US Patent and Trademark Office. Two of them covered audio clips of her voice, specifically recordings of her saying phrases like “Hey, it’s Taylor Swift” tied to promoting her album. The third covered an image of her onstage in a sequined jumpsuit holding a pink guitar.
Read it quickly and it looks like a celebrity protecting her brand. But trademark attorney Josh Gerben, who first reported the filings, framed it differently: these applications are designed to fill a specific gap that copyright law leaves open. Copyright protects existing recordings; what it leaves unprotected is new AI-generated content that imitates an artist’s voice. Trademark law, applied this way, potentially does.
Swift isn’t the first person to try this – actor Matthew McConaughey filed similar voice trademarks earlier – but she is by far the most high-profile, and that ups the ante significantly.
Why Copyright Alone Doesn’t Cut It
The problem Swift is trying to solve is specific and well-documented.
When an AI model trains on thousands of hours of an artist’s recordings and generates a convincing imitation, the output is technically new. The artist never recorded it, no existing copyrighted recording was directly copied, and the AI company never sampled a track. Under current law, that creates a situation where the imitation can be used commercially without the artist’s consent and without payment.
Swift’s voice has already been misused in AI deepfakes for advertisements, political content and explicit material. The RIAA has filed suits against AI music companies Suno and Udio for training on copyrighted music, and Tennessee’s ELVIS Act – named after the most famous victim of posthumous likeness exploitation – already gives artists stronger grounds to pursue AI likeness misuse for commercial gain. The trademark route is an additional layer on top of all of this: a way to claim ownership not just of recordings that exist, but of the voice itself as a commercial identifier.
The Part The AI Music Industry Won’t Like
The voice cloning and AI music generation market is projected to grow from around $1.2 billion in 2026 to over $20 billion by 2031.
A large chunk of that growth depends on the ability to use real voices, real artists and real likenesses as training data or as outputs. If courts start upholding trademark protections of the kind Swift is seeking, the legal risk profile for companies in this space changes.
The most immediate effect would be on tools that allow users to generate audio in the style of specific named artists. Some platforms already operate in this territory, relying on the absence of a clear legal framework to avoid liability. A successful trademark enforcement action by Swift’s team would set a precedent these tools would be forced to reckon with. Licensing deals, consent requirements and ethical filters would stop being optional considerations and start being legal necessities.
For companies innovating in this sector, it’s clear where we are heading even if the final legal framework is not. Universal Music Group has already moved toward cautious AI collaborations rather than open licensing. The regulatory environment in both the US and EU is tightening. Building a voice cloning or AI music product that relies on replicating identifiable artists without consent is a bet that the legal window stays open – and Swift just filed the paperwork to help close it.
This Is Bigger Than Taylor Swift
Beyond the music industry, the Swift filings put a question back on the table that the entire creative sector has been circling for two years: who owns a person’s voice, face and likeness when AI can replicate them at scale? Copyright was built for a world where copying required effort. AI has made that effort disappear, and the legal frameworks haven’t kept up.
Trademark law is an imperfect tool for this problem – it was designed to protect commercial identifiers rather than personal identity – but it’s available right now, which is why Swift and her legal team are using it. Time will tell whether legislators in the UK and EU use cases like this as a catalyst to build something more fit for purpose, or whether the creative industry continues reaching for whatever legal instruments happen to be within reach.
Taylor Swift has been here before: she spent a decade fighting battles over her own catalogue that the industry eventually had to reckon with. The fight over AI and artists’ voices now has its first major filing.