
Expert Comments: What Should Other Countries Take From Denmark’s New AI Laws?

Denmark is getting ready to update its copyright laws to give people more control over how their faces, voices and bodies are used online. The new law would make it illegal to use someone’s image or voice in deepfakes without their permission.

Culture minister Jakob Engel-Schmidt says the change is about sending a clear message: people should have the right to decide how they appear and sound in the digital world.

Right now, he says, the law does not fully protect that. With AI making it easy to copy someone’s appearance or voice, Denmark wants to make sure those copies can’t be used without consent.

The law has support from most political parties, and the plan is to bring it to parliament in the autumn. If it passes, anyone in Denmark will be able to ask for deepfake content to be removed if it was shared without their agreement.

 

What Exactly Will The New Rules Cover?

 

The proposed law focuses on “realistic, digitally generated imitations.” That includes AI-generated videos, photos and audio that closely copy how someone looks or sounds. It also protects performances such as those in music or acting.

The rules will not apply to satire or parody, as long as the content isn’t misleading or harmful. But if someone uses a fake video to spread lies or damage another person’s reputation, that would count.

The law also applies to people from other European Economic Area countries, so it could have an effect even outside Denmark. Engel-Schmidt has said that if tech platforms don’t co-operate, they could face fines or action from the European Commission.

 

What Should Countries Take From Denmark’s Law?

 

Our Experts:

 

 

James Kirkham, Founder, ICONIC

 

 

“More countries should urgently follow Denmark’s lead in enshrining AI laws that give people copyright over their own faces, voices and bodies to help fight the spread of deepfakes. This means legally recognising a person’s likeness as intellectual property—just like songwriting or inventions—and putting clear penalties in place for unauthorised use by AI systems, platforms or third parties.

“Giving people copyright over their own face, voice and body is the most powerful line in the sand we’ve seen so far. It reframes identity as personal property, not something to be scraped, borrowed or cloned at will. It forces platforms and AI developers to think twice before building on stolen likeness and—crucially—it gives people, not corporations, the default rights to who they are. I think that’s a profound shift. This isn’t just about protecting privacy but about rebuilding trust in the entire system.

“If people don’t feel safe, and if they don’t feel seen as real, they’ll opt out entirely from content, culture and participation. Copyrighting the self won’t stop deepfakes overnight, but it gives the next generation a foundation of ownership in this overly synthetic world. And in a time when everything can be replicated, edited, re-skinned and faked, owning your own image might be the most human act we have.”

 

Eugeny Malyutin, Head of LLM, Sumsub

 

 

“Denmark’s move to expand copyright and intellectual property law to protect individuals’ likeness – including their face, voice, and other biometric traits – is a bold and novel approach to identity protection.

“Fraudsters use a range of tactics, often either creating fully synthetic identities and documents or hijacking real identities and enhancing them with AI-generated content such as deepfakes, which they then attempt to use to bypass verification checks.

“Synthetic identities are often easier to detect, especially in systems with strong document-free verification and access to authoritative databases. Identity hijacking, on the other hand, is more complex and harder to catch, particularly when combined with convincing AI-driven synthetic identity document forgeries.

“Our data shows that this ‘synthetic document fraud’ grew by 378% in Europe from the first quarter of 2024 to the first quarter of 2025 – more than in any other region – and by 275% in the UK. Worryingly, deepfake fraud increased by 900% in the UK over the same period. Denmark’s new law may help individuals reclaim control over their identity, but it remains a ‘reactive’ measure and doesn’t replace technical or biometric safeguards.”

 

Natália Fritzen, AI Compliance & Policy Specialist, Sumsub

 

 

“Denmark is trying to amend its copyright laws to give people legal ownership over their likeness (facial features, voice, etc.) in an attempt to curb a continuing rise in deepfakes – making it illegal to publish such content without consent. Anyone would have the right to request its removal from online platforms and, in certain cases, seek financial compensation – while platforms that do not comply could be fined.

“This is a truly inventive and uniquely proactive approach. Most other bills try to curb deepfakes by focusing on watermarks and other transparency requirements – but this places responsibility on the creators of deepfakes, rather than the websites used to share them, and offers few remedies to victims. Denmark’s strategy is a clear contrast to the EU AI Act, which takes a more technical and disclosure-based approach that lacks strong enforcement or remediation for individuals.

“Although not the first attempt of its kind (the No Fakes Act, a bill introduced in the US Congress, tried something similar but hasn’t yet moved through the legislative process), Denmark’s political landscape makes it more likely that this bill will come into force. Furthermore, the country’s new position holding the Presidency of the Council of the EU may give it the influence to push this solution as a universal benchmark.”

 

Oana Leonte, IP Expert and Founder, Unmtchd

 

 

“Denmark’s move to give people copyright over their faces and voices isn’t just about protecting individuals; it signals a monumental shift in how we think about identity as intellectual property in the AI age.

“More countries absolutely should follow Denmark’s lead. When AI can replicate anyone’s likeness in seconds, personal identity becomes a business asset that needs systematic protection. What Denmark recognises is that in an AI-first world, if you don’t own your assets, someone else will.

“But this thinking needs to extend beyond personal identity. If individuals need copyright protection over their faces, brands desperately need the same systematic protection for their assets. Most companies still have their brand IP scattered across folders, legal docs, and forgotten systems, completely vulnerable to AI replication.

“Denmark’s law shows that governments are starting to understand that in the age of AI, everything becomes code, and everything that’s coded needs protection. Personal identity, brand identity, it’s all strategic infrastructure now.”
 

 

Arshad Khalid, Technology Advisor, No Strings Public Relations

 

 

“Giving individuals copyright-like control over their own likeness is a necessary step in the age of AI. Denmark’s approach recognises that people should have a say in how their face, voice or identity is replicated or manipulated by machines. Without clear legal ownership, there’s no effective recourse when deepfakes are used to spread misinformation, humiliate someone, or exploit their image commercially.

“This isn’t just a celebrity problem anymore. Anyone can be targeted. What Denmark is proposing could help set a standard where platforms and AI developers are legally required to respect people’s digital identities – and remove unauthorised content when asked.

“More countries should follow suit by embedding these rights into copyright, privacy or personality laws. It’s about modernising regulation to match the speed of the technology. Without this, people are left to deal with the fallout of deepfakes on their own, while those creating or hosting them face little accountability.”

 

Camden Woollven, Group Head of AI Product Marketing, GRC International Group

 

 

What impact could Denmark’s new AI law—giving people copyright over their face, voice, and body—have on global biometric data protection standards?

“It’s a big shift. If this passes, Denmark becomes the first country to treat your face, voice and body as something you legally own. That goes beyond GDPR and directly tackles how AI models use biometric data, especially in deepfakes.

“The timing matters. Denmark just took over the EU Council presidency, so it’s in a strong position to push this across Europe. If that happens, it could set a new baseline for biometric protections that puts pressure on other regions to follow.

“It also changes the conversation globally. Most laws treat biometric data as something that needs safeguarding, not something people own. This flips that and puts platforms on the hook to take down unauthorised deepfakes or face penalties. That level of accountability will stand out.

“Long term, this could move us toward treating personal likeness more like intellectual property. That has real implications for how AI training data is sourced and how platforms handle content that blurs the line between creativity and identity theft.”

Would classifying biometric data as copyrighted material help strengthen personal data rights, or could it conflict with laws like GDPR or CCPA?

“It could do both. Giving people legal ownership of their likeness makes it easier to go after misuse, especially with deepfakes or AI-generated content. It gives clearer ground to demand takedowns or compensation, which is something privacy laws often struggle with. But it does create friction with existing laws.

“GDPR treats biometric data as personal, not owned. It’s about consent, access and erasure. Copyright’s about control and permanence, which doesn’t always line up. You could end up in situations where someone has the right to delete their data but also holds permanent copyright over it. That gets messy fast.

“There’s also a practical side. Adding copyright protections on top of GDPR and CCPA could make day-to-day things like biometric logins harder to manage, especially if you need licensing just to use someone’s face or voice in a system.

“So while the idea makes sense for specific harms like deepfakes, it probably needs to be more targeted. You want stronger enforcement without creating legal overlap that slows everything down.”

What steps could help countries work together to protect people’s digital identities across borders from AI misuse?

“It starts with getting aligned. Right now, countries are handling digital identity and biometric data in totally different ways, which leaves gaps that are easy to exploit. We need shared standards, clearer rules and actual ways to enforce them across borders.

“That includes agreeing on what counts as secure infrastructure, how platforms handle biometric data and where the red lines are. The EU’s already pushing on this with the AI Act and digital ID wallet. Other regions could build on that and make sure systems are interoperable.

“There’s a tech angle too. Tools like decentralised ID, better authentication and smarter deepfake detection already exist. Countries should be investing in those together and sharing intel on how threats are evolving.

“But none of that works without proper oversight. It’s not just about fraud. It’s about rights. We need accountability for platforms, but also for how governments use this data. If countries want to keep people safe from cross-border AI misuse, they need to act like a team.”

What are the main pros and cons of giving people legal control over their likeness to fight deepfakes?

“The biggest upside is control. If you own your likeness, you don’t have to wait for a platform or regulator to step in. You can go after misuse directly and stop content at the source. It also gives you stronger tools. You don’t have to prove harm. You can demand takedowns, claim compensation and push for enforcement without jumping through hoops. That creates a deterrent, not just for the people making deepfakes but for the platforms hosting them.

“But there are risks. It’s not always clear what counts as a likeness. Does a voice pattern count? What about parody or satire? If the rules aren’t clear, platforms might over-remove just to stay on the safe side.

“It also complicates legitimate use. Biometric data’s already heavily regulated in places like the EU. Add copyright-style protections and you make things harder for researchers, artists and companies using face or voice tech in normal ways.

“And then there’s enforcement. Most deepfakes come from anonymous users in other countries. Having rights is one thing, but getting anyone to respect them is another. So in theory, it’s a strong step forward. But it needs clear limits, solid definitions and proper enforcement. Otherwise it risks overreaching and doing more harm than good.”

How realistic is it for other countries to follow Denmark’s lead, and what legal or tech obstacles would need to be addressed?

“Some will move faster than others, but this isn’t going global any time soon. EU countries are the most likely to adopt it first, especially with Denmark leading the Council this year. The AI Act has already laid the groundwork by classifying biometric use as high risk.

“Outside the EU, it’s harder. Countries like the US and UK don’t have a legal tradition of treating personal likeness as copyrightable. In the US, it would clash with First Amendment rights. And in countries with limited resources or weaker legal systems, enforcement just isn’t a priority.

“Tech’s another hurdle. Saying people own their likeness is one thing. Detecting and acting on misuse is another. Deepfake detection is still patchy, false positives are a problem and platforms aren’t aligned on how to handle this across borders.

“What’s more realistic is phased rollout. Start with the EU, maybe a few other privacy-focused countries, and focus on the worst cases like non-consensual deepfakes. Build from there once the systems and legal clarity are in place.”
