Denmark is trying to update its copyright law to give every citizen ownership over their own face, voice, and body - making it illegal to use someone’s likeness in a deepfake without consent.
It’s being called one of the most aggressive legal steps against AI misuse in the world.
What the Danish Minister of Culture Is Saying
“It’s about giving people back control over how they appear in the digital world.”
– Jakob Engel-Schmidt, Danish Minister of Culture
Denmark says deepfakes threaten public trust, individual safety, and democracy. The government is framing this as a matter of rights, not tech panic - and the proposal is already gaining broad political support.
What That Means (In Human Words)
If someone creates an AI-generated video of you - your face, your voice - and you didn’t agree to it?
Now you can actually do something about it.
You can ask for it to be removed. You can seek compensation.
And if a platform doesn’t take it down?
They could be fined.
This flips the conversation from “We don’t know what to do about deepfakes”
to:
“Now people can protect themselves.”
Connecting the Dots
To understand the shift this Danish law proposes, we need to define two key terms:
What Is Copyright?
Copyright is a legal right that protects original works of authorship - like books, music, films, artwork, and more. It gives the creator control over how their work is used, copied, distributed, or sold. Only the copyright holder can give permission for others to use the work, and they can take legal action if someone uses it without consent.
What Is a Deepfake?
A deepfake is a piece of media - usually a video, image, or audio - that’s been created or altered using AI to make it look or sound like someone said or did something they never actually did. It can copy a person’s face, voice, or expressions in a way that looks real, even though it’s completely fake. Some deepfakes are used for entertainment or satire, but others can cause serious harm - spreading false information, damaging reputations, or violating someone’s identity.
Global Legal Moves Against Deepfakes & AI Harms
1. Take It Down Act (U.S.)
- Signed into law: May 19, 2025
- Scope: Criminalizes non-consensual intimate AI deepfakes (revenge porn). Platforms must remove reported content within 48 hours. Penalties include prison up to 3 years.
- Status: Enacted; enforced by the FTC.
Source: U.S. Congressman Ryan Mackenzie - Facebook
2. NO FAKES Act (U.S.)
- Introduced: May 2025 (Senate hearing May 21)
- Scope: Would hold individuals and platforms accountable for unauthorized AI-generated likenesses (voices, images) and establish notice-and-takedown processes.
- Status: Proposal stage; bipartisan support, with backing from the RIAA, artists, and YouTube.
Source: Tubefilter
3. State Deepfake Laws (U.S.)
- Examples:
  - New York “Stop Deepfakes Act” (introduced March 2025) would require metadata labeling for political deepfakes.
  - New York’s non-consensual intimate imagery (NCII) law (Hinchey) criminalizes non-consensual sexual deepfakes.
  - Tennessee’s ELVIS Act (effective July 1, 2024) protects artists from voice/likeness cloning.
  - Minnesota and other states have laws penalizing political or intimate deepfakes.
4. Minnesota Deepfake Ban
- Status: Laws passed penalize the creation and distribution of non-consensual and politically timed deepfakes. Some provisions have been temporarily blocked over First Amendment concerns; legal challenges are ongoing.
5. EU AI Act (Transparency Measures)
- Proposed: The EU AI Act sets transparency requirements for deepfakes (Article 52(3)), mandating watermarking or labeling of AI-generated content (a minimal illustration of labeling follows this list).
- Scope: A risk-based approach, treating deepfakes as “limited risk” but requiring disclosures.
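To make “labeling” concrete, here is a minimal, purely illustrative sketch: it attaches a machine-readable “AI-generated” tag to a PNG using the Pillow library and plain text metadata. This is our own toy example, not the mechanism the AI Act prescribes - real deployments lean on robust schemes such as C2PA content credentials or invisible watermarks, and the “example-model” name below is hypothetical.

```python
# Illustrative only: embed a machine-readable "AI-generated" label in a PNG's metadata.
# Real-world labeling under the AI Act would use more robust schemes (e.g. C2PA content
# credentials or watermarks); this sketch just shows the idea of a disclosure tag.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image.
img = Image.new("RGB", (640, 360), color=(30, 30, 30))

# Attach disclosure labels as PNG text chunks.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model")  # hypothetical model name
img.save("labeled.png", pnginfo=meta)

# Any downstream tool can read the labels back.
reopened = Image.open("labeled.png")
print(reopened.text)  # {'ai_generated': 'true', 'generator': 'example-model'}
```

The design point: once the label is machine-readable, platforms and regulators can check for it automatically instead of relying on viewers to spot the fake.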
Why Denmark’s Law Is Revolutionary - Even If Others Acted First
1. It’s not about banning deepfakes - it’s about giving people copyright over themselves.
Most other laws (like the U.S. Take It Down Act) deal with removing harmful content or punishing malicious use. Denmark goes further:
It proposes amending the national Copyright Act so that your face, voice, and body are legally yours - as if they were a written book or song.
That’s a category shift: from content moderation to ownership law.
2. It creates a proactive right - not just protection after harm.
U.S. laws and the EU AI Act rely on:
- You noticing something harmful
- Reporting it
- Hoping it gets taken down
Denmark’s approach flips the script:
If it’s your likeness, you own it.
This means you don’t have to prove harm - you can act based on unauthorised use alone. That’s a major legal difference.
3. It frames deepfake misuse as a copyright violation, not just a privacy or criminal issue.
That opens the door for:
- Civil lawsuits, even if the content isn’t sexual or defamatory
- Royalty claims for commercial misuse
- Automatic takedown rights under international IP treaties
This gives citizens legal tools that currently only creators, performers, and companies enjoy.
4. It’s a political signal - pushing for EU-wide standardisation
Denmark isn’t doing this quietly. It plans to lead this conversation during its EU Council presidency (July–December 2025).
That means this isn’t just about Danish law -
It’s a test case for how Europe might rewrite the rules around identity, consent, and AI.
Bottom Line
Status: Public consultation open
Expected Timeline: Late 2025 to early 2026
Legal Type: Amendment to Denmark’s Copyright Act
Applies To: Face, voice, body - used in AI-generated content
Where to Follow: Ministry of Culture website, EU Law Tracker
Access: Full draft to be published during consultation phase
Prompt It Up: The New Way to Connect with the News
Not sure what to do when something serious like a deepfake involves you?
You don’t need to scroll endlessly for answers.
Use this simple prompt with any LLM of your choice - just fill in the blanks and get country-specific, platform-specific help in seconds.
📋 Copy & Paste Prompt:
I’ve found a deepfake of myself created or shared using [insert tool or platform], and I live in [insert country].
What are my legal rights, and what steps can I take to report it, remove it, or take action based on the current laws in my country?
This works with ChatGPT, Claude, Gemini, and most LLMs - try it where you feel most comfortable.
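If you’d rather script that step than paste it by hand, here’s a minimal sketch of the same idea in Python. Everything provider-specific here is an assumption: it uses the openai package, expects an OPENAI_API_KEY environment variable, and “gpt-4o-mini” is just a placeholder model name; the ask_about_deepfake helper is ours, not part of any platform.

```python
# Minimal sketch: the same fill-in-the-blanks prompt, sent to an LLM programmatically.
# Assumptions: the `openai` Python package is installed, OPENAI_API_KEY is set in the
# environment, and "gpt-4o-mini" stands in for whatever model you actually use.
from openai import OpenAI

PROMPT_TEMPLATE = (
    "I've found a deepfake of myself created or shared using {platform}, "
    "and I live in {country}. What are my legal rights, and what steps can I take "
    "to report it, remove it, or take action based on the current laws in my country?"
)


def ask_about_deepfake(platform: str, country: str, model: str = "gpt-4o-mini") -> str:
    """Fill in the blanks and ask the model for country-specific guidance."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(platform=platform, country=country),
        }],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_about_deepfake(platform="a social video app", country="Denmark"))
```

The same pattern works with other providers’ SDKs - only the client call changes.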
Frozen Light Team Perspective -
Because Perspective Is How You Stop a Cult
We never thought we’d be talking about copyrighting our own face or voice - but here we are.
And as strange as that sounds, it feels like a natural response to an unnatural problem.
Denmark’s approach is bold. It gives people legal power over how they appear in the world.
In a space where deepfakes are getting harder to spot and easier to spread, that kind of protection isn’t just helpful - it’s necessary.
But let’s not skip the nuance.
This law is powerful, but power always comes with risk.
There’s already concern about how vague the definitions are - could someone claim ownership over a slight edit, a mole, a filter?
Could creators lose space to parody or satire because no one wants to risk a takedown?
And then there’s the biggest question:
What happens when copyright becomes a tool not for protection, but for control?
Denmark is starting something important. They're showing that identity deserves legal weight.
But this won’t be the final version - and it shouldn’t be.
The challenge now is balance: protecting people without silencing creativity, giving rights without giving room for abuse.
And honestly? They’ll probably need technical experts in the room too - because proving what’s real and what’s fake isn’t always something the law can do alone.
It’s a strong beginning.
What comes next will define whether this becomes a shield - or a weapon.