Your Face, Your Voice, Your Property: Denmark’s Fight Against Deepfakes

This summer, the Danish government announced a plan to give every person legal control over their own face, voice, and even body – kind of like copyright protection for you.

That means if someone uses AI to create a fake video or audio clip of you without permission, you could demand it be taken down and even sue for damages.

It’s the first law of its kind in Europe, and it’s aimed straight at stopping the explosion of “deepfakes” – super-realistic digital fakes made with AI.

These fakes aren’t just silly internet memes. They’re being used for scams, political smears, and even disturbing fake pornography.

What Denmark’s Law Would Do

The bill would make your likeness your property. If an AI program generates a video of you saying or doing something you never actually did, you’d have the right to:

  • Get it taken down from websites and social media.

  • Go after damages in court, just like someone whose copyrighted music or book was stolen.

  • Hold platforms accountable with big fines if they don’t act fast.

There are exceptions for parody, satire, and news reporting, but even those would be closely watched to make sure bad actors don’t hide behind “it’s just a joke” when they’re really spreading lies.

Why This Matters in America

If you think this is just Europe’s problem, think again. Deepfakes are already causing trouble here in the U.S.

In 2024, scammers used AI to mimic a company executive’s voice and trick an employee into wiring $25 million. Political deepfakes are starting to pop up too, and you can bet we’ll see more of them in upcoming elections.

Nevada’s high-profile political races make us an easy target.

Imagine a fake video showing a candidate saying something offensive just days before voters head to the polls. Even if it’s quickly debunked, the damage would already be done.

And it’s not just politics. Nevada’s entertainment industry is wide open to abuse.

Singers on the Strip, UFC fighters, or local influencers could have their image or voice stolen to promote something they never agreed to.

The Bigger Problem

Globally, deepfake attacks jumped 41% from spring to summer of 2025, causing about $350 million in losses.

Victims range from celebrities and CEOs to everyday people whose personal photos were stolen and altered.

Here in America, there are some laws on the books, like the federal Take It Down Act, which requires platforms to remove non-consensual sexual deepfakes within 48 hours – but nothing as broad as Denmark’s proposal.

Because of our First Amendment protections, U.S. laws have to walk a fine line between stopping abuse and protecting free speech.

What Critics Say

Some worry Denmark’s plan could be abused to silence legitimate criticism or comedy.

Others question whether it will even work, since much of the deepfake content is hosted outside Denmark. Global tech companies could just block Danish users instead of changing their rules.

But with Denmark set to take over the presidency of the EU Council later this year, it may push similar rules across Europe – and that could put pressure on U.S. lawmakers to act.

Where This Could Go

Whether you love or hate the idea of new internet rules, the deepfake genie isn’t going back in the bottle.

AI tools are getting better every day, and what’s possible now would have been science fiction just five years ago.

Nevada lawmakers might want to start paying attention. While we don’t want to infringe on freedom of speech, we should be asking whether our current laws are strong enough to protect Nevadans from AI-driven fraud, defamation, and abuse.

In the coming years, the biggest threat to your reputation might not be what you actually say or do; it could be what AI makes it look like you said or did.

The opinions expressed by contributors are their own and do not necessarily represent the views of Nevada News & Views. This article was written with the assistance of AI. Please verify information and consult additional sources as needed.