The consequences of making a nonconsensual deepfake

Making a nonconsensual deepfake could lead to legal and financial repercussions.
By Rebecca Ruiz

The consequences of making a nonconsensual deepfake image can be life-changing. Credit: Zain bin Awais / Mashable

Lawyer Sean Smith has seen up close how nonconsensual deepfakes, a form of image-based sexual abuse, can ruin lives.

Smith, a family law attorney with the Roseland, New Jersey, firm Brach Eichler, has recently represented the families of both minor victims and perpetrators in school disciplinary proceedings.

His clients have included teen girls whose images were taken from social media, then digitally "undressed" by their male classmates, who used software powered by artificial intelligence.



The apps and websites capable of creating explicit nonconsensual deepfakes typically market themselves as satisfying a curiosity or providing entertainment. As a result, users likely don't understand that the resulting imagery can inflict painful, lifelong trauma on the person whose likeness has been stolen — who is almost always a girl or woman. The victim may never be able to remove every synthetic photo or video from the internet, given how difficult it is to track and delete such content.

This can lead to professional, personal, and financial devastation for survivors. The same can be true for perpetrators once their names and reputations are associated with creating nonconsensual deepfakes. They may face suspension or expulsion if they're students, as well as criminal and civil penalties, depending on where they live.

"It destroys lives on every side," Smith told Mashable.

This typically isn't made clear to youth and adult users who engage in image-based sexual abuse.

Is it illegal to make a deepfake?

Though these consequences are rarely publicized, the rise of nonconsensual deepfakes has prompted several states to pass legislation criminalizing them.

Meanwhile, Congress has introduced but has yet to vote on a bill that would give victims the right to file a civil suit against perpetrators. A separate federal bill would criminalize the publication of nonconsensual intimate imagery, including that created by AI, and require social media companies to remove that content at a victim's request.

In some states, offenders can face civil penalties should the victim successfully sue them for damages. Their wages may be garnished or their property seized to pay for such damages.

Last year, Illinois amended an existing law in order to make deepfake offenders liable when they distribute nonconsensual synthetic images. A survivor can sue the person who disseminates the content for damages, which may result from emotional distress, the cost of mental health treatment, the loss of a job, and other related costs.


In New York, dissemination of nonconsensual deepfakes can lead to a year spent in jail, a fine, and a civil suit. Florida imposes both criminal and civil penalties for the "promotion" of nonconsensual synthetic material. The state's law also expanded the definition of "child pornography" to include deepfakes of minors engaged in sexual conduct.

Indiana, Texas, and Virginia are among the states that have made the creation of nonconsensual deepfakes punishable by jail time.

Many states, however, don't yet have laws that make the creation or distribution of deepfakes illegal, or that give victims the right to sue. Additionally, it may be difficult for victims to pursue criminal or civil penalties against the person who promoted the content because that person's identity is unknown, or because law enforcement is too understaffed to investigate potential crimes.

But Matthew B. Kugler, professor of law at Northwestern University, says that shouldn't give people a false sense of security.

"When the laws get enforced, it's going to be a black mark that will follow a person for a very long time, and no one's going to feel bad about the fact that that black mark follows [the offender] for a very long time," Kugler says.

In 2020, Kugler studied public attitudes toward sexually explicit, nonconsensual deepfake videos in a survey of 1,141 U.S. adults. The vast majority of the respondents wanted to criminalize the act.

There is another potential legal consequence to creating nonconsensual deepfake imagery, regardless of whether the offender's state imposes criminal or civil penalties.

Adam Dodge, a lawyer and founder of Ending Tech-Enabled Abuse (EndTAB), says that a victim can file for a protective or restraining order if she knows who's responsible for the creation or distribution of the imagery. In many jurisdictions, image-based abuse qualifies as a form of harassment.

Such restraining orders are discoverable in background searches conducted by potential employers, Dodge says. A restraining order can also be applied to a youth offender. Though a minor's legal record is meant to be sealed, Dodge has seen instances where the information becomes public.

What happens to minors who create or share a nonconsensual deepfake

Teens who find deepfake apps or sites, either through word of mouth or ruthless internet marketing and search strategies, often don't grasp the potential fallout for victims or themselves, says Smith.

He notes that because the phenomenon is so new, school-based discipline can vary widely. At public schools, which are legally obligated to keep students enrolled to the extent that it's possible, punishment may be limited to brief in-school or out-of-school suspensions.

But Smith says that private schools, with their own codes of conduct, may quickly escalate to expulsion.

The victim's parents may also pursue legal action in an effort to hold the perpetrator and their family accountable. Though Smith hasn't seen such a case yet, he expects some parents to begin filing civil lawsuits against a perpetrator's parents on the grounds of negligent supervision. Any damages won could potentially be covered by homeowner's insurance, unless the parents' carrier restricts such claims.

Teens could also be subject to criminal penalties, including those under child pornography and other criminal statutes. Smith is aware of juvenile proceedings against teens who've created nonconsensual deepfakes. Though they did not serve time in jail, the offenders entered into a private agreement with the state as an acknowledgment of culpability for their actions.

In Florida, however, two teens were arrested and charged with felonies last December for disseminating nonconsensual deepfakes.

Smith says that parents and teens urgently need to understand these and other consequences.

"The problem with this technology is that the parents and the kids don't realize how big a mistake the use of the technology is," Smith says. "How just the introduction of the technology onto a cellphone…can create this much larger lifetime mistake."

If you have had intimate images shared without your consent, call the Cyber Civil Rights Initiative’s 24/7 hotline at 844-878-2274 for free, confidential support. The CCRI website also includes helpful information as well as a list of international resources.

Rebecca Ruiz
Senior Reporter

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.
