
With AI, Anyone Can Be a Victim of Nonconsensual Porn. Can Laws Keep Up?

States around the country are scrambling to respond to the dramatic rise in deepfakes, a result of little regulation and easy-to-use apps

In the past year or so, 10 states have passed legislation to criminalize the creation or dissemination of deepfakes, outlining penalties ranging from fines to jail time. Rena Li for The 19th

This article was copublished with The 19th, a nonprofit newsroom covering gender, politics, and policy. Sign up for The 19th’s newsletter here.

More than two dozen students at Westfield High School in New Jersey were horrified last year to learn that naked images of them were circulating among their peers. According to the school, some students had used artificial intelligence (AI) to create pornographic images of others from original photos. And they’re not the only teenage girls being victimized by fake nude photos: Students in Washington State and Canada have also reported facing similar situations as the ability to realistically alter photos becomes more broadly accessible through websites and apps.

The growing alarm around deepfakes—AI-generated images or videos—in general was amplified even further in January, as one involving the superstar Taylor Swift spread quickly through social media.

Carrie Goldberg, a lawyer who has been representing victims of nonconsensual porn—commonly referred to as revenge porn—for more than a decade, said she only started hearing from victims of computer-generated images more recently. 

“My firm has been seeing victims of deepfakes for probably about five years now, and it’s mostly been celebrities,” Goldberg said. “Now, it’s becoming children doing it to children to be mean. It’s probably really underreported because victims might not know that there’s legal recourse, and it’s not entirely clear in all cases whether there is.” 

Governing bodies are trying to catch up. In the past year or so, 10 states have passed legislation to criminalize the creation or dissemination of deepfakes specifically. These states—including California, Florida, Georgia, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas and Virginia—outlined penalties ranging from fines to jail time. Indiana is likely to soon join the growing list by expanding its current law on nonconsensual porn. 

Indiana Rep. Sharon Negele, a Republican, authored the proposed expansion. The existing law defines “revenge porn” as disclosing an intimate image, such as those depicting sexual intercourse, uncovered genitals, buttocks or a woman’s breast, without the consent of the individual depicted. Negele’s proposed bill passed through both chambers and is now awaiting the governor’s signature.

Negele said she was motivated to update Indiana’s criminal code when she heard the story of a high school teacher who discovered that some of her students had disseminated deepfake images of her. It was “incredibly destructive” to the teacher’s personal life, and Negele was surprised to see that the perpetrators could not be prosecuted under current law. 

“It started with my education of understanding the technology that is now available and reading about incident after incident of people’s faces being attached to a made-up body that looks incredibly real and realistic,” Negele said. “It’s just distressing. Being a mom and a grandmother and thinking about what could happen to my family and myself—it’s shocking. We’ve got to get ahead of this kind of stuff.”

Goldberg, whose law firm specializes in sex crimes, said she anticipates more states will continue expanding their existing legislation to include AI language. 

“Ten years ago, only three states had revenge porn or image-based sexual abuse laws,” Goldberg said. “Now, 48 states have outlawed revenge porn, and it has really created a tremendous reduction in revenge porn—not surprisingly—just as we advocates had said it would. The whole rise of deepfakes has filled in the gaps as being a new way to sexually humiliate somebody.”

In 2023, more than 143,000 new AI-generated videos were posted online, according to The Associated Press. That’s a huge jump from 2019, when the “nudify” websites or applications were less commonplace, and still there were nearly 15,000 of these fake videos online, according to a report from Deeptrace Labs, a visual threat intelligence company. Even back then, those videos—96 percent of which had nonconsensual pornography of women—had garnered over 100 million views. 

Goldberg said policymakers and the public alike seem to be more motivated to ban AI-generated nude images specifically because virtually anyone can be a victim. There’s more empathy.

“With revenge porn, in the first wave of discussions, everyone was blaming the victim and making them seem like they were some sort of pervert for taking the image or stupid for sharing it with another person,” Goldberg said. “With deepfakes, you can’t really blame the victim because the only thing they did was have a body.” 

Amanda Manyame, a South Africa-based digital rights advisor for Equality Now, an international human rights organization focused on helping women and girls, said that there are virtually no protections for victims of deepfakes in the United States. Manyame studies policies and laws around the world, analyzes what’s working and provides legal advice around digital rights, particularly on tech-facilitated sexual exploitation and abuse.

“The biggest gap is that the U.S. doesn’t have federal law,” Manyame said. “The challenge is that the issue is governed state by state, and naturally, there’s no uniformity or coordination when it comes to protections.” 

There is, however, currently a push on Capitol Hill: A bipartisan group of senators in January introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024—also known as the DEFIANCE Act. The proposed legislation aims to stop the proliferation of nonconsensual, sexually explicit content.

“Nobody—neither celebrities nor ordinary Americans—should ever have to find themselves featured in AI pornography,” Republican Sen. Josh Hawley, a co-sponsor of the bill, said in a statement. “Innocent people have a right to defend their reputations and hold perpetrators accountable in court.” Rep. Alexandria Ocasio-Cortez has introduced a partner bill in the House.

According to new polling from Data for Progress, 85 percent of likely voters across the political spectrum said they support the proposed DEFIANCE Act—with 72 percent of women in strong support compared to 62 percent of men. 

But younger men are more likely to oppose the DEFIANCE Act, with about one in five men under 45 (22 percent) saying they strongly or somewhat oppose legislation allowing subjects of explicit nonconsensual deepfakes to sue the creator.

Danielle Deiseroth, executive director of Data for Progress, said this issue showed one of the “more sharp contrasts” between young men and women that she’s seen in a while.

“We can confidently say that women and men under 45 have diverging opinions on this policy,” Deiseroth said. “This is an issue that disproportionately impacts women, especially young women, who are more likely to be victims of revenge porn. And I think that’s really the root cause here.” 

Goldberg said that creating policies to criminalize bad actors is a good start but is ultimately insufficient. A good next step, she said, would be to take legal action targeting the online distributors, like the App Store and Google Play, that are providing products primarily used for criminal activities. Social media platforms and instant messaging apps, where these explicit images are distributed, should also be held accountable, Goldberg added. 

The founders of #MyImageMyChoice, a grassroots organization working to help victims of intimate image abuse, agreed that more should be done by private companies involved in the creation and distribution of these images. 

The founders—Sophie Compton, Reuben Hamlyn and Elizabeth Woodward—pointed out that search engines like Google drive most of the total web traffic to deepfake porn sites, while credit card companies process their payments. Internet service providers let people access them, while major services like Amazon, Cloudflare, and Microsoft’s Github host them. And social media sites like X allow the content to circulate at scale. Google changed its policy in 2015 and started allowing victims to submit a request to remove individual pieces of content from search results and has since expanded the policy to deepfake abuse. However, the company does not systematically delist image-based sexual violence and deepfake abuse sites.

“Tech companies have the power to block, de-index or refuse service to these sites—sites whose entire existence is built on violating consent and profiting from trauma,” Compton, Hamlyn and Woodward said in a statement to The 19th. “But they have chosen not to.” 

Goldberg pointed to the speed at which the Taylor Swift deepfakes spread. One image shared on X, formerly known as Twitter, was viewed 47 million times before the account that posted it was suspended. Images continued to spread despite efforts from the social media companies to remove them.

“The violent, misogynistic pictures of Taylor Swift, bloody and naked at a Kansas City Chiefs football game, is emblematic of the problem,” Goldberg said. “The extent of that distribution, including on really mainstream sites, sends a message to everybody that it’s okay to create this content. To me, that was a really pivotal and quite frightening moment.” 

Given the high-profile nature of the victim, the incident sparked pronounced and widespread outrage from Swift’s fans and brought public attention to the issue. Goldberg said she checked to see whether any of the online distributors had removed products from their stores that make it easier and cheaper to create sexually explicit deepfakes—and she was relieved to see they had.

As the country’s policymakers and courts continue to try to respond to the quickly developing and increasingly accessible technology, Goldberg said it’s important that lawmakers continue deferring to experts and those who work directly with victims, such as lawyers, social workers and advocates. Otherwise, she added, lawmakers regulating abstract ideas or rapidly advancing technologies can create a “recipe for disaster.”

Manyame also emphasized the importance of speaking directly to survivors when making policy decisions, but added that lawmakers need to think more holistically about the problem and not become too bogged down in the specific technology—at the risk of always being behind. For example, Manyame said the general public is only now beginning to understand the risks posed by AI and deepfakes—a subject she helped write a report about back in 2021. Looking ahead, Manyame is already thinking about the metaverse—a virtual reality space—where users are starting to reckon with instances of rape, sexual harassment and abuse.

“A lot of the laws around image-based sexual abuse are a little bit dated because they speak about revenge porn specifically,” Manyame said. “Revenge porn has historically been more of a domestic violence issue, in that it is an intimate partner sharing a sexually exploitative image of their former or existing partner. That’s not always the case with deepfakes, so these laws might not provide enough protections.” 

In addition, Manyame argued that many of these policies fail to broaden the definition of “intimate image” to consider diverse cultural or religious backgrounds. For some Muslim women, for instance, it might be just as violating and humiliating to create and disseminate images of their uncovered head without a hijab. 

When it comes to solutions, Manyame pointed to actions that can be taken by the app creators, platform regulators and lawmakers. 

At the design phase, more safety measures can be embedded to limit harm. For example, Manyame said there are some apps that can take photos of women and automatically remove their clothing while that same function doesn’t work on photos of men. There are ways on the back end of these apps to make it harder to remove clothes from anyone, regardless of their gender. 

Once the nefarious deepfakes are already created and posted, however, Manyame said the social media and messaging platforms should have better mechanisms in place to remove the content after victims report it. Many times, individual victims are ignored. Manyame said she’s noticed these large social media companies are more likely to remove these deepfakes in countries, such as Australia, that have third-party regulators to advocate on behalf of victims.

“There needs to be monitoring and enforcement mechanisms included in any solution,” Manyame said. “One of the things that we hear from a lot of survivors is they just want their image to be taken down. It’s not even about going through a legal process. They just want that content gone.” 

Manyame said it’s not too big of an ask for many tech companies and government regulators because many already respond quickly to remove inappropriate photos involving children. It’s just a matter of extending those kinds of protections to women, she added. 

“My concern is that there’s been a rush to implement AI laws and policies without considering what some of the root causes of these harms are,” Manyame said. “It’s a layered problem, and there’s many other layers that need to be tackled.”
