Minnesota Targets AI Deepfakes with Tough New Legislation
Newslooks | Washington DC | Mary Sidiqi | Evening Edition
Minnesota lawmakers are proposing a bill to block AI-generated deepfake pornography before it spreads, targeting companies that operate “nudification” websites. Victims, including Molly Kelly and Megan Hurley, shared their traumatic experiences, emphasizing the urgent need for regulation. While the bill has bipartisan support, legal experts warn it may face constitutional challenges over free speech concerns.
Minnesota’s AI Deepfake Crackdown: Quick Looks
- A Minnesota bill aims to block AI-generated explicit images before they spread.
- Victims like Molly Kelly and Megan Hurley shared their traumatic experiences.
- The bill would fine “nudification” site operators up to $500,000 per violation.
- Critics warn the legislation may face constitutional challenges over free speech.
- Other states and Congress are also pushing for AI deepfake regulations.
Deep Look
Minnesota lawmakers are advancing a groundbreaking bill that would target the creators and distributors of AI-generated explicit images, aiming to stop harmful content before it spreads. The legislation, which has bipartisan support, would impose severe penalties on operators of “nudification” websites that use artificial intelligence to create realistic, nonconsensual nude images and videos.
The bill gained momentum after Molly Kelly, a Minnesota woman, came forward with her experience of being targeted by this technology. She was horrified to discover that a person she knew had used widely available AI software to generate explicit images and videos of her—using innocent family photos from social media.
“My initial shock turned to horror when I learned that the same person targeted about 80, 85 other women, most of whom live in Minnesota, some of whom I know personally,” Kelly testified.
Minnesota’s approach differs from laws in other states that focus on the distribution of deepfake pornography. Instead, it seeks to prevent the images from being created in the first place.
How the Bill Works and Its Legal Challenges
The legislation, introduced by Democratic Sen. Erin Maye Quade, would require companies running AI-powered “nudification” sites or apps to block access for Minnesota users. If they fail to do so, they could face civil penalties of up to $500,000 for each unlawful access, download, or use.
Maye Quade emphasized that preventing AI-generated explicit content at its source is crucial because the harm begins as soon as these images exist.
“It’s not just the dissemination that’s harmful to victims,” she said. “It’s the fact that these images exist at all.”
However, some legal experts warn that the bill could face First Amendment challenges. Wayne Unger, a law professor at Quinnipiac University, and Riana Pfefferkorn, an AI law expert at Stanford University, argue that the proposal may be too broad to survive a court challenge.
Federal law, under Section 230 of the Communications Decency Act, generally shields websites from liability for content their users generate. Pfefferkorn suggested that narrowing the bill’s scope to AI-generated child sexual abuse material could make it more legally viable, as such content is not protected under free speech laws.
“If Minnesota wants to go down this direction, they’ll need to add a lot more clarity to the bill,” Unger said. “And they’ll have to narrow what they mean by ‘nudify’ and ‘nudification.’”
Despite these concerns, Maye Quade defended the bill, stating that it regulates harmful conduct rather than restricting speech.
“These tech companies cannot keep unleashing this technology into the world with no consequences. It is harmful by its very nature,” she said.
Victims Speak Out Against AI-Generated Harassment
Beyond Molly Kelly’s testimony, other victims have come forward to describe the devastating effects of AI-generated explicit images.
Megan Hurley, a massage therapist, said she felt especially violated after discovering that deepfake pornography had been made of her.
“It is far too easy for one person to use their phone or computer and create convincing, synthetic, intimate imagery of you, your family, and friends, your children, your grandchildren,” Hurley said. “I do not understand why this technology exists, and I find it abhorrent that there are companies out there making money in this manner.”
Sandi Johnson, a senior policy counsel at the victim advocacy group RAINN (Rape, Abuse & Incest National Network), testified that once deepfake images are created, they can be widely shared and nearly impossible to remove.
How Other States and Congress Are Responding
Minnesota is not alone in seeking ways to regulate AI-generated explicit content. States across the U.S. and federal lawmakers are exploring new measures to combat deepfake pornography.
- Congressional Action: A bipartisan bill co-sponsored by Sen. Amy Klobuchar (D-MN) and Sen. Ted Cruz (R-TX) would make it a federal crime to publish nonconsensual sexual images, including AI-generated deepfakes. The bill also requires social media platforms to remove such content within 48 hours of a victim’s request.
- San Francisco Lawsuit: In August, San Francisco became the first city to sue several “nudification” websites, accusing them of violating state laws against fraudulent business practices, nonconsensual pornography, and child exploitation.
- State-Level Efforts: In Kansas, lawmakers passed a bill expanding the definition of child sexual exploitation to include AI-generated images that are “indistinguishable from a real child.” Florida, Illinois, Montana, New Jersey, New York, North Dakota, Oregon, Rhode Island, South Carolina, and Texas have introduced similar legislation.
Maye Quade is working to share Minnesota’s proposal with lawmakers in other states, hoping to build a nationwide push against AI-powered exploitation.
“If we can’t get Congress to act, then we can maybe get as many states as possible to take action,” she said.
The Future of AI Deepfake Regulation
The rise of AI-generated explicit content poses serious challenges for lawmakers, tech companies, and victims. While many states have focused on banning the distribution of deepfake pornography, Minnesota’s bill takes a more aggressive stance by attempting to block the creation of such content altogether.
However, legal battles over free speech and website liability are likely, as AI experts caution against overly broad restrictions that could set unintended legal precedents.
As deepfake technology continues to evolve, governments, advocacy groups, and victims alike are pushing for stronger protections—hoping to prevent AI from becoming a tool for digital abuse and exploitation.