One year after the TAKE IT DOWN Act criminalized the publication of non-consensual intimate images in the US, its enforcement and impact remain uncertain as AI-generated explicit content continues to proliferate across online platforms.
- Federal law criminalizes non-consensual AI-generated intimate images.
- Deepfake content creation and distribution continue to grow despite enforcement warnings.
- Platform accountability and technical restrictions now critical test points.
What happened
In May 2025, the US Congress passed and President Donald Trump signed the TAKE IT DOWN Act, criminalizing the publication of non-consensual intimate images, whether AI-generated or real. The law was accompanied by strong enforcement signals from the Federal Trade Commission, which emphasized compliance obligations and penalties for non-compliant platforms.
Notably, shortly before the bill’s passage, the notorious deepfake site MrDeepfakes shut down after losing critical infrastructure support. Although this marked a symbolic win for advocates, other large and active online communities have since emerged, hosting millions of users and continuing to facilitate the creation and sharing of AI-generated explicit imagery.
Why it matters
Despite establishing criminal penalties at the federal level, the TAKE IT DOWN Act has yet to significantly curb the production and circulation of non-consensual deepfake pornography. In fact, advances in AI technology and the proliferation of user-friendly websites for generating content have lowered barriers for creators and increased supply.
At the same time, sizable online platforms and forums provide extensive distribution networks for illegal and harmful deepfake content, often alongside instructional material on evading detection and platform restrictions. This suggests that legal deterrents alone may be insufficient without robust cooperation from platforms and improved technical safeguards.
What to watch next
Observers will closely monitor how the Federal Trade Commission and other regulators enforce compliance standards among AI service providers and hosting platforms, as platform-level accountability could prove pivotal in stemming abuse. Measures such as de-platforming, content removal policies, and technical content controls will be critical indicators of progress.
Additionally, ongoing research into the scale and nature of AI-enabled intimate image abuse, alongside evolving legislative and technological responses, will shape future policy development. The intersection of AI capabilities, enforcement, and online community dynamics will remain a key focus in assessing the TAKE IT DOWN Act’s long-term effectiveness.