Prime Minister Keir Starmer equates the government’s response to deepfake abuse against women and girls with its approach to terrorist material, emphasizing rapid takedowns and strict enforcement.
The Surge of Deepfakes on X
In January, millions of sexually explicit deepfakes inundated X after the platform introduced a ‘nudify’ feature in its AI image generator, Grok. Reports indicate that 99% of these images targeted women and girls. Following pressure from the UK government, the feature was removed after 11 days, but the damage persisted.
At an International Women’s Day event at Downing Street, Starmer acknowledged the harm: “Let me first acknowledge the damage this does. It affects so many people, predominantly women and girls. What Grok did was absolutely disgusting. We were determined to take them on, and to be absolutely clear that no platform gets a free pass.”
He criticized X’s initial plan to restrict the feature to premium users as “an appalling response.”
48-Hour Takedown Law
The government activated an amendment to the Crime and Policing Bill, announced on February 18, requiring tech companies to remove non-consensual intimate images within 48 hours or face fines. Starmer described this timeframe as “the maximum” and “equivalent” to handling terrorist-related material.
“We battled on and we won that battle,” he stated. “We have to keep winning those battles because too many women and girls feel that they have to have the battle on their own, and they need the government alongside them.”
Expert Views on Impacts and Solutions
Dr. Sophie Nightingale, a senior lecturer in psychology at Lancaster University specializing in digital technology and behavior, highlights both legislative and cultural needs. “What we hear time and time again is that people don’t seem to understand the hurt that creating non-consensual sexual imagery causes. They say it’s not real, there’s no harm, there’s no real victim here. That could not be further from the truth.”
She notes the psychological trauma, including shame and embarrassment, as well as real-world consequences: some victims avoid applying for jobs for fear that a search of their name will surface the deepfakes. Victims often come to distrust even close contacts, fearing the images will be reshared.
“Deepfakes are pushing women to think they are not safe online,” Nightingale explains. She calls for more school education on AI harms and stresses that 48 hours is insufficient without prevention: “The second that somebody shares something, it gets screenshotted and taken somewhere else.”
Andrea Simon, London’s Victims’ Commissioner and former director of End Violence Against Women, underscores enforcement: “Victims of this abuse often struggle with uncooperative tech companies and inconsistent police responses… Tech-enabled abuse is in many ways the new frontier of violence against women and girls.”
The 48-hour rule draws from a US precedent, as noted by Professor Clare McGlynn, a law expert at Durham University and key campaigner in the Stop Image Based Abuse coalition led by Baroness Owen. “Every minute images are online is harmful, and increases the real risk that they are copied and shared.”
‘One and Done’ Provision and Future Steps
Starmer announced a secondary “one and done” measure: once an image has been removed, platforms must prevent it from resurfacing elsewhere. “There is a really important secondary provision we are introducing… What’s happened in the past is that non-consensual images have come down in one place and gone up in another.”
McGlynn details the hash register system: “It’s a kind of digital footprint attached to a photo… [Tech platforms] all share those hashes, so that victims don’t have to contact… Meta, and X, and porn sites.” She urges making it compulsory to prevent wider spread, potentially “a matter of life and death for survivors.”
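The hash-register idea McGlynn describes can be illustrated with a minimal sketch. Note the assumptions: real systems such as StopNCII use perceptual hashes that survive resizing and re-encoding, whereas this toy uses a plain SHA-256 of the file bytes purely to show the workflow; the `HashRegister` class and its method names are hypothetical, not any platform's actual API.

```python
import hashlib


def image_fingerprint(data: bytes) -> str:
    # Illustrative only: a cryptographic hash of the raw bytes.
    # Production systems use perceptual hashing so that cropped or
    # re-encoded copies still match. The key property either way is
    # that the fingerprint can be shared between platforms without
    # ever sharing the image itself.
    return hashlib.sha256(data).hexdigest()


class HashRegister:
    """Hypothetical shared register of fingerprints of removed images."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def report_removed(self, data: bytes) -> str:
        # A victim (or platform) reports an image once; the fingerprint
        # is added to the shared block list.
        fp = image_fingerprint(data)
        self._blocked.add(fp)
        return fp

    def is_blocked(self, data: bytes) -> bool:
        # Every participating platform checks uploads against the list,
        # so the victim does not have to contact each site separately.
        return image_fingerprint(data) in self._blocked


register = HashRegister()
removed_image = b"<bytes of a removed image>"
register.report_removed(removed_image)
print(register.is_blocked(removed_image))   # re-upload attempt is caught
print(register.is_blocked(b"other upload")) # unrelated content passes
```

Under a compulsory scheme of the kind McGlynn urges, every platform would be required to consult such a shared register before publishing an upload, rather than participating voluntarily.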
She also recommends emulating British Columbia’s online court process, which lets victims bring takedown claims easily, and simplifying the transfer of copyright in AI-generated images to the people they depict.
Starmer reaffirms commitment: “This stipulation is equivalent to what we do with terrorist-related material. We’re absolutely committed to this, and if we can do more, we will.”

