In the days after the US Department of Justice (DOJ) released 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, several users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in photos that were intended to protect their privacy.
While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”.
After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said compromised the names and faces of nearly 100 survivors.
But X users are trying to undo the redactions on even the photos of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” with the “@grok” handle, Bellingcat found more than 20 different photos and one video that multiple users were attempting to unredact using Grok. These included photos showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that were not picked up in our searches.
The photos appeared to show several children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.
From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users for Grok to “unblur” or identify the women and children in these photos. Grok noted in responses to questions or requests by some users that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive photos from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed.
We are not linking to these posts to prevent amplification.
The generations created by Grok ranged in quality from believable to comically bad, such as a baby’s face on a young woman’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.
Of the four requests we found during this period that Grok did not generate images in response to, it did not respond to one request at all. In response to another request, Grok said deblurring or enhancing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy.
The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has limited some of Grok’s image generation capabilities to paid subscribers since January amid an ongoing controversy over users employing the AI chatbot to digitally “undress” women and children.
X did not respond to multiple requests for comment.
However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 and Feb. 9, which we found using similar search terms as before, Grok did not attempt to unredact any of the images.
In most cases, Grok did not respond at all (14), while in two cases it generated AI images that were completely different from the photos uploaded in the user’s original request.
When a user commented on one of these requests that Grok was no longer working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”
As of publication, X had not responded to Bellingcat’s follow-up query about whether new guardrails had been put in place over the weekend.
Fabricated Images
This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images shared on X, which appeared to show Epstein alongside well-known figures such as US President Donald Trump and New York City mayor Zohran Mamdani as a child with his mother, were reportedly AI-generated. Some of the individuals shown in the false images, such as Trump, do appear in authentic photos, which can be seen on the DOJ website.
X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents.
For example, some users asked Grok to try to “unmask” the federal agent who killed Renee Good, resulting in an entirely fabricated face of a man who did not resemble the actual agent, Jonathan Ross, and a false accusation against a man who had nothing to do with the shooting.
Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…
— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36
After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, resulting in AI images that showed a completely different gun than the one actually owned by Pretti. In another instance, an AI-edited image of Pretti’s shooting falsely depicted the intensive care unit nurse holding a gun instead of his sunglasses.
Grok has also been at the centre of controversy for generating sexually explicit content.
On Twitter/X, users have found prompts to get Grok (their built-in AI) to generate photos of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X.
I’ve tried to blur a few examples of it below.
— Kolina Koltai (@koltai.bsky.social) 6 Might 2025 at 03:20
Several countries including the UK and France have launched investigations into Elon Musk’s chatbot over reports of people using it to generate non-consensual deepfake sexual images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content.
One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in the 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to paid subscribers only. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people in revealing clothing such as bikinis.
Bellingcat is a non-profit and our ability to carry out our work relies on the kind support of individual donors. If you would like to support our work, you can do so here. You can also subscribe to our Patreon channel here. Subscribe to our Newsletter and follow us on Bluesky here and Mastodon here.