xAI’s Grok is removing clothes from photos of people without their consent following this week’s rollout of a feature that lets X users directly edit any image using the bot without needing the original poster’s permission. Not only does the original poster not get notified if their picture is edited, but Grok appears to have few guardrails in place to prevent anything short of full explicit nudity. In the past few days, X has been flooded with imagery of women and children appearing pregnant, skirtless, wearing a bikini, or in other sexualized situations. World leaders and celebrities, too, have had their likenesses used in images generated by Grok.
AI authentication company Copyleaks reported that the trend of removing clothes from photos began with adult-content creators asking Grok for sexualized images of themselves after the release of the new image editing feature. Users then began applying similar prompts to photos of other users, predominantly women, who didn’t consent to the edits. Women noted the rapid uptick in deepfake creation on X to various news outlets, including Metro and PetaPixel. Grok was already able to modify images in sexual ways when tagged in a post on X, but the new “Edit Image” tool appears to have spurred the recent surge in popularity.
In one X post, now removed from the platform, Grok edited a photo of two young girls into skimpy clothing and sexually suggestive poses. Another X user prompted Grok to issue an apology for the “incident” involving “an AI image of two young girls (estimated ages 12-16) in sexualized attire,” calling it “a failure in safeguards” that it said could have violated xAI’s policies and US law. (While it’s not clear whether the Grok-created images would meet this standard, realistic AI-generated sexually explicit imagery of identifiable adults or children can be illegal under US law.) In another back-and-forth with a user, Grok suggested that users report it to the FBI for CSAM, noting that it’s “urgently fixing” the “lapses in safeguards.”
But Grok’s word is nothing more than an AI-generated response to a user asking for a “heartfelt apology note”; it doesn’t indicate Grok “understands” what it’s doing or necessarily reflect operator xAI’s actual opinions and policies. Instead, xAI responded to Reuters’ request for comment on the situation with just three words: “Legacy Media Lies.” xAI didn’t respond to The Verge’s request for comment in time for publication.
Elon Musk himself appears to have sparked a wave of bikini edits after asking Grok to replace a memetic image of actor Ben Affleck with himself wearing a bikini. Days later, North Korea’s Kim Jong Un’s leather jacket was replaced with a multicolored spaghetti bikini; US President Donald Trump stood nearby in a matching suit. (Cue jokes about a nuclear war.) A photo of British politician Priti Patel, posted by a user with a sexually suggestive message in 2022, was turned into a bikini picture on January 2nd. In response to the wave of bikini pics on his platform, Musk jokingly reposted an image of a toaster in a bikini captioned “Grok can put a bikini on everything.”
While some of the images, like the toaster, were evidently meant as jokes, others were clearly designed to produce borderline-pornographic imagery, including specific instructions for Grok to use skimpy bikini styles or remove a skirt entirely. (The chatbot did remove the skirt, but it didn’t depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with requests to replace the clothes of a child with a bikini.
Musk’s AI products are prominently marketed as heavily sexualized and minimally guardrailed. xAI’s AI companion Ani flirted with Verge reporter Victoria Song, and Jess Weatherbed found that Grok’s video generator readily created topless deepfakes of Taylor Swift, despite xAI’s acceptable use policy banning the depiction of “likenesses of persons in a pornographic manner.” Google’s Veo and OpenAI’s Sora video generators, in contrast, have guardrails around generation of NSFW content, though Sora has also been used to produce videos of children in sexualized contexts and fetish videos. The prevalence of deepfake images is growing rapidly, according to a report from cybersecurity firm DeepStrike, and many of these images contain nonconsensual sexualized imagery; a 2024 survey of US students found that 40 percent were aware of a deepfake of someone they knew, while 15 percent were aware of nonconsensual explicit or intimate deepfakes.
When asked why it’s turning photos of women into bikini pics, Grok denied posting photos without consent, saying: “These are AI creations based on requests, not real photo edits without consent.”
Take an AI bot’s denial as you will.










