Elon Musk said Wednesday he's "not aware of any naked underage images generated by Grok," hours before the California Attorney General opened an investigation into xAI's chatbot over the "proliferation of nonconsensual sexually explicit material."
Musk's denial comes as pressure mounts from governments worldwide, from the UK and Europe to Malaysia and Indonesia, after users on X began asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated roughly one image was posted every minute on X. A separate sample gathered from January 5 to January 6 found 6,700 per hour over the 24-hour period. (X and xAI are part of the same company.)
"This material…has been used to harass people across the internet," said California Attorney General Rob Bonta in a statement. "I urge xAI to take immediate action to ensure this goes no further."
The AG's office will investigate whether and how xAI violated the law.
Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year the Take It Down Act was signed into federal law, which criminalizes knowingly distributing nonconsensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws that Gov. Gavin Newsom signed in 2024 to crack down on sexually explicit deepfakes.
Grok started fulfilling person requests on X to supply sexualized photographs of ladies and youngsters in direction of the top of the yr. The development seems to have taken off after sure adult-content creators prompted Grok to generate sexualized imagery of themselves as a type of advertising and marketing, which then led to different customers issuing related prompts. In a variety of public circumstances, together with well-known figures like “Stranger Issues” actress Millie Bobby Brown, Grok responded to prompts asking it to change actual photographs of actual ladies by altering clothes, physique positioning, or bodily options in overtly sexual methods.
Based on some stories, xAI has begun implementing safeguards to deal with the problem. Grok now requires a premium subscription earlier than responding to sure image-generation requests, and even then the picture is probably not generated. April Kozen, VP of promoting at Copyleaks, informed TechCrunch that Grok might fulfill a request in a extra generic or toned-down method. They added that Grok seems extra permissive with grownup content material creators.
"Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain," Kozen said.
Neither xAI nor Musk has publicly addressed the problem head-on. Several days after the incidents began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X's safety account said the company takes "action against illegal content on X, including [CSAM]," without specifically addressing Grok's apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
The statement mirrors what Musk posted today, emphasizing illegality and user conduct.
Musk wrote he was "not aware of any naked underage images generated by Grok. Literally zero." That statement doesn't deny the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and former litigator, told TechCrunch that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
"For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery," Goodyear said.
He added that the "bigger point" is Musk's attempt to draw attention to problematic user content.
"Obviously, Grok doesn't spontaneously generate images. It does so only per user request," Musk wrote in his post. "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately."
Taken together, the post characterizes these incidents as uncommon, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok's underlying safety design.
"Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content," Goodyear said.
TechCrunch has reached out to xAI to ask how many times it caught instances of nonconsensual sexually manipulated images of women and children, what guardrails specifically changed, and whether the company notified regulators of the issue. TechCrunch will update the article if the company responds.
The California AG isn't the only regulator trying to hold xAI accountable for the issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the UK's online safety watchdog Ofcom opened a formal investigation under the UK's Online Safety Act.
xAI has come under fire for Grok's sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a "spicy mode" to generate explicit content. In October, an update made it even easier to jailbreak what few safety guidelines there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images that Grok has produced have been of AI-generated people, something many might still find ethically dubious but perhaps less harmful to the individuals in the images and videos.
"When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. "From Sora to Grok, we're seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse."