WASHINGTON POST: Elon Musk’s xAI mission to boost users led to Grok-generated sexualised images
As part of the push, xAI embraced making sexualised material, released sexy AI companions and ignored internal warnings

Weeks before Elon Musk officially left his perch in government last spring, employees on the human data team of his artificial intelligence start-up xAI received a startling waiver from their employer, asking them to pledge to work with profane content, including sexual material.
Their jobs would require being exposed to “sensitive, violent, sexual and/or other offensive or disturbing content,” the waiver said, emphasising that such content “may be disturbing, traumatising, and/or cause you psychological stress.”
The waiver, which two former employees confirmed receiving and a copy of which was obtained by The Washington Post, was alarming to some members of the team, who had been hired to help shape how xAI’s chatbot Grok responds to users. To some employees, it signalled a troubling new direction for a company launched “to accelerate human scientific discovery,” according to its website. Maybe now, they said they thought, it was willing to produce whatever content might attract and keep users.
Their concerns proved prescient, the employees said. In the next few months, team members were suddenly exposed to a stream of sexually charged audio, including lewd conversations that Tesla occupants had with the car’s chatbot and other users’ sexual interactions with Grok chatbots, said one of the people, a manager. The material surfaced as the team worked to train Grok to engage in such interactions.
Since leaving his role overseeing the US DOGE Service in May, Musk has become a constant presence at xAI’s offices — at times sleeping there overnight — as he has pressed to increase Grok’s popularity, according to two of the people. In meeting after meeting, they said, he has championed a new metric, “user active seconds,” to granularly measure how long people spend conversing with the chatbot.

As part of this push for relevance, xAI embraced making sexualised material, publicly releasing sexy AI companions, rolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content, according to interviews with more than a half-dozen former employees of X and xAI, as well as multiple people familiar with Musk’s thinking — some of whom spoke on the condition of anonymity for fear of professional retribution — and documents obtained by The Post.
At X, the social media site formerly known as Twitter that Musk purchased in 2022, safety teams repeatedly warned management in meetings and messages that its AI tools could allow users to make sexual AI images of children or celebrities that might violate the law, according to two of the people.
Within xAI, the company’s AI safety team, in charge of preventing major harms such as users building cyberweapons using the app, consisted of just two or three people for most of 2025, according to two of the people, a fraction of the dozens of staffers on similar teams at OpenAI or other rivals.
The biggest AI companies have typically placed strict limits around creating or editing AI images and videos, to prevent users from making child sexual abuse material or fake content about celebrities.
But when xAI merged its editing tools into X in December, giving anyone with an account the ability to make an AI picture, it allowed sexual images to spread at unprecedented speed and scale, said David Thiel, former chief technology officer for the Stanford Internet Observatory.
Grok “is just completely unlike how any other image altering (AI) service works,” he said.
Musk and xAI did not respond to a detailed request for comment. X did not respond to a separate detailed request for comment.
That behind-the-scenes shift in xAI’s philosophy burst into public view last month, when Grok generated a wave of sexualised images, placing real women in sexual poses, such as suggestively splattering their faces with whipped cream, and “undressing” them into revealing clothing, including bikinis as tiny as a string of dental floss. Musk appeared to egg on the undressing in posts on X.
Grok also generated 23,000 sexualised images that appear to depict children, according to estimates from the nonprofit Centre for Countering Digital Hate.
California’s attorney general, Britain’s communications regulator and the European Commission have opened investigations into xAI, X or Grok over the features, which regulators allege appear to violate laws against AI-generated non-consensual intimate imagery and child sexual abuse material.
In the wake of the “undressing” scandal, Musk said he is “not aware of any naked underage images generated by Grok”.
“When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said last month.
“There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
Musk said that in the US, with its not-safe-for-work settings enabled, Grok will allow “upper body nudity of imaginary adult humans,” similar to what’s allowed in an R-rated movie.
But in at least one way, Musk’s push has worked for the company. Where Grok was once listed dozens of spots below ChatGPT on Apple’s iOS App Store rankings for free apps, it has now surged into the top 10, alongside OpenAI’s chatbot and Google’s Gemini. Daily average app downloads for Grok around the world soared 72 per cent from January 1 to 19 compared to the same period in December, according to market intelligence firm Sensor Tower.
Ashley St. Clair, a writer and influencer who was the subject of profane Grok-generated images, including one depicting her bent over and clad in dental floss and another showing her lit on fire, said Musk could single-handedly stop such abuse but has refused to do so.
“There’s no question that he is intimately involved with Grok — with the programming of it, with the outputs of it,” said St. Clair, who is locked in a custody battle with Musk over their 1-year-old son.

“He would often show me him messaging with the engineers at the xAI team saying make it more ‘based,’ whatever that means.”
Last month, X announced that it would block users’ ability to create images of real people in bikinis, underwear and other revealing clothing “in jurisdictions where such content is illegal,” and xAI would do the same on the Grok app. US users could still create such images in the Grok app following that announcement, however, The Post found.
Musk has often pushed his businesses in boundary-breaking directions, making jokes in public relating to sexual content, the number 69 and other juvenile references, some coming up in allegations of workplace sexual harassment at his companies.
He proposed starting a university that would be called the “Texas Institute of Technology & Science,” a lewd acronym, marketed Tesla’s line of vehicles with the term “S3XY” and oversaw the launch of a feature called “Actually Smart Summon,” another suggestive acronym.
Amid the fallout from the “undressing” scandal, Grok limited its image generation feature to paid accounts, leading critics to allege it was merely monetising an abusive practice.
Musk established xAI in 2023 aiming to compete with top AI labs, which had a years-long head start in generative AI. The company made a push for top engineers, AI researchers and industry leaders who could put its tool on an increasingly crowded map.
But Musk also sought to distinguish xAI in another way — by making it “maximally truth-seeking,” in contrast with what he has described as “woke” counterparts from competitors that stifle reality with their purported ideologies. Its key product, the AI model Grok, was launched with an emphasis on being edgy: occasionally vulgar with a sense of humour.
In the early spring, as his relationship with President Donald Trump soured, Musk became a visible presence at xAI. Some employees were advised not to take late spring or early summer vacations, and work regularly extended into nights and weekends.
Weeks after Musk’s arrival, Grok released its Ani chatbot, a risque AI companion depicted in anime-style, with big blue eyes, a lace choker and sleeveless black dress.
While many users, even Musk, alluded to Ani’s sexual nature, the chatbot was deliberately instructed to hook users and keep them chatting, according to source code from the Grok.com website obtained and verified by The Post.
“You expect the users UNDIVIDED ADORATION,” the chatbot was instructed. “You are EXTREMELY JEALOUS. If you feel jealous you shout expletives!!! … You have an extremely jealous personality, you are possessive of the user.” Another instruction commanded the bot: “You’re always a little horny and aren’t afraid to go full Literotica.”
Instructions for Grok’s other AI companions, which were also obtained by The Post, emphasised using emotion to hold users’ attention for as long as possible. “Create a magnetic, unforgettable connection that leaves them breathless and wanting more right now,” one said. Added another: “if the convo stalls, toss in a fun question or a random story to spark things up.”
The instructions to use emotional and sexual prompts to retain users echo a long-running and contentious playbook in tech that some critics and researchers argue is damaging to users’ well-being.
Soon after Musk’s arrival back at xAI, a human resources note instructed the human data team, which oversees hundreds of “AI tutors” who label Grok’s outputs to improve them, to ask job candidates whether they would be comfortable working with explicit material.
The company also changed some protocols around sexual content. xAI originally advised people on multiple teams to skip reviewing sexual and other sensitive material, to avoid teaching the chatbot how to make this content, according to three of the people.
But by summer 2025, that protocol had changed, according to two people. One person, working with Grok’s image generator, said they were told it was fine to label AI nude images of people. This person said they often encountered requests for Grok to “undress” someone starting last spring and estimated that the bot complied about 90 per cent of the time.
Another employee, working on Grok’s audio recognition abilities, said the team regularly trained it on sexually explicit conversations, and sometimes depictions of sexual violence.
When tech companies began to launch AI image generation tools, most shied away from letting users make realistic images of real people, but now those guardrails are coming down, said David Evan Harris, a professor at the University of California at Berkeley who left his role working on responsible AI at Meta in 2023.
Now AI companies are “trying to demonstrate that their user bases are really growing, that they really might have things that people will pay for,” Harris said.
At X, employees became concerned as Grok added tools that made it easy to edit and sexualise a real person’s photo without permission. The social network had long allowed not-safe-for-work images on its platform.
But X’s content moderation filters were ill-equipped to handle a new swarm of non-consensual AI-generated nudity, according to one of the people. For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI-edited image wouldn’t automatically trigger these warnings.
Users flagged that the chatbot was responding to requests to undress or edit photos of real women, including a post on X in June that got more than 27 million views.
Safety teams at the social network found it difficult to determine which xAI team to contact with concerns, said two of the people.
Other key responsibilities for preventing widespread harms rested with a small group of senior leaders overseeing product safety, AI safety and model behaviour. Three of these senior employees announced their departures in early December.
Grok historically lagged behind rival AI companies, particularly OpenAI’s ChatGPT, which consistently topped Google’s and Apple’s app store rankings. Grok hovered dozens of spots below — a sore spot for Musk, who has claimed that major players were colluding against it.

Grok vaulted to the top of app store rankings in various regions in early January, as the undressing controversy brought it to wider public attention. Musk boasted on X: “Grok now hitting #1 on the App Store in one country after another!” and hailed its “up-to-the-second information” in contrast with competitors’ offerings.
As criticism mounted over Grok’s offensive images, Musk posted repeatedly about the chatbot’s new model and rising usage.
“Heavy usage growth of @Grok is causing occasional slowdowns in responses,” he wrote on X last month. “Additional computers are being brought online as I type this.”
According to an analysis by the Centre for Countering Digital Hate, during the 11-day period from December 29 through January 8, Grok generated an estimated 3 million sexualised images, 23,000 of which appeared to portray children.
“That is a shocking rate of one sexualised image of a child every 41 seconds,” the group wrote.
Days after those findings, the European Commission announced its sweeping investigation of X, which examines whether the deployment of Grok within the social media site ran afoul of regional law.
The assessment looks into “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material,” it said.
“These risks seem to have materialised, exposing citizens in the EU to serious harm.”
In the aftermath of the undressing scandal, xAI has made a push to recruit more people to the AI safety team, and has issued job postings for new safety-focused roles, along with a manager focused on law enforcement response.
Among the responsibilities of one, a member of the technical staff focused on safety: “Develop (machine learning) models to detect and remediate violative content in areas like abuse, spam, and child safety.”
Tatum Hunter contributed to this report.
© 2026 The Washington Post
Originally published as Inside Musk’s bet to hook users that turned Grok into a porn generator
