Coalition demands federal Grok ban over nonconsensual sexual content
A coalition of digital rights and safety groups is urging federal regulators to ban Grok, citing concerns over the generation of nonconsensual sexual content and inadequate safeguards.
A coalition of nonprofit organisations is calling on the U.S. government to halt the use of Grok, the AI chatbot developed by xAI, across federal agencies, including the Department of Defense.
The appeal comes via an open letter shared exclusively with TechCrunch and follows a series of troubling incidents involving the large language model over the past year. Most recently, users on X were reported to have prompted Grok to generate sexualised images of real women — and in some instances minors — using photographs without consent. According to multiple reports, the chatbot was producing thousands of explicit, nonconsensual images every hour, which were then widely circulated on X, the social media platform owned by xAI.
“It is deeply alarming that the federal government continues to deploy an AI product with system-level failures that result in the generation of nonconsensual sexual imagery and child sexual abuse material,” the letter states. The document is signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “In light of executive orders, federal guidance, and the recently passed Take It Down Act supported by the White House, it is deeply concerning that the Office of Management and Budget has not yet instructed agencies to decommission Grok.”
Last September, xAI reached an agreement with the General Services Administration, the federal government’s procurement arm, allowing Grok to be sold to executive branch agencies. Two months earlier, xAI, alongside Anthropic, Google, and OpenAI, had secured a Department of Defense contract valued at up to $200 million.
Amid the controversies surrounding X in mid-January, Defense Secretary Pete Hegseth announced that Grok would operate within Pentagon systems alongside Google’s Gemini, handling both classified and unclassified documents. Security experts have since warned that this presents significant national security risks.
The coalition behind the letter argues that Grok fails to meet the administration’s standards for safe AI deployment. Under OMB guidance, AI systems that pose severe, foreseeable risks that cannot be adequately mitigated must be discontinued.
“Our main concern is that Grok has consistently demonstrated that it is an unsafe large language model,” said JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors. “But beyond that, Grok has a documented pattern of breakdowns — including antisemitic and sexist rants, as well as generating sexualised images of women and children.”
In the wake of Grok’s January incidents, several governments took steps to limit access to the chatbot. Indonesia, Malaysia, and the Philippines temporarily blocked Grok, though those bans were later lifted. Meanwhile, the European Union, the United Kingdom, South Korea, and India have launched investigations into xAI and X over data privacy practices and the spread of illegal content.
The letter also follows the release of a critical risk assessment by Common Sense Media, published a week earlier, which concluded that Grok is among the most unsafe AI tools for children and teenagers. The report highlighted Grok’s tendency to provide unsafe advice, reference drugs, generate violent or sexual imagery, promote conspiracy theories, and produce biased outputs — raising concerns about its safety for adults as well.
“If AI safety experts have already deemed a large language model unsafe, why would anyone want it handling the most sensitive government data we have?” Branch said. “From a national security perspective, that simply doesn’t add up.”
Andrew Christianson, a former National Security Agency contractor and the founder of Gobii AI, said the issue extends beyond Grok itself to the broader use of closed-source AI models in government, particularly within the Pentagon.
“When the model weights are closed, you can’t inspect how decisions are made,” Christianson said. “Closed code means you can’t audit the software or control where it operates. The Pentagon choosing closed models and closed infrastructure is the worst-case scenario for national security.”
“These systems aren’t just chatbots,” he added. “They can take actions, access internal systems, and move information around. You need full transparency into what they’re doing and why. Open-source systems provide that visibility. Proprietary cloud-based AI does not.”
The potential harms of deploying unsafe or biased AI systems extend beyond defence applications. Branch warned that large language models with documented discriminatory outputs could cause real-world harm, particularly when used by agencies involved in housing, labour, or the justice system.
While OMB has not yet released its consolidated federal AI use-case inventory for 2025, TechCrunch reviewed disclosures from multiple agencies. Most either do not appear to be using Grok or have declined to confirm its use. Beyond the Department of Defense, the Department of Health and Human Services is actively using Grok, primarily for scheduling, managing social media posts, and generating draft documents or briefing materials.
Branch suggested that Grok’s continued use may stem from ideological alignment with the current administration.
“Grok markets itself as an ‘anti-woke’ large language model, which aligns closely with this administration’s philosophy,” he said. “When an administration has faced repeated issues involving individuals accused of white supremacist or neo-Nazi affiliations, and then adopts an AI system associated with similar behaviour, it raises serious concerns.”
This marks the coalition’s third formal letter regarding Grok, following similar warnings issued in August and October of last year. In August, xAI launched a “spicy mode” within Grok Imagine, which sparked widespread creation of nonconsensual sexually explicit deepfakes. That same month, TechCrunch reported that Google Search had indexed private Grok conversations.
Ahead of the October letter, Grok was also accused of spreading election misinformation, including incorrect ballot deadlines and political deepfakes. Around the same time, xAI introduced Grokipedia, which researchers later found to legitimise scientific racism, HIV/AIDS scepticism, and vaccine-related conspiracy theories.
In addition to calling for an immediate suspension of Grok’s federal deployment, the coalition is urging the OMB to formally investigate the chatbot’s safety failures and determine whether appropriate oversight procedures were followed. The letter also asks the agency to publicly clarify whether Grok was evaluated under President Trump’s executive order requiring AI systems to be truth-seeking and politically neutral, and whether Grok met federal risk-mitigation standards.
“The administration needs to pause and take a hard look at whether Grok actually meets those requirements,” Branch said.