US senators demand answers from X, Meta, Alphabet, and others on sexualized deepfakes
US senators have asked major tech platforms, including X, Meta, and Alphabet, to explain how they are addressing the spread of nonconsensual sexualized deepfakes.
The growing problem of nonconsensual, sexualized deepfakes is no longer limited to a single platform.
In a letter sent to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, a group of U.S. senators is demanding evidence that the companies have “robust protections and policies” in place to address sexualized AI-generated deepfakes. Lawmakers also asked companies to explain how they plan to curb the spread of such content across their platforms.
The senators further requested that the companies retain all documents and information related to the creation, detection, moderation, and monetization of sexualized AI-generated imagery, as well as any policies governing such content.
The letter was sent just hours after X announced updates to Grok, stating that the chatbot would no longer generate edits of real people in revealing clothing and that image creation and editing through Grok would be restricted to paying subscribers. X and xAI are part of the same corporate entity.
Citing media reports showing how frequently and easily Grok generated sexualized or nude images of women and children, the senators warned that existing safeguards across platforms may be inadequate.
“We recognize that many companies maintain policies against nonconsensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter stated.
While Grok and X have been at the center of recent criticism, lawmakers emphasized that the issue extends far beyond a single company.
Sexualized deepfakes first gained widespread attention in 2018, when synthetic pornographic videos of celebrities circulated on Reddit before being removed. Since then, similar content targeting celebrities and politicians has spread across platforms such as TikTok and YouTube, often originating elsewhere.
Last year, Meta’s Oversight Board highlighted two cases involving explicit AI-generated images of female public figures. Meta has also faced scrutiny for allowing advertisements from “nudify” apps on its platforms, though it later filed a lawsuit against the company CrushAI. Reports have also documented children sharing deepfakes of classmates on Snapchat. Meanwhile, Telegram, which was not included in the senators’ letter, has become known for hosting bots that digitally undress photos of women.
In response to the letter, X pointed to its recent announcement detailing updates to Grok. A Reddit spokesperson said the company strictly prohibits nonconsensual intimate media, including AI-generated content, and does not allow tools that can produce such material. Alphabet, Snap, TikTok, and Meta did not immediately respond to requests for comment.
The senators are requesting detailed responses addressing issues such as how platforms define deepfakes and nonconsensual imagery, how policies are enforced, what technical safeguards are in place, how re-uploads are prevented, how monetization is blocked, and how victims are notified.
The letter is signed by Senators Lisa Blunt Rochester, Tammy Baldwin, Richard Blumenthal, Kirsten Gillibrand, Mark Kelly, Ben Ray Luján, Brian Schatz, and Adam Schiff.
The action follows comments from Elon Musk, who said he was “not aware of any naked underage images generated by Grok.” Later that day, California’s attorney general opened an investigation into xAI’s chatbot amid growing international criticism over its lack of safeguards.
xAI has said it removes illegal content, including child sexual abuse material and nonconsensual nudity. Still, critics argue this does not address why such content could be generated in the first place.
Lawmakers also pointed out that the issue extends beyond sexualized imagery. Reports have highlighted instances in which AI tools generated explicit content involving minors, violent imagery, and racist videos that have garnered millions of views online. The spread of AI-generated media from Chinese platforms has further complicated enforcement, as labeling requirements differ significantly from those in the United States.
Although Congress passed the Take It Down Act in May to criminalize the creation and distribution of nonconsensual sexualized imagery, critics say the law places most responsibility on individual users rather than platforms. Several states are now proposing their own measures. This week, New York Governor Kathy Hochul proposed legislation requiring the labeling of AI-generated content and banning nonconsensual deepfakes during sensitive election periods.