India orders social media platforms to take down deepfakes faster
India directs social media platforms to remove deepfake content more quickly, tightening enforcement under its 2021 IT Rules to curb misinformation and protect users online.
India has directed social media companies to strengthen oversight of deepfakes and other AI-generated impersonation content and to significantly reduce the time allowed to comply with takedown directives. The move could substantially influence how global technology firms handle moderation in one of the world’s largest and fastest-growing internet markets.
The revisions, issued Tuesday as amendments to India’s 2021 IT Rules, formally bring deepfakes under a structured regulatory framework. The updated rules require labelling and traceability for synthetic audio and visual material and sharply compress compliance deadlines. Platforms now face a three-hour limit to act on official takedown orders and a two-hour window for certain urgent user complaints.
India’s scale as a digital economy magnifies the impact of these changes. With more than a billion internet users and a largely young population, the country represents a crucial growth market for firms such as Meta and Google’s YouTube. Compliance approaches developed for India may influence broader global moderation policies and product features.
Under the amended framework, platforms that enable users to upload or share audiovisual content must require clear disclosures when content is synthetically generated. They must also implement mechanisms to verify those disclosures and ensure that deepfakes carry visible labels and traceable provenance data embedded in the material.
Certain forms of synthetic content — including deceptive impersonation, non-consensual intimate imagery, and material associated with serious criminal activity — are explicitly prohibited. Failure to comply, particularly when authorities or users flag violations, may put companies at risk of losing safe-harbour protections under Indian law, increasing potential legal liability.
To meet these obligations, platforms are expected to rely heavily on automated systems. They must deploy technical tools to validate user disclosures, detect and label deepfakes, and prevent the creation or circulation of banned synthetic content.
Rohit Kumar, founding partner at New Delhi-based policy consultancy The Quantum Hub, described the changes as a more calibrated regulatory effort targeting AI-generated deepfakes. However, he noted that the significantly shortened grievance response timelines — including two- to three-hour takedown requirements — will substantially increase compliance pressures. He added that the link between non-compliance and the potential loss of safe-harbour status warrants scrutiny.
Aprajita Rana, a partner at the Indian corporate law firm AZB & Partners, noted that the revised rules narrow their scope to AI-generated audiovisual content rather than covering all online information. She also pointed out that routine, cosmetic, or efficiency-driven uses of AI are carved out as exceptions. Still, she cautioned that the obligation to remove content within three hours of becoming aware of it may depart from established free speech norms. She further stated that labelling obligations extend across formats in an effort to curb the spread of child sexual abuse material and deceptive media.
The Internet Freedom Foundation, a New Delhi-based digital rights advocacy group, warned that the compressed timelines could accelerate censorship by leaving little room for human review and encouraging automated over-removal. In a statement shared on X, the group also raised concerns about expanded prohibited content categories and provisions that permit platforms to disclose user identities to private complainants without judicial oversight.
“These impossibly short timelines eliminate any meaningful human review,” the organisation said, cautioning that the amendments may undermine due process and free speech protections.
Two industry sources indicated that the amendments followed a relatively limited consultation process, with only a small number of stakeholder recommendations reflected in the final text. While the government appears to have narrowed the scope of regulated material to AI-generated audiovisual content rather than all online content, other proposed measures were reportedly not incorporated. According to the sources, the extent of revisions between the draft and final rules may have justified an additional round of consultation to clarify compliance expectations.
Government takedown authority has long been a contentious issue in India. Social media firms and civil society organisations have criticised the breadth and opacity of removal directives. Even X, owned by Elon Musk, has challenged New Delhi in court over content-blocking orders, arguing that they exceeded appropriate limits and lacked sufficient safeguards.
Meta, Google, Snap, and X, as well as India’s IT ministry, did not respond to requests for comment.
The updated rules arrived just months after the Indian government, in October 2025, reduced the number of officials authorised to issue internet content removal orders, following a legal challenge by X regarding the scope and transparency of such powers.
The amendments will take effect on February 20, giving platforms a limited time to adjust their compliance systems. The timing coincides with India’s hosting of the AI Impact Summit in New Delhi from February 16 to 20, an event expected to draw senior technology leaders and policymakers from around the world.