Regulating Synthetically Generated Information: India’s IT Rules Amendment of 2026
The rapid proliferation of artificial intelligence technologies has ushered in an era of synthetically generated information (SGI), encompassing deepfakes, AI-altered audio-visual content, and algorithmically manipulated media that blur the line between reality and fabrication. In response, the Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment Rules”) on 10 February 2026, amending the 2021 Rules. Effective from February 20, 2026, these changes mark India’s first comprehensive statutory framework for SGI, prioritizing transparency, swift enforcement, and accountability amid rising misinformation threats.
The objective of these rules is to address exponential digital growth, from 250 million to over 1 billion internet users in less than a decade, which has outpaced digital literacy and fuelled SGI-driven scams. These have ranged from financial fraud to celebrity impersonation, false endorsements, and merchandise bearing celebrity names. Over the last year, the Delhi and Bombay High Courts have witnessed a flurry of personality rights cases involving SGI impersonations, underscoring a judicial consensus on the harm and injury such impersonations cause.
Definition of Synthetically Generated Information (SGI)
The cornerstone of the Amendment Rules lies in the introduction of the definition of “synthetically generated information” under Rule 2(1)(wa), a novel definition absent in the original 2021 framework. SGI is defined as any audio, visual, or audio-visual content that is “artificially or algorithmically created, generated, modified or altered using a computer resource” in a manner that depicts or conveys information appearing authentic but which a reasonable person would perceive as realistically depicting persons, events, or scenes that have not occurred. Exclusions carve out bona fide uses, such as routine photo/video editing, academic/research content, watermarking for branding, or AI training data devoid of realistic impersonation.
This precise delineation addresses ambiguities in the prior rules while targeting high-risk deepfakes, including, but not limited to, realistic forgeries often used for electoral manipulation, defamation, or non-consensual pornography. MeitY further clarifies that the focus is multimodal (images, audio, video) rather than textual content, narrowing the scope to perceptible harms. However, platforms and intermediaries fear that overbroad interpretations could sweep in legitimate generative AI tools, potentially stifling innovation in the absence of clear technical standards for the term “realistic depiction”.
Labelling and Metadata Obligations for Intermediaries
- Amendments to Rule 3(1) impose stringent due diligence on intermediaries, particularly Significant Social Media Intermediaries (SSMIs) with over 5 million users. Rule 3(3) mandates prominent disclosure of SGI at upload, prohibiting users from removing or suppressing labels/metadata. Intermediaries must ensure all SGI is embedded with unalterable metadata or unique identifiers tracing origin, provenance, and alterations, while deploying algorithms to detect and prevent non-disclosure.
- Rule 4(1A), inserted for SSMIs, elevates user declarations from mere endeavours to mandatory verification: platforms must confirm SGI disclosures before publication and reject non-compliant uploads. The Rules emphasize a clear and prominent marking visible to recipients, underscoring permanence and mandating that no intermediary may facilitate label stripping. Non-compliance risks forfeiture of safe harbour immunity under Section 79(1) of the Information Technology Act, 2000, exposing platforms to vicarious liability for user-generated SGI violations.
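The metadata obligation described above, embedding an unalterable identifier that traces origin and alterations, can be illustrated with a minimal sketch. The `SGILabel` record and the `label_sgi`/`verify_label` helpers below are hypothetical names invented for illustration, not anything prescribed by the Rules; production systems would more likely adopt an established provenance standard such as C2PA content credentials.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SGILabel:
    """Hypothetical provenance record attached to a piece of SGI."""
    content_hash: str   # SHA-256 of the media bytes, to detect later alteration
    generator: str      # tool or model that produced/modified the content
    declared_sgi: bool  # the uploader's mandatory SGI declaration
    labelled_at: str    # UTC timestamp of labelling

def label_sgi(media: bytes, generator: str, declared_sgi: bool) -> str:
    """Build a JSON provenance record to embed alongside the media."""
    record = SGILabel(
        content_hash=hashlib.sha256(media).hexdigest(),
        generator=generator,
        declared_sgi=declared_sgi,
        labelled_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

def verify_label(media: bytes, label_json: str) -> bool:
    """Tamper check: does the stored hash still match the media bytes?"""
    stored = json.loads(label_json)["content_hash"]
    return stored == hashlib.sha256(media).hexdigest()
```

Because the record binds a cryptographic hash to the content, any downstream edit to the media invalidates the label, which is the property the "unalterable metadata" obligation appears to contemplate.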
Accelerated Takedown Regime and Grievance Redressal
- A paradigm shift emerges in enforcement timelines, with a clear obligation on SSMIs to act on complaints upon actual knowledge received through (i) an order of a court of competent jurisdiction or (ii) a reasoned intimation from the authorised officer of the Appropriate Government or its agency. For unlawful SGI, SSMIs must remove or disable access, or issue warnings to the publisher, within three hours, slashing the prior 36-hour window.
- Grievance officers are now required to acknowledge complaints within twenty-four hours and resolve issues within seven days (down from fifteen). The Rules also specify that intermediaries must deploy reasonable and appropriate technical measures, such as automated detection, to prevent SGI that is unlawful or prohibited under Rule 3(3)(a)(i), including content depicting children in a sexually explicit manner (CSEAM), sexually explicit content, and non-consensual intimate imagery.
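Taken together, the compressed timelines reduce to a small set of service-level windows. As a hedged illustration only (the constants and the `deadlines` helper are this author's sketch, not text from the Rules), an SSMI's compliance tooling might compute the deadlines from the moment notice is received:

```python
from datetime import datetime, timedelta

# Windows drawn from the Amendment Rules' timelines as described above:
# 3 hours to remove/disable unlawful SGI after actual knowledge,
# 24 hours to acknowledge a grievance, 7 days to resolve it.
TAKEDOWN_WINDOW = timedelta(hours=3)
ACK_WINDOW = timedelta(hours=24)
RESOLUTION_WINDOW = timedelta(days=7)

def deadlines(received_at: datetime) -> dict:
    """Compute compliance deadlines from the time notice is received."""
    return {
        "takedown_by": received_at + TAKEDOWN_WINDOW,
        "acknowledge_by": received_at + ACK_WINDOW,
        "resolve_by": received_at + RESOLUTION_WINDOW,
    }
```

The point of the sketch is the compression itself: a notice received at noon must be actioned the same afternoon, not within the working day and a half the 2021 Rules allowed.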
Challenges for Intermediaries/Businesses
- The labelling requirement for SGI marks a shift to proactive obligations, akin to watermarking under the EU AI Act. However, the lack of grace periods for technology upgrades could burden SMEs disproportionately.
- Businesses view the three-hour clock as operationally unfeasible without 24/7 human-AI hybrid moderation, risking erroneous takedowns and chilling free speech under Article 19(1)(a).
- The amendments explicitly link SGI lapses to breach of the due diligence requirements. The amendments to Rule 3(2) specify that failure to expeditiously remove flagged content post-notice voids protection as an intermediary, incentivizing over-removal.
- MeitY has clarified that the Rules apply platform-wide, not per post, which could lead platforms to pre-emptively censor ambiguous content to mitigate risk.
- While advancing transparency, the new Rules raise certain implementation hurdles. The technical feasibility of metadata persistence across edits and sharing chains remains unaddressed, and no government-provided tools are offered under the amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
- The overall cost burden of implementation and talent shortages for compliance teams may dampen innovation, as resources are diverted towards compliance.
Conclusion
While concerns about free speech curtailment and operational infeasibility remain, such criticisms overlook India’s scam epidemic, such as the use of SGI in “digital arrest” frauds exploiting low-literacy users. The 2026 Amendments mark a proactive, SGI-centric evolution of India’s intermediary liability framework, mandating labelling, verification, and ultra-swift takedowns to combat AI-fuelled misrepresentation and misinformation. By tethering compliance to safe harbour survival, the Amended Rules compel platforms to internalize harms, fostering a safer digital ecosystem. Yet the success of the new regime will hinge on clarificatory guidelines, technical aids, and judicial guardrails that balance innovation with rights. Policymakers must monitor efficacy through annual reports and refine ambiguities, lest overregulation stifle the very openness AI promises.
