The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026.
The move marks a significant shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.
What the Law Now Defines as “Synthetically Generated Information”
The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource.
More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence.
This definition clearly encompasses deepfake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate. The concern is not digital alteration as such but deception: content that could reasonably be mistaken for reality.
At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered. Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record.
This distinction attempts to balance innovation in Information Technology with protection against misuse.
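To make the nesting of these carve-outs concrete, here is a minimal “rules as code” sketch in Python. Every name in it, the field labels, the category tags, the boolean shape of the test, is an illustrative assumption layered on the notification’s wording; an actual SGI determination is a legal judgment, not a function call.

```python
from dataclasses import dataclass

# Edit operations the carve-out treats as routine, good-faith processing
# (mirroring the examples listed in the amendment).
EXEMPT_EDITS = {
    "color_correction", "formatting", "transcription", "compression",
    "accessibility", "translation", "technical_enhancement",
}

@dataclass
class ContentItem:
    edit_types: set[str]        # operations applied to the content
    materially_altered: bool    # was the substance or meaning changed?
    appears_real: bool          # likely to be mistaken for reality?
    is_educational: bool        # educational material, template, illustration?
    creates_false_record: bool  # produces a false document or record?

def is_sgi(item: ContentItem) -> bool:
    """Rough decision logic for the clause (wa) definition and its
    carve-outs; field semantics are assumptions for illustration."""
    # Routine, good-faith edits that leave meaning intact are exempt.
    if item.edit_types <= EXEMPT_EDITS and not item.materially_altered:
        return False
    # Educational materials, templates, and conceptual illustrations are
    # exempt unless they create a false document or electronic record.
    if item.is_educational and not item.creates_false_record:
        return False
    # Otherwise, content that appears real, authentic, or true falls
    # within the definition.
    return item.appears_real
```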
New Duties for Intermediaries
The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any language listed in the Eighth Schedule to the Constitution, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations involve criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.
A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace (Prevention, Prohibition and Redressal) Act, 2013, and the Immoral Traffic (Prevention) Act, 1956.
Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting.
The compliance timelines have also been tightened. Content removal in response to valid orders must now occur within three hours instead of thirty-six hours. Certain grievance response windows have been reduced from fifteen days to seven days, and some urgent compliance requirements now demand action within two hours.
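For internal compliance tooling, these windows reduce to simple deadline arithmetic. A minimal sketch follows; the ticket categories are a hypothetical mapping, and only the durations come from the amended rules.

```python
from datetime import datetime, timedelta, timezone

# Durations from the amended rules; category names are illustrative.
SLA = {
    "content_removal_order": timedelta(hours=3),   # previously 36 hours
    "grievance_response": timedelta(days=7),       # previously 15 days
    "urgent_compliance": timedelta(hours=2),
}

def due_by(category: str, received_at: datetime) -> datetime:
    """Deadline by which a request of the given category must be actioned."""
    return received_at + SLA[category]

# Example: a removal order received now must be actioned within 3 hours.
print(due_by("content_removal_order", datetime.now(timezone.utc)))
```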
Due Diligence and Labelling Requirements for AI-generated Content
A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law.
This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.
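One plausible way to operationalise this duty is a pre-generation gate that screens requests against the prohibited categories before any output is produced. The sketch below assumes a hypothetical classify callable; the rules require “reasonable and appropriate technical measures” but do not prescribe any particular tool.

```python
from typing import Callable

# Prohibited categories from the amended rules, paraphrased as tags.
PROHIBITED = {
    "csam", "non_consensual_intimate_imagery", "obscene_material",
    "false_electronic_record", "explosives_or_arms_procurement",
    "deceptive_portrayal_of_real_person_or_event",
}

def screen_request(prompt: str, classify: Callable[[str], set[str]]) -> None:
    """Refuse a generation request whose predicted output falls into a
    prohibited category. `classify` stands in for whatever automated
    tool the intermediary actually deploys."""
    hits = classify(prompt) & PROHIBITED
    if hits:
        # Block before anything is generated and keep an audit trail.
        raise PermissionError(f"request blocked: {sorted(hits)}")
```

The same gate could run on the dissemination side, screening uploads rather than prompts.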
For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices, and audio content must include a disclosure at the start of the recording. In addition, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary’s computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata.
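As an illustration of what a visible notice plus embedded provenance could look like for a single still image, here is a sketch using the Pillow imaging library. The label wording, metadata keys, and identifier format are assumptions, not prescriptions from the rules.

```python
import uuid
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_and_tag(src_path: str, dst_path: str, platform: str) -> str:
    """Overlay a visible AI-content notice and embed a provenance
    identifier in a PNG text chunk. All names here are illustrative."""
    content_id = f"{platform}:{uuid.uuid4()}"  # unique identifier
    img = Image.open(src_path).convert("RGB")

    # Clearly visible notice, drawn onto the image itself.
    ImageDraw.Draw(img).text((10, 10), "AI-GENERATED CONTENT", fill="white")

    # Metadata travels with the file; the rules bar platforms from
    # enabling its suppression or removal.
    meta = PngInfo()
    meta.add_text("sgi-label", "synthetically-generated")
    meta.add_text("sgi-content-id", content_id)
    img.save(dst_path, "PNG", pnginfo=meta)
    return content_id
```

A plain PNG text chunk is trivially strippable, which is why the rules speak of permanent metadata and other provenance mechanisms; cryptographically signed manifests, such as those defined by the C2PA standard, are the more robust route.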
Enhanced Obligations for Social Media Intermediaries
Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated, and they must deploy technical measures to verify the accuracy of that declaration. Content confirmed as AI-generated must be clearly labelled before publication.
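A minimal sketch of that declare, verify, label sequence follows, assuming hypothetical detect and label callables; the rules mandate the outcome, not the tooling.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Submission:
    media: bytes
    declared_synthetic: bool  # user declaration required before publication

def prepare_for_publication(
    sub: Submission,
    detect: Callable[[bytes], bool],  # assumed verification measure
    label: Callable[[bytes], bytes],  # applies the visible label
) -> bytes:
    """Hypothetical pre-publication flow for a significant social
    media intermediary under the amended Rule 4."""
    detected = detect(sub.media)
    # Treat the content as synthetic if either the declaration or the
    # technical verification says so (conservative handling; a mismatch
    # between the two would also be a natural trigger for human review).
    if sub.declared_synthetic or detected:
        return label(sub.media)
    return sub.media
```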
If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations. The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.
Implications for Indian Cybersecurity and Digital Platforms
The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media.
For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences.
As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age.
