
Regulating Deepfakes and Synthetic Media: India's draft IT Amendment Rules, 2025


by Ms. Amisha Singh (Senior Associate) and Mr. Savan Dhameliya (Associate)

The Ministry of Electronics and Information Technology (MeitY) released the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, on 22 October 2025, inviting public feedback on the proposed changes. According to the Ministry, the Government of India continues to emphasise its commitment to fostering an open, safe, trusted and accountable digital ecosystem. The emergence of generative artificial intelligence tools and the rapid spread of synthetically generated content, commonly referred to as deepfakes, have created new challenges in the online space, including risks of misinformation, impersonation, manipulation of public discourse and other potential harms to users.

In light of these evolving risks, and following extensive public consultations and parliamentary deliberations, MeitY has proposed these amendments to enhance the due diligence obligations of intermediaries, particularly social media intermediaries (SMIs), significant social media intermediaries (SSMIs) and platforms that facilitate the creation or alteration of synthetic content. MeitY has further noted that the proposed amendments aim to establish a clear legal framework for labelling, traceability and accountability in respect of synthetically generated information.

SUMMARY OF KEY PROPOSED AMENDMENTS

The draft amendments seek to address the challenges posed by synthetically generated content through the following key measures:

(i) Definition of “Synthetically Generated Information” [Rule 2(1)(wa)]

A new definition has been introduced for information that is artificially or algorithmically created, modified or altered using computer resources in a manner that appears reasonably authentic.

(ii) Clarificatory Inclusion [Rule 2(1A)]

References to “information” under various provisions, including Rules 3(1)(b), 3(1)(d), 4(2) and 4(4), will now expressly include synthetically generated information, unless the context otherwise requires.

(iii) Protection for Removal of Harmful Synthetic Content [Proviso to Rule 3(1)(b)]

Intermediaries that remove or disable access to harmful synthetic content, whether through their own reasonable efforts or in response to user grievances, will not thereby lose their safe harbour under Section 79(2) of the Information Technology Act, 2000 (IT Act).

(iv) Due Diligence Obligations [New Rule 3(3)]

Intermediaries that provide computer resources enabling the creation or modification of synthetically generated information will be required to comply with enhanced due diligence standards. These obligations include:

  • Ensuring that all synthetically generated content is labelled or embedded with a permanent unique metadata tag or identifier.
  • Displaying or making such label or identifier audibly or visually in a clear and prominent manner, covering at least ten percent of the display surface area or, in the case of audio content, during the first ten percent of its duration.
  • Ensuring that the label or identifier allows immediate recognition of the content as synthetically generated.

Further, intermediaries will be prohibited from altering, concealing or removing such identifiers or labels.
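By way of illustration only, the ten percent thresholds above can be expressed as a simple computation. The sketch below is a hypothetical helper, not part of the draft rules: the function names, and the reading of "display surface area" as pixel area, are assumptions made for the example.

```python
# Illustrative sketch of the ten-percent labelling thresholds in draft
# Rule 3(3). The function names and the pixel-area interpretation of
# "display surface area" are assumptions, not part of the draft text.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area in pixels: at least 10% of the display surface."""
    return (width_px * height_px) // 10

def audio_label_window(duration_seconds: float) -> float:
    """Length of the opening segment (in seconds) that must carry the
    audible identifier: the first 10% of the content's duration."""
    return duration_seconds * 0.10

# Example: a 1080x1920 vertical video frame and a 60-second audio clip.
print(min_label_area(1080, 1920))   # 207360 pixels
print(audio_label_window(60.0))     # 6.0 seconds
```

A platform would still need to decide placement and legibility of the label; the draft only fixes the minimum coverage, not where or how the notice is rendered.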

(v) Obligations for Significant Social Media Intermediaries [New Rule 4(1A)]

Significant Social Media Intermediaries (SSMIs) will be required to obtain a declaration from users indicating whether the information being uploaded is synthetically generated. They must also adopt reasonable and proportionate technical measures to verify these declarations and ensure that all such content is clearly labelled or accompanied by a visible notice identifying it as synthetically generated information.
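As a rough illustration of the declaration-and-label workflow, the sketch below assembles a metadata record flagging an upload as synthetically generated. The field names and schema are assumptions for the example; the draft rules prescribe obligations, not any concrete data format.

```python
import json

def build_synthetic_content_record(user_declared: bool, platform_verified: bool) -> str:
    """Assemble an illustrative metadata record for an upload.

    The schema here is hypothetical: the draft rules require a user
    declaration, reasonable technical verification, and a clear label,
    but do not mandate any particular record format.
    """
    record = {
        "user_declaration": user_declared,        # declaration obtained from the uploader
        "platform_verified": platform_verified,   # outcome of the platform's technical checks
        # A visible notice is needed if either the user or the platform
        # identifies the content as synthetically generated.
        "label_required": user_declared or platform_verified,
    }
    return json.dumps(record, sort_keys=True)

print(build_synthetic_content_record(user_declared=True, platform_verified=False))
```

The key design point the rule implies is that verification is independent of the declaration: content flagged by the platform's own checks must be labelled even if the uploader declared otherwise.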

IMPLICATIONS

The proposed amendments are expected to bring greater accountability and transparency in the digital ecosystem. They seek to establish clear responsibilities for intermediaries and SSMIs in managing synthetically generated information such as deepfake or AI-generated content. By mandating visible labelling, metadata traceability and transparency for all public-facing synthetic media, the amendments aim to ensure that users can easily identify such content.

In practical terms, these obligations are likely to impact how social media platforms and AI-driven applications operate. Platforms offering AI-based filters that modify facial features or create synthetic enhancements in photos and videos may now be required to label such outputs as synthetically generated. Similarly, short-video and photo-sharing platforms that enable users to apply generative effects, such as background replacement, voice cloning or face swaps, will need to embed metadata tags or visible notices to indicate algorithmic modification. SSMIs may be required to obtain user declarations, verify them through reasonable technical measures and ensure that all such content carries clear identifiers.

These measures enhance traceability and accountability while presenting new operational and compliance considerations, particularly for platforms relying on generative AI tools for entertainment, advertising or creative content. At the same time, the amendments protect intermediaries acting in good faith under Section 79(2) of the IT Act when addressing user grievances related to synthetic or manipulated content. Overall, they aim to empower users to distinguish authentic information from synthetic material and advance India’s vision of an open, safe, trusted and accountable Internet that continues to encourage free expression and innovation.

MeitY has invited all stakeholders to provide feedback and comments on the draft amendments. Inputs, organised rule-wise, may be submitted by email to itrules.consultation@meity.gov.in in MS Word or PDF format by 6 November 2025.