Why in the News
The Ministry of Electronics and Information Technology (MeitY) has proposed amendments to
the IT Rules, 2021, which would require creators and platforms to label AI-generated content. The
move aims to tackle misinformation, deepfakes, and other risks associated with synthetic media.
Rise of AI-Generated Content in India
India has seen a rapid rise in the use of artificial intelligence to generate content across social
media, entertainment, and digital advertising. While AI enables creative and engaging media, it
also facilitates the creation of highly realistic synthetic content, including deepfakes—manipulated
images, audio, or videos that misrepresent reality.
Concerns about such content escalated in 2023 after a digitally altered video of a popular actor
went viral. Prime Minister Narendra Modi described deepfakes as a new “crisis,” highlighting the
need for regulatory oversight.
Key Features of the Proposed Amendments
Mandatory Self-Declaration
All content creators must declare if their posts—text, audio, video, or images—are
AI-generated. Platforms like YouTube, Instagram, and X (formerly Twitter) must enforce this.
Dual Labelling Requirement
Content-level label: An embedded watermark or visible marker within the media itself,
covering at least 10% of the visual display area or, for audio, the initial 10% of its duration.
Platform-level label: Displayed wherever the content appears online.
Platform Accountability
Social media companies must deploy automated tools and other technical measures to detect
and label AI-generated content when users fail to disclose it.
Definition of Synthetic Content
Any content artificially created, modified, or generated using computational tools in a manner that appears authentic.
Consequences of Non-Compliance
Platforms that fail to label synthetic content may lose their safe-harbour immunity under Section 79 of the IT Act, 2000.
Metadata Requirement
AI-generated material must carry a permanent, unique metadata identifier to ensure traceability and accountability.
Scope of Application
The rules cover social platforms and AI content creation tools like OpenAI’s Sora or Google’s Gemini, requiring built-in labelling
mechanisms.
Rationale
The move responds to rising public concern over misuse of AI-generated content for reputational harm, financial fraud, and political manipulation.
By requiring transparency, the government intends to empower users to differentiate between real and synthetic content.
Earlier, India relied on general IT Act provisions against impersonation and fraud. However, advances in AI necessitate targeted regulation.
International Context
- China (2025): Requires visual labels and hidden watermarks on AI-generated content.
- European Union: AI Act mandates disclosure when users interact with AI.
- United States: Developing federal guidelines; tech companies follow voluntary watermarking.
India’s draft would make it one of the earliest adopters of legally binding AI-labelling regulations.
Challenges and Future Outlook
Implementing these rules may prove complex due to:
- The difficulty of detecting AI-generated content in real time across formats and languages.
- Compliance burdens on small creators and startups.
The government has invited public feedback until November 6, 2025, signaling intent to refine the framework. If implemented well, India could set a global benchmark for responsible AI governance.
Prelims Questions (MCQs)
Which ministry has proposed mandatory labelling of AI-generated content in India?
A) Ministry of Information and Broadcasting
B) Ministry of Electronics and Information Technology ✅
C) Ministry of Science and Technology
D) NITI Aayog
Under the proposed IT Rules amendment, what is the minimum visual coverage required for an AI content watermark?
A) 5%
B) 10% ✅
C) 15%
D) 20%
What does Section 79 of the IT Act, 2000, provide for intermediaries?
A) Tax exemption
B) Immunity from liability for third-party content ✅
C) Mandatory labelling of content
D) Licensing for AI tools
Which of the following AI platforms/tools would fall under the scope of the proposed rules?
A) Google Gemini ✅
B) Wikipedia
C) Email servers
D) Blockchain networks
Deepfakes are primarily a concern because they:
A) Violate copyright laws only
B) Blur the line between real and synthetic content ✅
C) Improve AI capabilities
D) Are limited to the entertainment industry