The Indian government has unveiled a draft framework mandating the labelling of AI-generated content across digital and social media platforms to curb the spread of deepfakes and misinformation, amid rising concerns about online safety and election integrity.
Announced on Wednesday by the Ministry of Electronics and Information Technology (MeitY), the proposed rules require visible disclosure labels on all AI-generated visuals, covering at least 10% of the display area, while AI-generated audio must carry similar identification for the first 10% of its duration. The move seeks to ensure transparency, traceability, and accountability for synthetic content distributed through services run by major technology companies such as OpenAI, Meta, Google, and X (formerly Twitter).
Platforms will also need to obtain user declarations confirming whether uploaded content has been created using AI tools, and develop technical systems to verify authenticity. The framework mirrors recent regulatory efforts in the European Union and China, positioning India among the first nations to adopt quantifiable labelling standards for AI-generated media.
Dhruv Garg, founding partner of the Indian Governance and Policy Project, called the 10% visibility threshold “a global first,” emphasizing India’s proactive stance in addressing the risks posed by generative AI misuse—including impersonation, misinformation, and election manipulation.
The rules follow a surge in deepfake incidents, including cases before Delhi courts in which Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan allege that unauthorized AI-generated content featuring their likenesses was circulated on YouTube.
Public feedback on the draft will be accepted until November 6, after which the policy could become part of India’s broader Digital India Act. With nearly 1 billion internet users, India aims to balance AI innovation with digital responsibility, setting a global precedent in the governance of synthetic media.