News

New Deepfake Regulations Coming Soon in India, Says IT Minister Ashwini Vaishnaw

India is preparing to launch new techno-legal regulations to tackle the growing threat of deepfakes. Union IT Minister Ashwini Vaishnaw, speaking at the NDTV World Summit 2025 in New Delhi on October 18, announced that the government would “very soon” introduce enforceable laws to address AI-generated misinformation.


The announcement marks a shift from earlier advisories—notably the March 2024 guidelines under the IT Rules 2021—which urged platforms to label synthetic content. The upcoming framework aims to add legal teeth and move beyond voluntary compliance.


Vaishnaw warned of the dual nature of AI: while it enables creativity and novelty, it can also be weaponized to “harm society in ways humans have never seen before.” India, he said, stands at a crossroads, needing to balance AI-driven growth with strong safeguards.


The Minister’s statement comes amid India’s aggressive push to become an AI superpower. The government is supporting six large language models, including two with 120 billion parameters, designed to avoid biases seen in Western models.


Complementing this is the rollout of semiconductor manufacturing, with two domestic units now in operation. Global interest is surging—most notably, Google's $15 billion AI investment in Visakhapatnam, aimed at building a major innovation hub.


Despite the innovation boom, regulation lags behind. India’s existing legal framework is not fully equipped to handle the complex risks of deepfakes—such as impersonation, disinformation, and erosion of trust. “Your face and voice should not be used in a harmful way,” Vaishnaw emphasized.


The upcoming techno-legal framework will combine technical tools—like watermarking, detection algorithms, and AI-based content flagging—with legal mechanisms. Vaishnaw noted: “The world of AI cannot be regulated simply by passing a law.”
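Watermarking, one of the technical tools mentioned above, can amount to embedding a machine-readable provenance tag invisibly in the media itself so that downstream platforms can verify origin. A minimal sketch of the idea (hypothetical, not any official Indian or industry scheme) using least-significant-bit embedding in raw pixel bytes:

```python
# Illustrative only: hide a provenance tag (e.g. "AI-GEN") in the
# least-significant bit of each pixel byte. Real-world schemes are far
# more robust (survive compression, cropping, etc.); names are made up.

def embed_watermark(pixels: bytearray, tag: bytes) -> bytearray:
    """Return a copy of `pixels` with `tag` hidden in the LSBs."""
    # Unpack the tag into individual bits, most-significant bit first.
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite the lowest bit
    return out

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Recover `length` bytes previously hidden by embed_watermark."""
    tag = bytearray()
    for i in range(length):
        byte = 0
        for bit_idx in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_idx] & 1)
        tag.append(byte)
    return bytes(tag)

# Example: mark synthetic media with a 6-byte provenance tag.
image = bytearray(range(256))             # stand-in for raw pixel data
marked = embed_watermark(image, b"AI-GEN")
print(extract_watermark(marked, 6))       # -> b'AI-GEN'
```

The change to each byte is at most one intensity level, which is why such marks are imperceptible; detection algorithms on the platform side would look for exactly this kind of embedded signal.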


Key targets of the law will be malicious deepfakes—those used for propaganda, fraud, or defamation. Harmless and creative uses will remain untouched. The law will also enshrine individual rights, empowering citizens to act if their likeness is misused.


Questions remain: What counts as a harmful deepfake? What verification must platforms perform? And how will freedom of expression be balanced against the need to prevent AI-fueled manipulation?


Dr. Deepak Kumar Sahu, Founder & CEO of FaceOff Technologies Pvt. Ltd., emphasized the company’s unwavering commitment to building AI-powered trust verification solutions.


“At FaceOff, we are shaping the future of trust detection with our cutting-edge Multimodal Fusion Platform, powered by the Adaptive Cognito Engine (ACE). Our Multi-AI Fusion Architecture enables advanced detection capabilities that help safeguard businesses against evolving AI and GenAI threats.”


Reinforcing its "Made in India, Made for the World" philosophy, Dr. Sahu highlighted that every aspect of FaceOff’s technology—from core AI models to final applications—is developed in-house at its innovation hubs in Delhi and Kolkata. This reflects the company’s strong focus on indigenous innovation and achieving complete technological self-reliance.


The coming regulations will demand readiness across the board. Tech firms must prepare for compliance and traceability, and platforms may be required to flag synthetic content in real time. Civil society has raised concerns about overreach. But for India, this moment could define its role as a global leader in AI governance as the deepfake era accelerates.