In a landmark judgment, an Indian court has ruled that AI-generated videos and deepfakes that use a person's likeness without consent violate personality rights, a decisive step toward regulating synthetic media amid the rising misuse of generative AI. The ruling comes as courts worldwide grapple with the legal and ethical challenges posed by deepfakes, particularly those used for defamation, fraud, harassment, and unauthorized commercial exploitation.
The case involved AI-generated videos that digitally recreated an individual’s face, voice, and expressions without permission. The court observed that such content not only misleads viewers but also intrudes on a person’s inherent right to control the commercial and personal use of their identity. These rights, the court said, cover attributes such as name, likeness, voice, image, mannerisms, and other identifiable characteristics.
Emphasizing that consent is central to the protection of digital identity, the court warned that synthetic media can cause severe reputational harm, emotional distress, and financial loss. It noted that AI-generated replicas blur the line between truth and fabrication, leaving victims vulnerable to impersonation and manipulation, including misuse in political propaganda, fraudulent endorsements, and obscene deepfakes.
The judgment highlights the urgent need for stronger legal frameworks to regulate deepfake technology and safeguard individuals’ digital rights. It also underscores the responsibility of platforms to prevent the spread of synthetic content, implement detection mechanisms, and respond swiftly to takedown requests.
With deepfake incidents rising sharply, the ruling is expected to shape future cases and inform policy development, setting a precedent for recognizing and protecting personality rights in the age of AI-generated media.