The Delhi High Court has moved to safeguard Congress MP Shashi Tharoor’s personality rights, directing platform X to remove a deepfake video that falsely showed him praising Pakistan and restraining any further unauthorized use of his identity across digital media.
- X ordered to take down deepfake video of Shashi Tharoor
- Misuse of Tharoor’s likeness and voice restrained across all media
- Meta instructed to keep certain Instagram reels inaccessible
What happened
The Delhi High Court, acting on a lawsuit filed by Congress MP Shashi Tharoor, issued an interim order requiring the social media platform X to remove a deepfake video that falsely depicted him praising Pakistan. The video was generated using artificial intelligence and formed part of a wider malicious campaign against the MP’s reputation.
The court also imposed broad restrictions on the unauthorized use of Tharoor’s name, image, voice, and speaking style, barring the creation and dissemination of manipulated audio-visual content that misrepresents him for commercial, political, or malicious purposes. Meta was additionally directed to keep inaccessible certain Instagram reels identified as violating Tharoor’s personality rights.
Why it matters
This ruling underscores the judicial recognition of personality and publicity rights under Indian constitutional law, highlighting a growing focus on protecting public figures from synthetic media manipulation. By acknowledging the exclusive control Tharoor holds over his personal attributes, the court seeks to prevent reputational harm caused by AI-generated misinformation.
With regional elections ongoing in Kerala, where Tharoor is an active candidate, the case illustrates how deepfakes and disinformation can be weaponized to sway political outcomes and damage an individual’s standing, with potential consequences for India’s international image as well.
What to watch next
The court has empowered Tharoor to seek prompt removal of any further infringing synthetic media across platforms and has directed social media companies to disclose detailed information about the accounts responsible for such content. Observers should watch how platforms comply with these directions and how effective such legal measures prove in curbing deepfake misuse.
Long-term, this case may serve as a precedent for similar personality rights litigation in India concerning AI-generated content. It also sets a benchmark for regulatory responses to emerging digital threats, balancing free expression with protection against technological abuses in political and commercial arenas.