India is moving to tighten regulatory control over powerful AI cybersecurity tools following reports that Anthropic’s Claude Mythos model can identify and exploit software vulnerabilities. Authorities are demanding that such systems be hosted within Indian territory to mitigate security and jurisdictional risks.

  • India demands local hosting for AI cybersecurity systems targeting critical sectors.
  • Anthropic’s Claude Mythos model is under government scrutiny over vulnerability risks.
  • Regulators launch task forces to strengthen AI-driven cyber threat defenses.

What happened

Indian government officials from the Finance Ministry, the Ministry of Electronics and Information Technology (MeitY), and CERT-In recently held discussions with Anthropic’s India team to examine the cybersecurity implications of its AI model, Claude Mythos. The model can reportedly detect and exploit software vulnerabilities in sensitive sectors such as banking, telecommunications, and power infrastructure, raising alarm within regulatory circles.

A major focal point of the talks was where the AI systems are physically hosted. Officials stressed that deploying advanced AI models on servers outside India poses regulatory, security, and jurisdictional challenges. The government is pushing for “sovereign AI access,” in which both the AI models and their processing infrastructure operate strictly within Indian jurisdiction or on government-approved sovereign cloud platforms.

Why it matters

The scrutiny of Anthropic’s Claude Mythos model comes amid growing concern that private companies are developing potent cybersecurity tools that could be weaponized without sufficient oversight. Indian regulators, including SEBI and CERT-In, have ramped up advisories and directives urging financial and other regulated entities to fortify their cyber defenses, specifically citing the threats posed by Mythos.

Additionally, the government's demand for sovereign hosting reflects broader national security priorities. Ensuring that AI cyber tools operate within Indian borders helps mitigate risks related to foreign jurisdiction, unauthorized access, and data privacy. This approach is essential to protecting critical sectors that form the backbone of India’s digital economy and infrastructure.

What to watch next

Indian officials continue to assess whether Claude Mythos represents a fundamental technological advance or whether its capabilities are overstated. Anthropic has yet to commit to hosting its models within India, which will be a key issue in future negotiations. The government’s stance could set important precedents for AI governance and sovereignty in the cybersecurity domain.

Moreover, the emergence of dedicated task forces such as cyber-suraksha.ai to evaluate AI-driven cyber threats suggests India is preparing for a longer-term regulatory framework addressing advanced AI systems. Observers should watch for new policy measures, potential mandates for AI infrastructure localization, and how Indian companies adapt to defend against AI-powered vulnerabilities they cannot fully access or control.

Source assisted: This briefing began from a discovered source item from MediaNama.