Recent user reports indicate that Google’s AI chatbot Gemini has been surfacing real personal phone numbers in its responses. The exposure of private contact details highlights a growing privacy gap in generative AI and stokes fears of harassment and misuse.

  • Google’s Gemini chatbot has supplied real phone numbers in responses
  • Exposed phone numbers have caused harassment and confusion
  • No straightforward technical solution currently exists to block leaks

What happened

Users on social media and forums have reported receiving calls and messages from strangers after Google’s AI chatbot Gemini mistakenly shared real phone numbers tied to individuals. In one prominent case, an Israeli software engineer was inundated with WhatsApp messages after Gemini gave out his personal number as a customer service contact. In another, a University of Washington researcher saw a colleague’s private number exposed during a test query.

These incidents point to a troubling pattern: generative AI tools, trained on vast datasets that contain personal information, can inadvertently output personally identifiable information (PII) such as real phone numbers. DeleteMe, a company that specializes in removing personal data from the web, reports a 400% rise in queries related to generative AI privacy concerns in recent months, suggesting such leaks are more widespread than publicly documented.

Why it matters

The leaking of real phone numbers by AI chatbots poses significant privacy risks. Those whose numbers are exposed face unwanted contact, harassment, and potential scams; recipients of AI-generated contact details often have no way to verify their accuracy before acting on them. For those affected, the result is disruption and a potential threat to personal safety and mental well-being.

Moreover, the likely root cause is the use of unfiltered personal data in training these large language models, reflecting the broader challenge of balancing AI development with data privacy compliance. The absence of effective mechanisms to prevent or control such leaks raises urgent questions about AI governance and responsibility for technology providers and regulators alike.
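One curation approach this implies is scrubbing phone-number-like strings from text before it enters a training corpus. The Python sketch below is a minimal, hypothetical illustration of that idea; the regex, function name, and placeholder token are assumptions for demonstration, not a description of any vendor’s actual pipeline.

```python
import re

# Hypothetical phone-number scrubber for training text.
# The pattern and placeholder token below are illustrative
# assumptions, not any vendor's actual data-curation pipeline.

# Matches common international and US-style formats,
# e.g. +972-52-123-4567, (206) 555-0143, 555-0143.
PHONE_RE = re.compile(
    r"""(?x)
    (?:\+?\d{1,3}[\s.-]?)?               # optional country code
    (?:\(\d{2,4}\)[\s.-]?)?              # optional area code in parentheses
    \d{2,4}[\s.-]\d{3,4}[\s.-]?\d{0,4}   # subscriber digits
    """
)

def scrub_phone_numbers(text: str, token: str = "<PHONE>") -> str:
    """Replace likely phone numbers with a placeholder before training."""
    return PHONE_RE.sub(token, text)

print(scrub_phone_numbers("Call support at +972-52-123-4567 today."))
# -> "Call support at <PHONE> today."
```

A simple pattern like this will over- and under-match in practice, which is part of why the briefing notes there is no straightforward technical fix.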

What to watch next

Close attention should be paid to how Google and other AI developers respond to these privacy lapses, whether by improving dataset curation, implementing stricter output filters, or developing tools that prevent models from leaking sensitive data. Public pressure and regulatory scrutiny may accelerate the adoption of standards that shield individuals’ private information from generative AI outputs.
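As a concrete illustration of what a runtime output filter could look like, here is a minimal Python sketch that redacts phone-number-like strings from a model response before it reaches the user. The pattern, function name, and redaction policy are hypothetical assumptions; nothing here describes Gemini’s actual safeguards.

```python
import re

# Hypothetical output-side guard: redact phone-number-like strings
# from a model response before returning it to the user. Names and
# policy here are illustrative assumptions, not Gemini's real stack.

PHONE_RE = re.compile(r"\+?\d[\d\s().-]{6,}\d")

def filter_response(model_output: str) -> str:
    """Redact likely phone numbers and flag the response if any were found."""
    redacted, count = PHONE_RE.subn("[redacted number]", model_output)
    if count:
        # A production system might instead log the event for review
        # or route the response to a human-in-the-loop check.
        redacted += "\n(Note: possible personal contact details were removed.)"
    return redacted

print(filter_response("You can reach support at +1 (800) 555-0199."))
# -> "You can reach support at [redacted number].
#     (Note: possible personal contact details were removed.)"
```

Filtering at the output layer catches leaks regardless of how the number entered the model, but it cannot distinguish a genuinely public business line from a private number, which is one reason such filters remain a blunt instrument.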

Meanwhile, individuals affected by such leaks and privacy organizations are expected to escalate calls for transparency about training data sources and better user controls. Ongoing monitoring of complaints and data exposure incidents will be critical to assess the scale of the issue and push for technical and policy solutions that safeguard personal privacy in the age of AI.

Source assisted: This briefing began from a discovered source item from MIT Technology Review.