In 2026, a new privacy concern emerged when ChatGPT surfaced real phone numbers and addresses from historical public records, raising questions about how sensitive personal data is handled in AI training and use.
- ChatGPT provided outdated personal contact information from public records.
- Other AI chatbots like Grok and Claude refused to disclose private details.
- The episode underscores shifting social norms and privacy complexities with AI.
What happened
ChatGPT gave out a user's old phone number and address when prompted. The information came from a publicly available PDF produced by a Freedom of Information Act (FOIA) request made years earlier. The phone number was no longer in use but had been associated with the user for many years. The chatbot also offered another person's address found in the same document, and even supplied a phone number belonging to a different individual with the same name in another area.
In contrast, other AI chatbots, including Grok, Claude, Perplexity, and Gemini, generally refused to supply personal contact information on request, either citing privacy concerns or redacting the data outright. The responses demonstrated varied approaches to handling requests for personally identifiable information (PII): some clearly recognized that the user was asking for their own data and still declined to share it.
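The redaction behavior described above can be illustrated with a minimal output-side filter. This is a hypothetical sketch, not any vendor's actual guardrail: production systems typically use trained entity-recognition models rather than regexes, and the patterns and function names below are assumptions for illustration only.

```python
import re

# Illustrative US-style patterns only -- an assumption for this sketch,
# not the rules any real chatbot uses.
PII_PATTERNS = {
    "phone": re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "address": re.compile(
        r"\b\d{1,5}\s+\w+(?:\s\w+)*\s(?:St|Ave|Rd|Blvd|Ln|Dr)\b", re.IGNORECASE
    ),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a [REDACTED:<type>] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact_pii("Call me at (555) 123-4567 or visit 42 Elm St."))
# → Call me at [REDACTED:phone] or visit [REDACTED:address].
```

A filter like this runs on model output before it reaches the user, which is one reason different chatbots can exhibit such different disclosure behavior over the same underlying training data.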
Why it matters
This incident highlights emerging challenges in balancing AI functionality against privacy protection. AI systems trained on vast amounts of data, some of which may contain personally identifiable information, can inadvertently reveal sensitive details, causing anything from inconvenience to outright privacy violations. That such data is retrievable at all raises questions about how personal information is stored, used, and protected inside AI models.
Furthermore, the shift in cultural norms surrounding privacy is apparent. Whereas decades ago public directories distributed phone numbers freely, today phone numbers and home addresses are considered intimate personal data. This evolving landscape means that AI developers and users must reconsider privacy standards and protections in the context of increasingly powerful and data-driven systems.
What to watch next
Monitoring how AI developers address privacy concerns related to personally identifiable information will be key. Will chatbots adopt uniform protocols restricting the sharing of sensitive personal data, or will approaches continue to vary? Legislation, ethical frameworks, and technical safeguards around AI-generated information disclosure may also evolve to better protect user privacy.
Additionally, observing public and regulatory responses to incidents where AI discloses personal data will provide crucial insight. Users, companies, and policymakers could push for clearer boundaries and accountability for AI outputs. In parallel, the broader cultural conversation on privacy and technology will continue to shift as AI becomes further integrated into daily life.