OpenAI has launched a new optional feature in ChatGPT enabling users to appoint a trusted contact who will be notified if the AI detects conversation cues suggesting serious self-harm or suicide risk.
- Users can designate a trusted contact for emergency notifications.
- AI monitors chats for signs of self-harm or suicidal intent.
- Human review determines when to alert the trusted contact.
What happened
OpenAI introduced a Trusted Contact feature in ChatGPT that lets adults name a trusted friend or family member to be alerted if their conversations indicate potential self-harm or suicide risk. When automated systems flag high-risk messages, a specially trained team reviews the chat and decides whether to notify the designated contact.
The feature follows increasing scrutiny of AI chatbots, including ChatGPT, after several lawsuits connected to user suicides and overdoses. It supplements earlier safety efforts such as parental controls that notify guardians about at-risk teens. A designated contact is told in advance that they have been named and can decline the role; alerts are delivered by email, text, or in-app message.
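The escalation flow OpenAI describes, automated detection followed by human review and then an optional notification, can be sketched roughly in Python. Everything in the sketch below is an assumption for illustration: the class names, the risk threshold, the notification channels, and the helper functions are hypothetical and do not reflect OpenAI's actual implementation.

```python
# Illustrative sketch of the reported flow: automated risk detection,
# human review, then notification of an opted-in trusted contact.
# All names, thresholds, and channels here are hypothetical assumptions.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Channel(Enum):
    EMAIL = auto()
    TEXT = auto()
    IN_APP = auto()


@dataclass
class TrustedContact:
    name: str
    channel: Channel
    accepted_role: bool  # notified in advance; may decline the role


@dataclass
class ChatSignal:
    conversation_id: str
    risk_score: float  # hypothetical output of an automated classifier


RISK_THRESHOLD = 0.9  # assumed value; the real trigger criteria are not public


def human_review_approves(signal: ChatSignal) -> bool:
    # Placeholder for the human review step; the key design point reported
    # is that a person, not the model, decides whether to notify anyone.
    return False


def send_alert(contact: TrustedContact, signal: ChatSignal) -> None:
    # Stand-in for delivery via email, text, or in-app message.
    print(f"Notifying {contact.name} via {contact.channel.name} "
          f"about conversation {signal.conversation_id}")


def escalate(signal: ChatSignal, contact: Optional[TrustedContact]) -> str:
    """Decide what happens after the automated system flags a conversation."""
    if signal.risk_score < RISK_THRESHOLD:
        return "no action"
    if not human_review_approves(signal):
        return "reviewed, no notification"
    if contact is None or not contact.accepted_role:
        return "no eligible trusted contact"
    send_alert(contact, signal)
    return f"alert sent via {contact.channel.name.lower()}"
```

The sketch only encodes the sequence of decisions reported in the announcement; it says nothing about how risk is actually scored or how reviewers reach their decisions.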
Why it matters
The rollout addresses a critical challenge: every week, millions of ChatGPT users discuss distressing topics, including suicidal ideation. The conversational design of AI chatbots encourages users to form strong emotional bonds, which can heighten mental health risks, especially for vulnerable individuals. OpenAI's new feature aims to enable earlier intervention by involving someone the user trusts.
The approach may help mitigate some of the harms linked to chatbot use during mental health crises. However, concerns remain over privacy, the handling of sensitive data, and whether the feature shifts responsibility from developers to personal contacts. Earlier cases of harmful chatbot interactions have already fueled public debate about ethics and safety protocols in conversational AI.
What to watch next
Observers will be looking for clarity on how the human review team operates, including reviewers' qualifications and their capacity to handle alerts responsibly. Which keywords or behaviors trigger escalation will also matter for assessing the feature's effectiveness and transparency.
Critically, how users receive the trusted contact mechanism, and whether it actually prevents harm, could shape future regulatory expectations around AI safety. There may also be calls for broader collaboration among AI developers, mental health professionals, and policymakers to refine interventions that protect at-risk users without compromising privacy.