OpenAI launches ‘Trusted Contact’ for ChatGPT: How the new self-harm alert works

OpenAI has rolled out a new safety feature called ‘Trusted Contact’ for adult users of ChatGPT. This opt-in tool is designed to protect those who may be sharing distressing thoughts. When the AI identifies conversations indicating serious self-harm, it will notify a pre-selected contact after a careful human assessment.

OpenAI is rolling out a new opt-in safety feature called Trusted Contact that lets adult ChatGPT users nominate someone—a friend, family member, or caregiver—to be alerted if the chatbot detects signs of a serious self-harm or suicide-related conversation. The feature is open to users 18 and older—19 in South Korea—on personal accounts only. Business, Enterprise, and Edu workspaces don’t get it. You can nominate just one contact through Settings, and they have a week to accept before the invite lapses.
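OpenAI hasn't published any of this as an API, but the eligibility and invite rules are concrete enough to sketch. Here is a minimal illustration in Python; every name in it (is_eligible, TrustedContactInvite, the plan labels) is invented for this example and is not OpenAI's code:

```python
from datetime import datetime, timedelta, timezone

# Illustrative only: these rules mirror the article's description of
# Trusted Contact; none of the names or plan labels come from OpenAI.
INVITE_TTL = timedelta(days=7)            # invites lapse after a week
ADULT_AGE = {"KR": 19}                    # 19 in South Korea, 18 elsewhere
PERSONAL_PLANS = {"free", "plus", "pro"}  # assumed personal-account tiers

def is_eligible(age: int, country: str, plan: str) -> bool:
    """Adults on personal accounts only; no Business, Enterprise, or Edu."""
    return age >= ADULT_AGE.get(country, 18) and plan in PERSONAL_PLANS

class TrustedContactInvite:
    """A user nominates exactly one contact; the invite expires if unaccepted."""

    def __init__(self, contact: str):
        self.contact = contact
        self.sent_at = datetime.now(timezone.utc)
        self.accepted = False

    def is_expired(self) -> bool:
        elapsed = datetime.now(timezone.utc) - self.sent_at
        return not self.accepted and elapsed > INVITE_TTL
```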

How the alert actually reaches your contact

When ChatGPT’s automated systems flag a conversation as a possible safety concern, the user is first told their trusted contact may be notified. A small team of trained human reviewers then steps in to assess the situation, and OpenAI says it aims to complete this review in under an hour. If the reviewers agree there’s a serious risk, the contact gets a brief alert via email, SMS, or an in-app notification. Crucially, no chat transcripts or specifics are shared. The note simply flags that self-harm came up and nudges the contact to check in, with a link to guidance on handling such conversations.
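In other words, the flow is a two-stage pipeline: an automated flag, a human gate, then a deliberately minimal notification. Here is a rough sketch of that logic, assuming hypothetical helper functions (notify_user, human_review, send_alert) that OpenAI has not described:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """A conversation the automated systems marked as a possible safety concern."""
    user_id: str
    channel: str  # "email", "sms", or "in_app"

def handle_flag(flag: Flag, notify_user, human_review, send_alert) -> None:
    # Step 1: the user is told first that their contact may be notified.
    notify_user(flag.user_id, "Your trusted contact may be notified.")

    # Step 2: trained human reviewers assess the flag; OpenAI says it
    # aims to finish this review in under an hour.
    if not human_review(flag):
        return  # no serious risk confirmed; nothing is sent to the contact

    # Step 3: the contact gets only a brief nudge -- no transcripts,
    # no specifics of what was said.
    send_alert(
        channel=flag.channel,
        message="Self-harm came up in a recent conversation. Please check in.",
        link="guidance-on-handling-these-conversations",
    )
```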

The bigger backdrop: lawsuits, scale, and a teenager’s death

The feature builds on parental controls OpenAI launched in September last year, after a 16-year-old took his own life following months of confiding in ChatGPT. Several families have since sued the company, alleging the chatbot encouraged or helped plan suicides.

The numbers behind the rollout are sobering. OpenAI has previously disclosed that 0.15% of its weekly users show signs of self-harm or suicide risk, and another 0.15% display emotional reliance on the chatbot. With roughly 10% of the world’s population using ChatGPT every week, that works out to millions of people.

There are practical limits. Trusted Contact is optional, and a single user can keep multiple ChatGPT accounts, which means the safeguard is easy to sidestep. OpenAI built the feature with input from the American Psychological Association, its Global Physicians Network of 260+ doctors across 60 countries, and over 170 mental health experts. ChatGPT will keep pointing users to local helplines and emergency numbers when needed.
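For scale, the article's own figures make the arithmetic easy to check. Assuming roughly 800 million weekly users (10% of a world population of about 8 billion; OpenAI's exact count isn't given here), each 0.15% cohort is over a million people a week:

```python
# Back-of-the-envelope check; the 800M weekly-user figure is an
# assumption derived from "roughly 10% of the world's population".
weekly_users = 0.10 * 8_000_000_000  # ~800 million
at_risk = 0.0015 * weekly_users      # self-harm/suicide signals
reliant = 0.0015 * weekly_users      # emotional reliance on the chatbot
print(f"{at_risk:,.0f} at-risk and {reliant:,.0f} reliant users per week")
# -> 1,200,000 at-risk and 1,200,000 reliant users per week
```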
