ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a “Trusted Contact” will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot.
“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI said in its announcement. “It offers another layer of support alongside the localized helplines already available in ChatGPT.”
The Trusted Contact feature is opt-in. Any adult ChatGPT user can enable it by adding contact details for a fellow adult (18+ globally or 19+ in South Korea) in their ChatGPT account settings. The Trusted Contact must accept the invitation within a week of receiving the request. Users can remove or edit their chosen contact in the settings, and the Trusted Contact can also choose to remove themselves at any time.
OpenAI says that the notification is “intentionally limited” and will not share chat details or transcripts with the Trusted Contact. If OpenAI’s automated systems detect that a user is talking about harming themselves, ChatGPT will then encourage the user to reach out to their Trusted Contact for help, and let them know the contact may be notified. A “small team of specially trained people” will then review the situation, according to OpenAI, and ChatGPT will send a brief email, text message, or in-app ChatGPT notification to the Trusted Contact if the conversation is determined to indicate serious safety concerns.
This builds on the emergency contact feature that was introduced alongside ChatGPT’s parental controls in September, after a 16-year-old took his own life following months of confiding in ChatGPT. Meta has also introduced a similar feature that alerts parents if their kids “repeatedly” search for self-harm topics on Instagram.