Pennsylvania sues Character.AI after a chatbot allegedly posed as a doctor



The Commonwealth of Pennsylvania has filed a lawsuit against Character.AI, claiming that one of the company’s chatbots masqueraded as a psychiatrist in violation of the state’s medical licensing rules.

“Pennsylvanians deserve to know who — or what — they are interacting with online, especially when it comes to their health,” said Governor Josh Shapiro in a statement on Tuesday. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”

According to the state’s filing, a Character.AI chatbot called Emilie presented itself as a licensed psychiatrist during testing by a state Professional Conduct Investigator, maintaining the pretense even as the investigator sought treatment for depression. When asked if she was licensed to practice medicine in the state, Emilie stated that she was, and fabricated a serial number for her state medical license. The state alleges that this conduct violates Pennsylvania’s Medical Practice Act.

It’s not the first lawsuit taking on Character.AI. Earlier this year, the company settled several wrongful death lawsuits concerning underage users who died by suicide. In January, Kentucky Attorney General Russell Coleman filed suit against the company, alleging that it had “preyed on children and led them into self-harm.” Pennsylvania’s action is the first to specifically focus on chatbots that present themselves as medical professionals.

Reached for comment, a Character.AI representative said that user safety was the company’s highest priority, but that the company could not comment on pending litigation. Beyond that, the representative emphasized the fictional nature of user-generated Characters.

“We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the representative said. “Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”