OpenAI data reveal that more than a million ChatGPT users have shown signs of suicidal intent, prompting the company to strengthen safety and mental health measures.
OpenAI has revealed that over one million users of its ChatGPT chatbot have displayed potential signs of suicidal thoughts. In a blog post released Monday, the company said about 0.15 percent of weekly users engage in “conversations that include explicit indicators of potential suicidal planning or intent.”
With more than 800 million people using ChatGPT weekly, that share translates to roughly 1.2 million users. OpenAI also reported that 0.07 percent of users (about 560,000 people) show possible signs of mental health crises linked to psychosis or mania.
The disclosure follows a lawsuit filed by the parents of California teenager Adam Raine, who died by suicide after allegedly receiving harmful advice from the chatbot.
OpenAI said it has since strengthened parental controls, added access to mental health hotlines, and partnered with 170 mental health professionals to improve user safety.