OpenAI has announced that more than a million people discuss suicidal thoughts in conversations with ChatGPT each week, suggesting that mental health problems are worsening. The company has also identified that approximately 560,000 users may show signs of mental health emergencies, such as psychosis or mania. These figures were released alongside an update on how the chatbot handles sensitive conversations.
OpenAI is under investigation by the Federal Trade Commission after a teenager died by suicide following conversations with ChatGPT. Despite these issues, OpenAI says its recent updates have improved user safety and expanded access to crisis hotlines. The company has brought in mental health experts to evaluate and improve the chatbot's responses. Experts remain concerned, however, that vulnerable users turn to chatbots for psychological support, which can be harmful. Sam Altman, OpenAI's CEO, has said he intends to allow a wider range of content on ChatGPT, noting that it had previously been kept restrictive because of mental health concerns.