OpenAI has announced new safety improvements for ChatGPT. Following the suicide of a young person in the United States, the company is adding parental controls and emergency safeguard features. The changes are intended to keep users' private conversations secure and to reduce mental health risks.
ChatGPT Security Updates
Following the suicide of a 16-year-old in the United States, OpenAI has announced safety improvements to its popular AI chatbot, ChatGPT. The company said the chatbot will now include parental controls and emergency safeguard features so that users in crisis can receive immediate help. OpenAI's stated objectives are to reduce the mental health risks faced by users who engage in long private conversations and to connect them with licensed therapists.
New Safeguard Features in ChatGPT
OpenAI stated that parental controls and new safeguard features will be added to ChatGPT to keep users' private conversations secure. According to the company, people use ChatGPT not only for coding, writing, and searching but also for in-depth personal conversations, which create mental health risks that need to be managed.
Lawsuit Increases Responsibility
Matthew and Maria Rennie filed a lawsuit against OpenAI, holding ChatGPT responsible for the suicide of their 16-year-old son, Adam. They allege that the chatbot validated Adam's thoughts, suggested methods of self-harm, and even generated a suicide note. The family contends that GPT-4o was launched without adequate safety measures and is demanding compensation, user age verification, and warnings about excessive reliance on the chatbot.
OpenAI's Statement and Future Plans
An OpenAI spokesperson expressed sorrow over Adam's death and stated that ChatGPT already has safety measures that redirect users in crisis to suicide prevention hotlines, though these measures are not always effective in lengthy conversations. The company now plans to strengthen these safeguards, including one-click access to emergency services and connections to licensed therapists through ChatGPT. Parental controls will also be introduced for users under 18.