
OpenAI ChatGPT Ceases Specific Medical, Legal, and Financial Advice After Harm Incidents

Last Updated: 6 hours ago

OpenAI has updated ChatGPT's terms of use. The AI tool will no longer provide specific advice on sensitive topics such as medical, legal, and financial matters. The company made this decision after incidents in which users were harmed by incorrect AI suggestions. ChatGPT will now provide only general information and recommend seeking advice from experts.

New ChatGPT rules: OpenAI has made significant changes to ChatGPT, effective October 29, under which it will no longer provide specific advice on medical, legal, and financial matters. The decision was made to ensure user safety, after incidents of people being harmed by relying on AI advice emerged in the United States and other regions. Under the new policy, the chatbot will provide only general information and, where needed, direct users to seek advice from doctors, lawyers, or financial experts. The step is seen as significant for the responsible use of AI and for mitigating risks.

How will ChatGPT work now?

Under the new rules, ChatGPT will not provide information about medicines and their dosages, strategies for lawsuits, or investment advice. It will offer only general information and a basic understanding of processes, and it will recommend consulting experts. In other words, it can no longer be treated as a substitute for a doctor, lawyer, or financial advisor.

OpenAI has stated that many users make decisions by relying entirely on AI, which is dangerous. The company has clarified that ChatGPT's purpose is to assist with information and education, not to offer expert advice on critical matters.

Why was this decision made?

In recent months, several incidents have come to light in which people harmed themselves by relying on ChatGPT's advice. A 60-year-old individual started taking sodium bromide on the chatbot's suggestion, which worsened their condition. In another case in the US, a user asked the AI about throat problems and was told that cancer was unlikely; the patient was later diagnosed with stage four cancer.

Following an increase in such incidents, OpenAI has revised its policies to mitigate risks and promote the responsible use of AI. The company believes that incorrect advice on sensitive matters can lead to serious consequences, making prevention imperative.

How will this affect users?

From now on, users will be able to use ChatGPT for studies, research, and general information. Individuals who were using it for tasks such as medical advice, legal documents, or investment strategies will now have to consult experts. This change will keep AI usage safer and within defined limits.

Technology experts believe this step is crucial to prevent the uncontrolled use of AI and to protect people from making poor decisions. However, some users may find the new limitations restrictive.
