A 60-year-old man developed bromism after replacing table salt (sodium chloride) with sodium bromide on the basis of incorrect advice from ChatGPT. He was hospitalized three months later with severe symptoms.
Amid the growing reliance on artificial intelligence (AI) technology, a recent case has raised serious concerns about its use for health advice. A newly published case study details how a 60-year-old man became severely ill and had to be hospitalized after following incorrect health advice from ChatGPT.
How the Case Began
According to the report, the patient had no prior medical or mental health history. He wanted to change his diet as part of a healthier lifestyle and consulted ChatGPT for health information. The patient claims that the chatbot advised him to take sodium bromide as an alternative to common table salt (sodium chloride).
This advice reportedly came without any clear warning about potential health risks. As a result, the man consumed sodium bromide for three months, allowing bromide to accumulate in his body to dangerous levels.
Symptoms and Hospitalization
Three months later, the patient began to exhibit severe symptoms:
- Constant thirst but fear of drinking water
- Paranoia and hallucinations
- Insomnia and extreme fatigue
- Lack of muscle coordination (ataxia)
- Skin changes such as acne and small red spots on the skin (cherry angiomas)
As his condition worsened, he was admitted to the emergency room, where initial investigations suggested poisoning. After consulting the poison control service and running further tests, doctors confirmed that the patient had bromism, a toxic syndrome caused by prolonged consumption of bromide-containing compounds.
Key Highlights of the Case Study
This case was published in the journal Annals of Internal Medicine: Clinical Cases under the title "A Case of Bromism Influenced by the Use of Artificial Intelligence."
- The researchers clarified that they did not have a record of the actual conversation between ChatGPT and the patient.
- Although the chatbot reportedly mentioned that context matters, it did not provide a clear health warning that the substance could be dangerous if consumed over a prolonged period.
- The researchers note that, unlike a medical professional, ChatGPT did not ask why the user was seeking this information.
OpenAI's Response
Commenting on this case, an OpenAI spokesperson said that their product's Terms of Use clearly state that ChatGPT's responses should not be considered the 'sole source of facts' or an 'alternative to professional advice.' The company reiterated that the AI chatbot is only intended to provide general information, not to make medical decisions.
Experts' Warning
Researchers and health experts have warned that:
- AI chatbots can generate scientific inaccuracies: the technology can sometimes provide incomplete, out-of-context, or incorrect information.
- Health risk analysis is limited: AI cannot assess complex medical risks.
- Misinformation carries real danger: AI advice followed without expert supervision can cause serious health harm.
Treatment and Improvement
After being admitted to the hospital, the patient was promptly started on treatment to reduce his bromide levels. Following three weeks of intensive treatment, his condition improved and his psychiatric symptoms largely resolved.