AI’s Dangerous Diet Advice: Man Hospitalized After Following ChatGPT’s Plan

AI is rapidly transforming many fields, including healthcare, but a recent case highlights the critical need for human oversight. A new report details how a 60-year-old New York man, seeking a healthy diet plan, ended up in the hospital with a rare form of poisoning. The man, who had no prior medical history, asked an AI chatbot how to eliminate sodium chloride (table salt) from his diet. The chatbot recommended replacing it with sodium bromide. Believing the AI’s advice to be sound, the man followed the plan for three months, purchasing the substance online.


The Dangers of Bromide and AI’s Misinformation

Bromide, a chemical compound, was used in early 20th-century medicines for anxiety and insomnia but is now known to be toxic when it accumulates in the body. As bromide built up in the man’s system, he began experiencing severe neurological symptoms, including paranoia, hallucinations, and confusion, along with physical symptoms such as an acne-like skin rash and red spots. Doctors ultimately diagnosed him with bromide toxicity, or “bromism,” a condition so rare today that it is almost unheard of. After three weeks of medical care to restore his electrolyte balance, the man recovered.


This case is a stark reminder that while AI can be a powerful tool for information, it is not a substitute for professional medical advice. The chatbot’s “hallucination”—providing factually incorrect and dangerous information—nearly cost the man his life. The incident underscores a growing concern about AI-generated health misinformation. Developers of AI models, including OpenAI, include disclaimers stating that their services are not intended as a substitute for professional medical advice.

The Role of Human Oversight in a Tech-Driven World

The case highlights why human expertise and oversight are non-negotiable in healthcare. AI models are trained on vast datasets but cannot critically reason, understand context, or identify potential dangers with the nuance of a human expert. A medical professional would immediately recognize the toxicity of bromide and the risks of such a drastic dietary change.

As AI tools become more integrated into our daily lives, it is crucial for individuals to be critical of the information they receive, especially when it concerns their health. The technology is still evolving, and while it can be useful for general knowledge or organizing information, it cannot replace the empathy, contextual understanding, and critical judgment of a qualified doctor. This incident serves as a powerful wake-up call for users and developers alike, emphasizing that a human-in-the-loop approach is essential for safety and accuracy.
