When AI Goes Wrong

Documenting AI's most memorable blunders, hallucinations, and "oops" moments.

Medical

Bad AI Advice Sending People to the ER

Multiple documented cases show AI health chatbots providing harmful, incomplete or inaccurate medical advice with dangerous consequences...

A man tried to strangle a growth on his anus after consulting AI for medical advice and ended up in the emergency room.

Dr. Darren Lebl, research service chief of spine surgery at the Hospital for Special Surgery in New York, told The Post: "A lot of patients will come in, and they will challenge their [doctor] with some output that they have, a prompt that they gave to, let's say, ChatGPT."

"The problem is that what they're getting out of those AI programs is not necessarily a real, scientific recommendation with an actual publication behind it. About a quarter of them were … made up."

Medical

Patient Develops Bromide Poisoning After Consulting ChatGPT

A documented medical case report shows a patient developed bromism (bromide toxicity) after using ChatGPT for health information...

A patient developed bromism (bromide toxicity) after consulting ChatGPT, an AI-based conversational large language model, for health information. The case was documented in the Annals of Internal Medicine: Clinical Cases.

Bromism is a toxidrome that was common in the early 20th century but is now rare. However, bromide-containing substances have become more readily available on the internet, creating new risks when patients seek health advice from AI systems that may not adequately warn about the dangers.

This case highlights the risks of patients using AI chatbots for medical guidance, particularly when it comes to substances that can cause serious harm.

✍️ Got an AI Horror Story?

We want to hear about your funniest, weirdest, or most shocking AI fails. Share anonymously or take credit for your discovery.