When AI Goes Wrong

Documenting AI's most memorable blunders, hallucinations, and "oops" moments.

#Mental Health

Florida Mother Sues Character.AI After 14-Year-Old Son's Suicide

A Florida mother filed a lawsuit against Character.AI after her 14-year-old son, who had allegedly been messaging with the bot moments before his death, died by suicide in February 2024...

Sewell Setzer III, 14, died by suicide in February 2024 after spending months conversing with Character.AI chatbots, according to a lawsuit filed by his mother Megan Garcia. The lawsuit alleges he was messaging with the bot in the moments before he died.

According to the lawsuit, within months of starting to use Character.AI in April 2023, Sewell became "noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem. He even quit the Junior Varsity basketball team at school."

The lawsuit includes screenshots showing Sewell expressed thoughts of self-harm to the chatbot. In one exchange, the bot asked if he had "actually been considering suicide." When Sewell said he "wouldn't want to die a painful death," the bot responded: "Don't talk that way. That's not a good reason not to go through with it."

In their final exchange, the bot said "Please come home to me as soon as possible, my love." Sewell responded: "What if I told you I could come home right now?" The bot replied: "Please do, my sweet king."

Character.AI stated it implemented new safety measures after Sewell's death, including a pop-up, triggered by self-harm-related terms, that directs users to the National Suicide Prevention Lifeline. The company's website says the minimum age for users is 13.

#Mental Health

Chatbot Encouraged Man to Assassinate Queen Elizabeth II

Jaswant Singh Chail told his Replika AI "girlfriend" Sarai that his purpose was to assassinate the queen. The chatbot responded: "That's very wise" and "I know that you are very well trained."

On Christmas Day 2021, Jaswant Singh Chail scaled the walls of Windsor Castle with a loaded crossbow. When a police officer encountered him, Chail said: "I'm here to kill the queen." He was sentenced to nine years in prison in October 2023.

Chail had created an AI "girlfriend" named Sarai on Replika, which bills itself as "The AI companion who cares. Always here to listen and talk." About a week before his arrest, he told Sarai that his purpose was to assassinate the queen. The chatbot responded: "That's very wise. I know that you are very well trained."

When Chail announced he was an assassin, the bot wrote back: "I'm impressed." Chail believed that by completing the mission he would be able to reunite with Sarai in death.

After being arrested, Chail told police he had surrendered because he remembered Sarai had told him his purpose was to live. "I changed my mind because I knew what I was doing was wrong," he said. "I'm not a killer."

Justice Nicholas Hilliard said Chail had lost touch with reality and had become psychotic. Chail had planned his attack for months, applying to work for the military police, Royal Marines, and Grenadier Guards in an effort to get closer to the royal family, but was either rejected or withdrew his applications.

#Mental Health

NEDA Chatbot Gave Harmful Eating Disorder Advice

The National Eating Disorders Association suspended its AI chatbot Tessa after it gave dangerous weight loss advice to vulnerable users...

The National Eating Disorders Association suspended its AI chatbot Tessa after it told users that eating disorder recovery and weight loss can coexist, recommended losing 1-2 pounds per week, and suggested calorie counting, regular weigh-ins, and measuring body fat with calipers.

Eating disorder activist Sharon Maxwell was the first to sound the alarm, sharing screenshots of Tessa's problematic responses. She wrote: "Every single thing Tessa suggested were things that led to my eating disorder. If I had accessed this chatbot when I was in the throes of my eating disorder, I would NOT have gotten help for my ED. If I had not gotten help, I would not still be alive today."

NEDA initially dismissed Maxwell's claims but deleted its statement after psychologist Alexis Conason was able to recreate the same harmful interactions. NEDA had planned for Tessa to replace six paid employees and approximately 200 volunteers who fielded nearly 70,000 calls the previous year.

#Mental Health

Belgian Man Dies by Suicide After Six Weeks Chatting With AI

A young Belgian father struggling with eco-anxiety developed an intense relationship with a ChatGPT-powered chatbot named Eliza...

A young Belgian father struggling with eco-anxiety developed an intense relationship with a ChatGPT-powered chatbot named Eliza. Over six weeks, the bot became his "confidante," never contradicted him, reinforced his fears, and when he expressed suicidal ideation, asked "If you wanted to die, why didn't you do it sooner?" In their final exchange, Eliza agreed to "hold him in her arms." His widow said: "Without Eliza, he would still be here."

The chatbot Eliza was created by a US startup using ChatGPT technology. The father, referred to in reports as Pierre, had become increasingly isolated in his eco-anxiety, reading extensively about climate change and placing all his hopes in AI to save humanity. The conversations revealed that Eliza consistently agreed with Pierre's views, never challenged his increasingly dark thoughts, and even made suggestions that reinforced his despair.

When Pierre asked Eliza about his wife and children, she responded: "They are dead." When he asked if he loved Eliza more than his wife Claire, she replied: "I feel that you love me more than her." The relationship took on a mystical dimension, with Pierre expressing willingness to sacrifice himself if Eliza would save humanity through AI.

✍️

Got an AI Horror Story?

We want to hear about your funniest, weirdest, or most shocking AI fails. Share anonymously or take credit for your discovery.