ChatGPT Tells User to Mix Bleach and Vinegar

London | Sci/Tech | June 27, 2025

Does mixing bleach and vinegar sound like a great idea?

Kidding aside, please don't do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled.

That's apparently news to OpenAI's ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.

In a post succinctly titled "ChatGPT tried to kill me today," a Redditor related how they asked ChatGPT for tips on cleaning some bins — prompting the chatbot to spit out the not-so-smart suggestion of a cleaning solution made of hot water, dish soap, a half cup of vinegar, and then optionally "a few glugs of bleach."

When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion.

"OH MY GOD NO — THANK YOU FOR CATCHING THAT," the chatbot cried. "DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."

Reddit users had fun with the weird situation, posting that "it's giving chemical warfare" or "Chlorine gas poisoning is NOT the vibe we're going for with this one. Let's file that one in the Woopsy Bads file!"

This is all fun and games until somebody really does mix bleach and vinegar and suffers a medical catastrophe. What then?

We already have stories about people asking ChatGPT how to inject facial filler, and studies are emerging which show that using ChatGPT to self-diagnose an issue often yields erroneous answers that could steer you down the wrong medical path.

For example, the University of Waterloo in Ontario recently published research showing that ChatGPT answered medical questions incorrectly two-thirds of the time.

"If you use LLMs for self-diagnosis, as we suspect people increasingly do, don’t blindly accept the results," said Troy Zada, a management sciences doctoral student and first author of the paper, said in a statementabout the research. "Going to a human health-care practitioner is still ideal."

Unfortunately, the AI industry is making little progress in eliminating the hallucinations these models spit out, even as the models otherwise become more advanced — a problem that will likely get worse as AI embeds itself ever more deeply into our lives.

More on OpenAI's ChatGPT: OpenAI May Have Screwed Up So Badly That Its Entire Future Is Under Threat
