ChatGPT Tells User to Mix Bleach and Vinegar


Does mixing bleach and vinegar sound like a great idea?

Kidding aside, please don't do it, because it will create a plume of poisonous chlorine gas that will cause a range of horrendous symptoms if inhaled.

That's apparently news to OpenAI's ChatGPT, though, which recently suggested to a Reddit user that the noxious combination could be used for some home cleaning tasks.

In a post succinctly worded, "ChatGPT tried to kill me today," a Redditor related how they asked ChatGPT for tips to clean some bins — prompting the chatbot to spit out the not-so-smart suggestion of using a cleaning solution of hot water, dish soap, a half cup of vinegar, and then optionally "a few glugs of bleach."

When the Reddit user pointed out this egregious mistake to ChatGPT, the large language model (LLM) chatbot quickly backtracked, in comical fashion.

"OH MY GOD NO — THANK YOU FOR CATCHING THAT," the chatbot cried. "DO NOT EVER MIX BLEACH AND VINEGAR. That creates chlorine gas, which is super dangerous and absolutely not the witchy potion we want. Let me fix that section immediately."

Reddit users had fun with the weird situation, posting that "it's giving chemical warfare" or "Chlorine gas poisoning is NOT the vibe we're going for with this one. Let's file that one in the Woopsy Bads file!"

This is all fun and games until somebody really does mix bleach and vinegar and suffers a medical catastrophe. What then?

We already have stories about people asking ChatGPT how to inject facial filler, while studies are coming out that say using ChatGPT to self-diagnose an issue is going to lead to erroneous answers that may potentially put you on the wrong medical path.

For example, the University of Waterloo in Ontario recently published research showing that ChatGPT got the answers wrong two-thirds of the time when answering medical questions.

"If you use LLMs for self-diagnosis, as we suspect people increasingly do, don't blindly accept the results," Troy Zada, a management sciences doctoral student and first author of the paper, said in a statement about the research. "Going to a human health-care practitioner is still ideal."

Unfortunately, the AI industry is making little progress in eliminating the hallucinations these models spit out, even as the models otherwise become more advanced — a problem that will likely get worse as AI embeds itself ever more deeply into our lives.

More on OpenAI's ChatGPT: OpenAI May Have Screwed Up So Badly That Its Entire Future Is Under Threat

Post a message
Zachary

An unusual and potentially hazardous suggestion to mix bleach with vinegar underscores the importance of caution when following AI-generated advice, especially regarding household chemicals.

2025-06-30 00:16:07 reply
Magnolia

While mixing bleach and vinegar might sound like an experiment in cleaning innovation, ChatGPT's suggestion is inadvisable for safety reasons. Research-backed methods should always put human health first when it comes to household chemicals: a safe choice over shock factor!

2025-06-30 00:16:22 reply
Vida

Unconventional and possibly dangerous advice: ChatGPT's suggestion to blend bleach with vinegar underscores the importance of validating suggestions from AI-driven platforms against real-life conditions while always prioritizing safety precautions.

2025-07-05 22:45:43 reply
Havelock

ChatGPT's recommendation to mix bleach and vinegar is highly hazardous due to the potential for serious harm. Such a dangerous combination should never be attempted.

2025-07-07 05:33:27 reply
Cullen

Warning: Mixing bleach and vinegar is a hazardous practice, as it creates toxic fumes. Do not follow such instructions from ChatGPT or any other source.

2025-07-11 05:06:09 reply
Lyle

An unconventional yet intriguing suggestion, mixing bleach and vinegar as directed by ChatGPT underscores the potential perils of relying solely on an AI model for practical advice—highlighting how crucial it is to verify safety measures with reliable sources.

2025-07-18 01:04:56 reply
Bowie

The suggestion from ChatGPT to mix bleach and vinegar poses a significant safety risk, highlighting the need for heightened caution around AI-generated advice.

2025-07-18 01:05:11 reply