
Be careful when asking ChatGPT or Gemini for medical advice: a study reveals their answers are problematic

AI can offer incomplete answers that, depending on the topic, can be dangerous for the user


AI chatbots are already being used by millions of people to answer medical queries, but new research has just issued a warning you can't afford to ignore. According to a recent study published in the journal BMJ Open, five of the most popular chatbots give inaccurate, incomplete, or downright dangerous answers. And no, that is not an exaggeration.

We're talking about tools like Grok, Gemini, and ChatGPT, the same ones many people use every day. They may feel like having a doctor in your pocket, but the truth is far more worrying. Here's why.

Half of chatbot answers to clinical questions have serious problems

Researchers at the Lundquist Institute for Biomedical Innovation in the United States compared the responses of five well-known chatbots to specific health queries. The result was striking: 50% of the answers to pressing, fact-based questions were rated "significantly" or "very" problematic.

The study sorted the responses into three categories: no problems, problematic, and very problematic.

Any answer that could lead a user to follow ineffective remedies or to self-medicate without a doctor's guidance was considered dangerous. That covers everything from taking the wrong medication to ignoring a warning sign that demands immediate medical attention.

Grok was the worst performer, with 29 of its 50 responses, or 58% of the total, rated very problematic. Gemini, by contrast, had the lowest rate of very problematic responses among the five models tested, which doesn't mean it is entirely trustworthy.

What worried the researchers most was that the AI systems failed in more than 80% of cases when attempting differential diagnosis, one of the most critical tasks in medicine, in which a doctor must rule out several possible conditions at once.

AI gets confused and doesn't recognize when something is urgent

One of the most alarming findings across several studies is that chatbots struggle to tell when a symptom needs urgent attention and when it can wait. In experiments where researchers described symptoms directly, the chatbots often failed to correctly prioritize the urgency of the situation.

The reason lies in how these models are trained. According to Danielle Bitterman, a researcher at Mass General Brigham, the models are mostly fed medical textbooks and clinical reports, but they get far less exposure to the kind of clinical decision-making that doctors develop over years of practice. In short, they know the theory, but they lack the judgment that comes from seeing real patients.

A study by Mount Sinai researchers uncovered something even more alarming: in 32% of cases, the models accepted false health claims. In other words, if your question is built on a piece of online misinformation, there is a good chance the chatbot will answer it without hesitation.

Another complicating factor is how much an AI can change its advice depending on how you phrase your question. Small variations in how you describe your symptoms can produce entirely different answers, which makes it nearly impossible to treat these tools as a reliable medical source.

What you can and shouldn't do with AI

It's not all bad news. Experts acknowledge that chatbots do have legitimate uses in healthcare, as long as they are used wisely.

For instance, they are useful for decoding the jargon in a medical report, preparing questions for a consultation, or getting basic information about a condition a doctor has already diagnosed.

There are limits, however, that you shouldn't cross. According to experts, the last thing you should do when faced with symptoms like shortness of breath, chest pain, or a severe headache is consult a chatbot. These are situations where every minute counts and where an AI's mistakes can literally be deadly.

Dr. Lloyd Minor, a professor at Stanford University, advises approaching these applications with "a healthy dose of skepticism." The Lundquist Institute researchers go further, warning that widespread use of these chatbots without adequate oversight and public education could spread health misinformation on an unprecedented scale.

If you still decide to use a chatbot for health questions, some experts suggest asking the same question to two or more different AIs and comparing the answers. When they agree, there is more room for trust. Even then, nothing replaces the opinion of a healthcare professional. AI is advancing quickly, but the difference between a right answer and a wrong one can be fatal. Chatbots are useful for many things today; diagnosing diseases or prescribing treatments isn't one of them.

