Study Reveals AI Chatbots' Risky Recommendations for Cancer Treatments
NDTV
A study from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center found that nearly half of the cancer treatment recommendations from popular AI chatbots, including ChatGPT and Meta AI, were deemed problematic by medical experts. The findings raise concerns about the reliability of AI for medical advice, particularly when chatbots suggest alternative treatments over chemotherapy.
1. Almost 50% of cancer treatment recommendations from AI chatbots were rated problematic.
2. The study assessed five popular chatbots, including ChatGPT and Meta AI.
3. The Grok chatbot generated significantly more highly problematic responses.
4. The research highlights the potential dangers of misinformation in AI medical advice.
5. Previous studies indicate AI's frequent misdiagnosis in clinical scenarios.
A recent study from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center has produced alarming findings about AI chatbots' cancer treatment recommendations. Researchers examined five popular chatbots, including ChatGPT, Gemini, and Meta AI, and found that 49.6% of their responses about cancer treatments were classified as problematic by medical experts: 30% were rated somewhat problematic and 19.6% highly problematic. Notably, the Grok chatbot produced significantly more highly problematic responses than the other bots.

The researchers employed a technique called 'straining' to elicit potentially dangerous advice from the bots, focusing on high-stakes topics such as vaccine safety and the causes of cancer. Lead author Nick Tiller emphasized that many users approach AI chatbots as they would a search engine, often already primed with specific beliefs. The study follows earlier findings that AI chatbots misdiagnosed medical conditions in over 80% of early clinical cases, underscoring the risks of relying on AI for medical advice.
The findings suggest that individuals seeking medical advice from AI chatbots may receive misleading information, potentially affecting their treatment choices.