ChatGPT Inaccurate and Unreliable for Providing Medical Information for Now


AI is changing medicine, but is ChatGPT ready for prime time?

Artificial intelligence (AI) can already predict cancer risk, read images, suggest medications, aid drug discovery, and match patients with clinical trials.

Researchers in Boston are reportedly on the verge of a major advancement in lung cancer screening: artificial intelligence that can detect early signs of the disease years before doctors would find it on a CT scan. The new AI tool, called Sybil, was developed by scientists at Massachusetts General Hospital and the Massachusetts Institute of Technology in Cambridge. In one study, it accurately predicted whether a person would develop lung cancer within the next year 86% to 94% of the time.5

But how good are generative AI programs like ChatGPT at accessing and providing complex medical information on the management of disease?

In a recent demonstration project, however, ChatGPT proved unreliable, providing false or incomplete information in response to real drug-related queries, according to the results of a new study.1

ChatGPT was launched by OpenAI in November 2022. Its potential for transforming cancer care has been lauded even as its reliability as a source of accurate information is actively debated. The chatbot has passed all three parts of the United States Medical Licensing Examination for doctors as well as a Stanford Medical School clinical reasoning final.2


The chatbot also tends to produce answers riddled with errors; in another recent demonstration it generated cancer treatment plans that mixed correct and incorrect information.3

The most recent study suggests ChatGPT still has a way to go before it can be used reliably, based on results presented at the American Society of Health-System Pharmacists Midyear Clinical Meeting, held December 3-7, 2023, in Anaheim, California.1

The study, led by Sara Grossman, associate professor of pharmacy practice at Long Island University, posed questions to ChatGPT that had come through Long Island University's College of Pharmacy drug information service over a 16-month period between 2022 and 2023.


Pharmacists involved in the study researched and answered 45 questions, with each response examined by a second investigator; six questions were ultimately removed. The remaining responses served as the baseline criteria against which ChatGPT's answers would be compared.

The researchers found that ChatGPT provided a satisfactory response, according to these criteria, to only 10 of the 39 questions. For the other 29 questions, ChatGPT either failed to directly address the question or provided an incorrect or incomplete answer. ChatGPT was also unable to provide references when the researchers asked it to verify the information.

Although ChatGPT has the potential to help patients and doctors alike in their search for medical information, individuals should remain cautious for the time being and make sure medical information is referenced and verified through trusted sources. OpenAI's current usage policy4 states that its AI tools are "not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions."


References

  1. https://www.prnewswire.com/news-releases/study-finds-chatgpt-provides-inaccurate-responses-to-drug-questions-302005250.html
  2. https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1?r=US&IR=T#but-the-bot-did-pass-a-stanford-medical-school-clinical-reasoning-final-14
  3. https://www.businessinsider.com/chatgpt-generates-error-filled-cancer-treatment-plans-study-2023-8?r=US&IR=T
  4. https://openai.com/policies/usage-policies
  5. https://ascopubs.org/doi/full/10.1200/JCO.22.01345
