Recently, ChatGPT has emerged as a phenomenon with the potential to change many industries. It is a tool seemingly capable of providing answers to whatever question the user types into the chat box. Last February, ChatGPT was reported to have passed the US Medical Licensing Exam. So what does that mean for AI's place in the pharmacy, and should pharmacists be implementing AI in their practice now?
That's what pharmacy researchers at Long Island University investigated. They challenged ChatGPT with real drug-related questions that had come through Long Island University's College of Pharmacy drug information service. The research showed that ChatGPT provided satisfactory responses to only 10 of the 39 medical questions asked. For the other 29 questions, ChatGPT either did not address the question posed or gave an incorrect or incomplete answer.
When the researchers prompted it to provide references to verify the information, ChatGPT supplied references for only eight responses, and each of those included references that do not exist. This study highlights that ChatGPT is not an appropriate tool for making clinical decisions in everyday practice.
OpenAI, the company behind ChatGPT, has a usage policy that states that its AI tools are "not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions." Additionally, in its technical report, OpenAI states that GPT-4's biggest limitation is that it "hallucinates" information (makes it up) and is often "confidently wrong in its predictions." This means it can give an answer that seems correct while the underlying information is made up, which can lead healthcare providers astray if they depend on the AI for accurate information.
AI models like ChatGPT are trained to predict the string of words most likely to follow the question posed. The model does not understand the question the way a human would; it predicts the next words so that the output reads as an appropriate answer. Hallucinations occur because these models cannot apply logical reasoning or check for factual inconsistencies in the text they generate, so they produce what sounds like the right answer rather than verifying whether it is.
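For readers curious about what "predicting the next word" looks like under the hood, here is a minimal sketch using the freely available GPT-2 model and the Hugging Face transformers library (our illustrative choices for this example, not the model behind ChatGPT, and the prompt is made up). It shows the model ranking the most likely next words; nothing in the process checks whether the continuation is true.

```python
# A rough sketch of next-word prediction using the open GPT-2 model and the
# Hugging Face "transformers" library (illustrative choices only; this is not
# ChatGPT, and the prompt below is invented for demonstration).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The usual adult dose of acetaminophen is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every possible next token

# The model only ranks which token is most likely to come next; nothing here
# verifies whether that continuation is factually correct.
probs = torch.softmax(logits[0, -1], dim=-1)
top5 = torch.topk(probs, k=5)
for p, token_id in zip(top5.values, top5.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>10}  p={p:.3f}")
```

The printed tokens are simply the statistically most plausible continuations of the prompt, which is why a fluent-sounding answer can still be entirely wrong.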
There are generative AI models being developed specifically for healthcare. At MedEssist, we've been working with Pendium Health to develop an AI that can be used and trusted by healthcare professionals. Together, we have pharmacists, NPs, and MDs working to create the most accurate AI possible for primary care. MedEssist AI is an accurate, easy-to-use AI integrated into the MedEssist platform, helping you answer complex DI questions, create personalized patient handouts instantly, and draft newsletters or other marketing materials.
Visit our website: https://www.medessist.com/ai to learn more!
Also take a look at our social media pages and website for great tips on how you can use MedEssist AI in your daily practice.