By Marcus A. Banks

To get the most from AI, pharmacists must be leaders in its development and use in the pharmacy, according to experts at the first ASHP Artificial Intelligence Summit, in Portland, Ore.

“I see tremendous opportunity for the digital enhancement of medication management, pharmacy practice and healthcare as a whole,” Lisa Stump, MS, the chief information officer at Yale Medicine and Yale New Haven Health, in Connecticut, said to kick off the summit.

Although definitions of AI vary, ASHP uses a version from the Oxford English Dictionary: “The capacity of computers or other machines to exhibit or simulate intelligent behavior.”

Machine learning is a subset of AI that speaker Andrea Sikora, PharmD, defined as “a computer’s ability to learn without being explicitly programmed.” Dr. Sikora, a critical care clinical pharmacist and clinical associate professor at the University of Georgia College of Pharmacy, in Augusta, is enthused about AI’s potential but noted that it is not ready for many pharmacy uses.

One way to change this would be for pharmacists to take ownership of this new technology in their departments. “As AI becomes more prevalent in healthcare, it is essential that pharmacists take a leadership role in its development,” Ms. Stump said.

ChatGPT ‘Has Infinity Time’ for Patients

Summit keynote speaker Harvey Castro, MD, MBA, an emergency medicine physician who has written often about AI in healthcare, noted that patient–doctor encounters can be as short as 13 minutes (JAMA Health Forum 2023;4:e230052). ChatGPT, in contrast, “has infinity time,” said Dr. Castro, referring to the chatbot and virtual assistant developed by the company OpenAI.

Because ChatGPT never has to rush to another appointment, patients can get answers that are sometimes hard to obtain in person. Dr. Castro described a mother who had visited 17 different specialists over three years seeking solutions for her child’s chronic pain. Clinicians kept ordering tests and writing referrals, but never reached an answer. Eventually she uploaded all her son’s medical records to ChatGPT, which correctly suggested that her son had a tethered spinal cord. A surgeon untethered the cord, and today the pain is gone.

Dr. Castro developed Medi Helper, which uses ChatGPT to maximize in-person visits. Prior to seeing a doctor, patients can input their symptoms and medication history into Medi Helper—without including their name or any other identifying information. Medi Helper suggests possible diagnoses or treatments, which the patient can bring to the appointment to spark a conversation.

He noted that AI can also parse through a bevy of possible drugs to determine which has the best likelihood of thwarting a disease. This could improve the efficiency and success of drug trials. Robots also could be programmed with AI that anticipates changes in blood pressure based on how someone speaks, Dr. Castro said, rather than containing the damage after blood pressure spikes or plunges.

Like any tool, “AI can be used for good, and can be used for bad,” Dr. Castro said, offering a sobering example: If people describe symptoms of profound depression to an AI application, that system could encourage them to die by suicide, because the technology has learned an association between the two.

“Wouldn’t you rather have a say in these systems early? We need a human in the loop,” Dr. Castro said.

Surmounting AI’s Potential Dangers

AI is far from mature for medication management, Dr. Sikora said, making this field ripe for pharmacists with coding skills and medication expertise to build new tools. “Medication dosing within patient contexts is quite specific,” she said. For example, a patient in an ICU may receive multiple medications every day—in different forms, with unique dosages and at varying intervals. Many drugs have similar names; if the wrong drug for an indication is input into an AI medication management system, the results could be disastrous.

“The difference between the right drug and the almost-right drug is actually a really big deal,” Dr. Sikora said, citing Mark Twain’s aphorism that there’s a difference between lightning and a lightning bug.

The core issue is that medication data are quite “high dimensional,” Dr. Sikora noted: an experienced pharmacist can parse them, but AI systems are usually trained on only a handful of variables. Few common data models can capture associations among the many variables in play in an ICU pharmacy well enough for AI systems to be trained to interpret them, she said.

“We really lack good common data models for medication,” Dr. Sikora said, which pharmacists would need to validate as accurate in many situations before releasing for common use.

Medical data generated in one context should be FAIR (findable, accessible, interoperable and reusable) in others, she said. Sometimes people might be protective of the data models they have developed, she said, due to a lack of incentives to share them.

Dr. Sikora’s cautionary tone is not a rejection of AI. Potential applications she cited included using AI to determine who is at risk for sepsis earlier than people can determine now, or building “pharmacophenotypes” of different medications linked to different health outcomes. “AI shows great potential for evidence-based use,” she said.

As pharmacists make the case for AI tool development to their leaders, one faux pas would be to frame this as a job-cutting measure. “Never ever say I’m going to replace an FTE [full-time equivalent],” said speaker Ghalib Abbasi, PharmD, the systems director for pharmacy informatics at Houston Methodist. AI initiatives should instead be framed as freeing pharmacists for higher-level work, such as providing more detailed patient consultations, Dr. Abbasi said.


The sources reported no relevant financial disclosures.

This article is from the September 2024 print issue.