Regulating the use of artificial intelligence in managed care pharmacy is a “Wild West” situation, with state boards of pharmacy facing significant information gaps regarding this rapidly evolving technology.
Due to a lack of technical expertise, boards often are forced to rely on presentations from vendors or the entities they regulate, said experts at AMCP 2025 in Houston.
Jeff Mesaros, PharmD, JD, MS, the president of the Mesaros Group, is a member of the Florida Board of Pharmacy and the immediate past president of the National Association of Boards of Pharmacy (NABP). “My job when I’m sitting at that table is to protect the patients of that state,” he said. “The challenge I face is that the regulated entities, such as the pharmacies, or the third-party vendors of technology platforms, such as AI, likely have more information about these technologies than I do.”
During his time as NABP president, Dr. Mesaros led an effort to establish a Research and Innovation Institute to serve as a forum to help close that information gap. “The institute will act as an independent, impartial connector, facilitator and/or research partner for member boards of pharmacy, regulated entities and third parties to share, study and evaluate technology and digital health concepts,” Dr. Mesaros explained. Some pilot programs have already been launched, he noted, including one that uses AI to assist with patient counseling and another that focuses on e-prescribing.
“We’re trying to continue to build up the institute to find these use cases and ensure the responsible integration of AI into pharmacy practice,” Dr. Mesaros said.
Ethical Challenges
AI poses multiple ethical challenges in pharmacy, said Linea Wilson, MBA, CHFP, a managing partner with the Talavay Consulting Group. “The first issue is human oversight. What tasks are you comfortable with allowing the AI to perform autonomously versus which things may still need human oversight, or as everyone says, ‘human in the loop’?” (See box for more details.)
The consequences of inadequate human oversight of AI are many. Ms. Wilson cited, as an example, the ongoing challenge of AI “hallucinations” and the potential for inaccurate or fabricated information to affect clinical decisions and patient safety. “For example, you want to be sure that your AI is using the correct drug names and reimbursement codes instead of making things up that sound like they’re correct.”
A recent review categorized AI medical hallucinations and assessed the risk they pose to patients (arXiv preprint arXiv:2503.05777). The review found that diagnostic prediction tasks had relatively low hallucination rates (0%-22%), whereas tasks requiring accurate extraction of factual details, such as interpreting laboratory data, had error rates approaching 25%.
In one case cited in the review, AI interpreted elevated globulin levels in a 37-year-old man presenting with a rash as indicative of a general increase in immunoglobulin production, when in fact the elevated globulin levels were the result of polyclonal B-cell activation from HIV. “These hallucinations frequently use domain-specific terms and appear to present coherent logic, which can make them difficult to recognize without expert scrutiny,” the study authors wrote.
More Pitfalls
AI pitfalls don’t end with hallucinations and other content errors. Two other potential concerns with the technology require vigilance:
Transparency. “If you’re purchasing AI from a third party, you want to be sure that it is transparent in what it’s doing—a ‘clear box’ as opposed to a ‘black box.’ Vendors should be open to showing you how it works, how it will function for your organization and how it will interact with your patients,” Ms. Wilson said. “Hidden bias is a related and major concern. How do you watch for bias and correct it if it happens? How do you ensure that the tool has access to the right type of data needed to be trained?”
Obtaining informed consent. Ms. Wilson described a recent visit with her own physician, who asked for informed consent before turning on her AI chart notes scribe. “When in the process, do you tell a human … that something is being done on their behalf using AI? We have some organizations that are [disclosing] everything, and others who are taking a ‘policy of silence’ on their use of AI.”
One of the biggest unknowns—and an area where there is likely to be a lot of movement in the coming year—is state-level efforts to govern AI. “The AMCP is tracking 60 bills in 23 states that would regulate the use of AI in managed care, many of which are focused on AI in utilization management, such as prior authorization,” said Adam Colborn, JD, AMCP’s associate vice president of congressional affairs. “Legislation has also been introduced in Congress that covers issues such as allowing AI to prescribe medicines and directing HHS [the Department of Health and Human Services] to develop a strategy for dealing with AI threats to patient information.”
On June 22, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act. Among other stipulations, the comprehensive bill requires healthcare providers who use AI in their practices to disclose that use to patients.
Several states have also introduced legislation that would govern the use of AI in prior authorization:
- Connecticut Senate Bill 10 would bar payors from using AI to make clinical decisions in place of review by a clinical peer.
- Nevada Senate Bill 128 would prohibit the use of AI to deny, modify or reduce care included in a prior authorization request, but would allow insurers to employ it for automatic approvals.
- A separate Texas bill, Senate Bill 815, would prohibit utilization reviewers from using AI as the sole basis to deny or delay care, although it could still be used for administrative support or fraud detection. (See sidebar for additional legal considerations.)
The only other such measure signed into law to date, Nebraska Legislative Bill 77, bans the use of AI as the sole basis for denying, delaying or modifying healthcare services and requires utilization reviewers to disclose any use of AI algorithms.
Dr. Mesaros reported consulting relationships with Amazon, Cigna and CVS Health. None of the other sources reported any relevant financial disclosures.
This article is from the September 2025 print issue.


