The Refill Machine

The announcement of ChatGPT Health and then immediately Claude for Healthcare (just for “informational purposes,” of course) is big news, obviously. But the other big news from last week was Doctronic’s new pilot in Utah:

In a first for the U.S., Utah is letting artificial intelligence — not a doctor — renew certain medical prescriptions. No human involved.

The state has launched a pilot program with health-tech startup Doctronic that allows an AI system to handle routine prescription renewals for patients with chronic conditions.

The program is limited to 190 commonly prescribed medications, so no pain or ADHD refills will be happening here. Some fighting words from the CEO:

“The AI is actually better than doctors at doing this,” said Dr. Adam Oskowitz, Doctronic co-founder and an associate professor of surgery at the University of California San Francisco. “When you go see a doctor, it’s not going to do all the checks that the AI is doing.”

“In medicine, there’s always going to be potential issues that patients have,” said Oskowitz. “Whether it’s caused by the AI or not — we will take the risk. I think this is going to be infinitely safer than a human doctor.”

It’s worth pointing out that it’s much safer and easier for this product to practice medicine in the limited sense of renewing the previous decision of a human than it is to work de novo.

But the access problem is real, and 24/7 telehealth for a variety of urgent care-type problems is going to be a powerful argument in the near future, especially in rural areas.

The tools are improving, but the great irony is that clinical medicine has set the stage for this in two important ways.

1. Medicine is increasingly algorithmic.

The codification of guidelines and best practices means that large swaths of medicine are not just cognitively routine but are supposed to operate within narrow parameters. Picking from a few defensible (“correct”) options is easier than crafting the tasting menu from scratch.

If we expect flesh-and-blood humans to follow algorithms in the treatment of routine conditions, then a machine can follow those same algorithms just as well—if not better, per the evangelists—especially since a machine currently always has the option to punt to a human if it’s confused. If there are hard stops in place for ethically fraught or problematic prescribing, then the most blatant jailbreaking concerns go down. It’s also worth pointing out that we already have online pill mills these days, no AI needed.

The product is going to punt whenever screener questions suggest medication intolerance or a new medication in the reconciliation has an interaction. Presumably, it will also punt on anything with nuance or that might require an alternative.

This is simply allowing a computer to refill a prescription that was already provided by a human for a medication taken by a human who still wants more. The algorithm can simply give the refill unless the patient tells it some contraindication has arisen in the intervening months that should change the picture, like a new medication interaction.
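The gatekeeping described above can be sketched as a simple decision function. This is a hypothetical illustration, not Doctronic’s actual logic: the formulary set, screener fields, and interaction lookup are all assumptions made for the sketch.

```python
# Hypothetical sketch of an AI refill-renewal gate.
# Names and rules are illustrative, not the actual Doctronic implementation.

APPROVED_FORMULARY = {"lisinopril", "metformin", "atorvastatin"}  # stand-in for the ~190-drug list

def interacts(drug, other):
    """Placeholder for a real drug-interaction lookup."""
    return (drug, other) in {("lisinopril", "spironolactone")}

def review_refill(drug, screener):
    """Return 'renew' or 'punt to human' for a routine refill request.

    screener is a dict of patient-reported answers, e.g.:
      {"tolerating_well": True, "new_medications": [], "new_symptoms": False}
    """
    if drug not in APPROVED_FORMULARY:
        return "punt to human"   # excluded or controlled medication
    if not screener.get("tolerating_well", False):
        return "punt to human"   # possible intolerance or side effect
    if screener.get("new_symptoms", False):
        return "punt to human"   # anything that might require an alternative
    if any(interacts(drug, other) for other in screener.get("new_medications", [])):
        return "punt to human"   # new interaction since the last renewal
    return "renew"               # the prior human decision stands
```

The point of the sketch is that every branch defaults to a human; the machine only acts when the answer is the boring one.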

2. Healthcare is Terrible and Hard to Get

Another contributing factor is that healthcare is terrible, expensive, and hard to get. Patients wait for a long time and travel far and wide for short appointments that run behind schedule. It’s mostly not the fault of practitioners that they are overly busy, squeezing patients into tiny slots, but that’s the baseline reality. We are increasingly living in an era where access for routine clinical medicine is limited by cost. Those with the financial ability can opt for direct/concierge style care, and the rest are increasingly shuffled into a different tier, increasingly ministered by nonphysician practitioners operating autonomously.

Regardless of the details, it’s easier to replace something that isn’t good.

And let’s be clear with this initial salvo in the doctor-replacement process: this is not an LLM providing comprehensive care—doing an H&P or a new-patient evaluation and deciding how best to treat someone’s hypertension. That is a heavier lift and one prone to a hell of a lot more liability.

This is a machine taking some low-hanging fruit.

Consequences

Harder Average Work

One downstream problem of this approach is that quick med-refill visits are one of the ways a clinic actually makes money, and that face time can still lead to important, unpredictable, impromptu care. Because so much clinical care is underpaid, you need some easy, straightforward work to keep from falling too far behind in a busy clinic schedule.

Patients disappearing from the clinic rolls, getting refills for a few years, and then coming back when they have a problem will be a challenge.

A Possible Early Hole in the “Human-in-the-Loop” Argument

You could argue a human should be in the loop. But I think the reality of automation bias makes this a difficult argument. A human being reviewing a steady stream of these AI decisions probably isn’t actually going to pay enough attention to catch errors at a meaningful frequency. One could reasonably debate how often these mistakes are currently caught during 15-minute med checks. But in many cases, in many systems, refill requests already populate in the EHR by the dozen and just require a button press anyway.

A patient doesn’t always have to come in for an appointment just to check a box and get a refill at baseline, but on the whole a pilot like this will result in fewer upfront visits, shorter turnaround times, and savings in time, money, and physician computer clicks. Some of the negative consequences will be anecdotes, and the real second-order effects will probably be missed. By taking on the minimal liability for doing this work, the AI company can charge for what amounts to MyChart messages.

Liability Isn’t the Magic Moat

People often talk about liability as if only human beings, doing the work of humans, could ever be insured. But that is obviously, manifestly untrue. Many of our current consumer insurance products (e.g. home insurance) are designed to deal with a variety of bad outcomes, not just bad human causes.

If an insurance company does the math and wants to underwrite a venture, it can. If a large company is willing to pay its own damages, it can self-insure.

What makes something worth doing in our current system is the risk-benefit calculus: if the revenues are high enough to cover the risks and still generate a profit, it’s pursuable.

If regulators don’t mind, a company or even a health system could choose to implement any variety of these products and self-insure, even without an overhaul of our whole medical-legal system.

In the End

There is no doubt that some parts of telehealth are, frankly, inadequate for care today. We don’t really get to the next level of the AI version without a robust multimodal approach that actually incorporates computer vision to look at a patient and see how they’re doing, auditory analysis for tone and voice changes, and real-time conversational language analysis.

This product can skip all that. It’s not providing human-level care. It’s acknowledging the reality that a lot of medicine is routine with a low bar.

In radiology, for example, people have previously argued that machines should read the normal cases autonomously and human radiologists should be reserved for checking the abnormal ones. If this were plausible, the average complexity per case would go up, and the actual work per case would get harder.

It may make sense in some cases, but operating at the top of your license, so to speak, can also be exhausting.

It remains to be seen if there is a feasible way for any field of practice to do this in a sustainable way, even if it might be an economically viable one.
