Ben White


Price Transparency and the True Cost of Quality Healthcare

05.19.21 // Medicine

When you read healthcare reviews online, so many of the 1-star reviews relate to prices: patients frustrated by high costs or surprised by high bills. It’s easy to think that price transparency rules will help. One key problem is that healthcare consumers are intermittently if not completely insulated from the true costs of their care due to the filter of commercial insurance. It’s hard to blame people for feeling that their doctor’s time is “worth” a $35 copay instead of the hundreds of dollars they really pay indirectly.

When my family moved from typical employer-provided health insurance to a high-deductible plan, I finally started seeing firsthand how much things really “cost,” and how ludicrous billing gamesmanship practices have become.

I’m a physician, and even I find it striking.

I recently received a bill for hundreds of dollars for an annual well-person visit that should have been covered at 100%. If you manage to complain about anything during the intake, you see, you also get billed for a problem visit at the same time.

Is that nuts? Well, yes, of course it is. But this is the world we live in and how institutions pay the bills.

Dr. Peter Ubel had an interesting article in The Atlantic back in 2013 called “How Price Transparency Could End Up Increasing Health-Care Costs” that holds up pretty well. His main thought experiment centers on imaging, which is an easy but sort of plus/minus example.

The same kind of consumer pressure rarely exerts a similar influence on the cost and quality of health-care goods. For starters, most patients have little inclination, or motivation, to shop for health-care bargains. Insurance companies pick up most of the tab for patients’ health-care. A patient who pays a $150 co-pay for an MRI (like I do with my insurance) won’t care whether the clinic she goes to charges the insurance company $400 or $800 for that MRI. The MRI is still going to cost the patient $150. Even patients responsible for 20 percent of the tab (a phenomenon called co-insurance) face a maximum bill of only $160 in this circumstance. That is not an inconsequential amount of money, but it is still not enough money to prompt most patients to shop around for less expensive alternatives, especially when most consumers don’t realize that the price of such services often varies significantly, with little discernible difference in quality.

To make matters worse, patients often don’t shop for health care in the kind of rationally defensible way that economic theory expects them to. According to neoclassical economics, when making purchasing decisions consumers independently weigh the costs of services from the quality of those same services. If toaster A is more expensive than toaster B, the consumer won’t buy A unless it is better than B in some way—unless it is more durable or has better features—and unless these improved features are worth the extra money.

While some patients shop around for imaging services, many stay within a larger system for all their care or go where their doctor tells them. A more meaningful scenario in a large metro would be to compare broad costs across multiple specialties/types of care across multiple health systems. Say, in Dallas, would you generally pay less at UT Southwestern, Health Texas, or Texas Health? Does that hold true for primary care and specialty care? Are there certain categories of chronic diseases that one network does better or worse with? What about labs and imaging?

Due to network effects, a consumer may not meaningfully be able to choose where to do every little thing, but rapidly comparing systems is perhaps not beyond reach. It would be nice to know, for example, which places are playing games to maximize insurance payouts at patients’ expense and which (if any) aren’t.

Sometimes, however, cost and quality are not perceived by consumers as being independent attributes. Instead, people assume the cost of a good or service tells them something about its quality. For instance, blind taste tests have shown that consumers rate the flavor of a $100 bottle of wine as being superior to that of a $10 bottle of wine, even when researchers have given people the exact same wines to drink. Other studies show that expensive pain pills reduce pain better than the same pills listed at a lower price. Price, then, leads to a placebo effect.

Such a placebo effect is no major concern in the context of wine tasting and pain pills (even if it suggests that consumers could save themselves some money if they didn’t hold this strange belief that higher cost means higher quality). But suppose your doctor asks you to get a spinal MRI to evaluate the cause of your back pain, and you decide to shop around for prices before getting the test. Would greater price transparency cause you to choose an MRI provider more rationally? Or would you instead mistakenly assume that higher price means higher quality? There is reason to worry that price transparency won’t lead consumers to make savvy decisions. It is too difficult for people to know which health-care provider offers the highest quality care.

If patients are not going to make savvy use of price information to choose higher quality, lower cost health-care, some health-care providers, like doctors and hospitals, will probably respond to price transparency by raising their prices.

And there’s the rub: is it a race to the bottom or a slow creep to the top? And if it’s both, how do we predict and influence the outcome? If the growth of debt-fluid corporate and private equity has taught us anything, it’s that competition is fickle, and it doesn’t take much for a dominant position to be abused.

Imagine you direct an MRI center in Massachusetts, and the state government requires you and your competitors to post prices for your services. You consequently find out that the MRI center around the corner from you charges $300 more than you do for their spinal MRIs, and that this increased price hasn’t hurt their business. Imagine, also, that you are convinced that your competitors don’t offer higher quality MRI scans than you do—your MRI machines are just as new and shiny as theirs; your radiologists and technicians are just as well trained. In that case, if patients are not going to be price-sensitive, you are going to raise your prices to match your competitor’s. Otherwise you are just leaving money on the table.

Quality in healthcare is a theoretically important metric but it is so, so poorly measured and understood. Customer satisfaction? Not so good. Outcomes? Highly influenced by patient selection. Healthcare is heterogeneous and complex.

Ultimately, the problem is complex and nuanced, but we should keep this in mind. Efforts to increase price transparency through state and federal law need to be carefully crafted and closely followed. Such laws should include research funding that would enable experts to evaluate how the law influences patient and provider behavior.

Also, whenever possible, price transparency should be accompanied by quality transparency. We need to provide consumers with information not only about the cost of their services but also about the quality of those services, so that they can trade off between the two when necessary. I recognize that this is a huge challenge. Measuring health care quality is no simple task. But if we are going to push for greater price transparency, we should also increase our efforts to determine the quality of health care offered by competing providers. Without such efforts, consumers will not know when, or whether, higher prices are justified.

It’s no surprise that optimizing for cost seems like a reasonable plan given how easy it is to compare versus how hard meaningful quality indicators are to measure.

But price selection in the absence of quality selection creates a perverse incentive for the cheapest lowest-quality-but-just-barely-permissible product.

 

Residency and the Craftsman Mentality

05.12.21 // Medicine, Reading

From Cal Newport’s excellent Deep Work: Rules for Focused Success in a Distracted World:

Whether you’re a writer, marketer, consultant, or lawyer: Your work is craft, and if you hone your ability and apply it with respect and care, then like the skilled wheelwright you can generate meaning in the daily efforts of your professional life.

You don’t need a rarified job; you need instead a rarified approach to your work.

Let’s add “physician” to Newport’s list.

One of the more disheartening aspects of medical school is the siloing of medical specialties such that different breeds of doctors appear to compete in the hospital and medical students come away with the idea that one specialty should spark passion in their hearts (and that they will be professionally unhappy if they then don’t match into that one specialty).

It doesn’t have to be this way.

The satisfaction of professional growth and a job well done can transcend specialty choice. If the results of the match weren’t what you wanted, apply yourself to developing a craftsman’s mentality. Get good at what you do, take pride in it, and passion can follow.

 

 

Explanations for the 2021 Official Step 1 Practice Questions

04.17.21 // Medicine

This year’s set was updated in February 2021 (PDF here).

The asterisks (*) signify a new question, of which there are only 2 (#24 and 53). The 2020 set explanations and pdf are available here; the comments on that post may be helpful if you have questions.

The less similar 2019 set is still available here for those looking for more free questions, and even older sets are all listed here. The 2019 and 2020 sets, for example, differed by 36 questions (in case you were curious).

 


Scheduling Slack

04.15.21 // Medicine, Reading

From Alan Weiss’s classic Getting Started in Consulting:

Medical consultants advise doctors never to schedule wall-to-wall appointments during the day, because inevitably there are emergencies, late patients, complications on routine exams, and so forth. These create a domino effect by day’s end, and some very unhappy scheduled patients. Instead, they advise some built-in slack time that can absorb the contingencies. If not needed, slack time provides valuable respite.

Ha.

I read this book years ago when I was a resident and came across this passage when reviewing my Kindle highlights the other day.

Perhaps there are consultants in real life operating as Dr. Weiss suggests, but this common-sense approach to sustainable medical practice is not one many large health systems employ.

In my wife’s old outpatient academic practice, lunchtime wasn’t respite. It was an overbook slot, and her schedule was so jam-packed that there were always patients clamoring to squeeze in.

In order to make that all work, the average doctor spends 1-2 hours charting at home per day.

Contrast that with her current solo practice where she has complete autonomy: her patients aren’t scheduled wall to wall, and she has time for the inevitable emergencies, hospitalizations, collateral phone calls, prior auths, and the other vagaries of modern medical practice.

I’m proud of the practice she’s built—during a pandemic no less!—but it’s crazy that even academic medicine has become so corporatized in its paradigm that it was easier to craft her own business in order to practice on anything approaching the terms that would best serve her patients and herself.

 

Attending

04.08.21 // Medicine, Reading

A few separate passages I’ve combined from Dr. Ronald Epstein’s Attending: Medicine, Mindfulness, and Humanity:

Altogether, I saw too much harshness, mindlessness, and inhumanity. Medical school was dominated by facts, pathways, and mechanisms; residency was about learning to diagnose, treat, and do procedures, framed by a pit-of-the-stomach dread that you might kill someone by missing something or not knowing enough.

Good doctors need to be self-aware to practice at their best; self-awareness needs to be in the moment, not just Monday-morning quarterbacking; and no one had a road map.

The great physician-teacher William Osler once said, “We miss more by not seeing than by not knowing.”

The fast pace of clinical practice—accelerated by electronic records—requires juggling multiple tasks seemingly simultaneously. Although commonly thought of as multitasking, multitasking is a misnomer—we actually alternate among tasks. Each time we switch tasks we need time to recover and, during the recovery period, we are less effective. Psychologists call this interruption recovery failure, which sounds a bit like those computer error messages we all dread. We increasingly feel as if we are victims of distractions rather than in control of them.

Outside of the OR (and not always even then), it’s rare to find an environment that promotes the space for deep focus and self-awareness. Mindfulness, as a daily approach to medical practice, is something that goes against the grain of one’s surroundings.

Good doctors need to be self-aware to practice at their best; self-awareness needs to be in the moment, not just Monday-morning quarterbacking.

I like that. Medicine is generally ripe for Monday-morning quarterbacking (and radiology in particular due to the permanent, accessible, and objective nature of the imaging record).

But doctors don’t work in vacuums. We are humans.

Consider for a moment the discipline of human factors engineering:

Human factors engineering is the discipline that attempts to identify and address these issues. It is the discipline that takes into account human strengths and limitations in the design of interactive systems that involve people, tools and technology, and work environments to ensure safety, effectiveness, and ease of use. A human factors engineer examines a particular activity in terms of its component tasks, and then assesses the physical demands, skill demands, mental workload, team dynamics, aspects of the work environment (e.g., adequate lighting, limited noise, or other distractions), and device design required to complete the task optimally. In essence, human factors engineering focuses on how systems work in actual practice, with real—and fallible—human beings at the controls, and attempts to design systems that optimize safety and minimize the risk of error in complex environments.

(I first found that passage plagiarized on page 8 of the American Board of Radiology’s Non-interpretive Skills Guide.)

Despite the rise of checklists and evidence-based medicine, humans have been almost designed out of healthcare entirely. Rarely is anything in the system—from the overburdened schedules, administrative tasks, constant messaging, system-wide emails, the cluttered EMR, or the byzantine billing/coding game—designed to help humans take the time and mental space to sit in front of a patient (or an imaging study, for that matter) and fully be, in that moment, a doctor.

Program directors and the pass/fail USMLE

03.31.21 // Medicine

Just over a year ago, the NBME announced that Step 1 would soon become pass/fail in 2022. A lot of program directors complained, saying the changes would make it harder to compare applicants. In this study of radiology PDs, most weren’t fans of the news:

A majority of PDs (69.6%) disagreed that the change is a good idea, and a minority (21.6%) believe the change will improve medical student well-being. Further, 90.7% of PDs believe a pass/fail format will make it more difficult to objectively compare applicants and most will place more emphasis on USMLE Step 2 scores and medical school reputation (89.3% and 72.7%, respectively).

Some students also complained, believing that a high Step score was their one chance to break into a competitive specialty.

There are two main reasons some program directors want to maintain a three-digit score for the USMLE exams.

The Bad Reason Step Scores Matter

One reason Step scores matter is that they’re a convenience metric that allows program staff to rapidly summarize a candidate’s merit across schools and other not-directly-comparable metrics. This is a garbage use case, in all the ways you might imagine, for several reasons:

  • The test wasn’t designed for this. It’s a licensing exam, and it’s a single data point.
  • The standard error of measurement is 6. According to the NBME scoring interpretation guide, “plus and minus one SEM represents an interval that will encompass about two thirds of the observed scores for an examinee’s given true score.” As in, given your score on test day, you should expect a score in that 12-point range only 2/3 of the time. That’s quite the range for an objective summary of a student’s worth.
  • The standard error of difference is 8, which is supposed to help us figure out if two candidates are statistically different. According to the NBME, “if the scores received by two examinees differ by two or more SEDs, it is likely that the examinees are different in their proficiency.” Another way of stating this is that within 16 points, we should consider applicants as being statistically inseparable. A 235 and 250 may seem like a big difference, but our treatment of candidates as such isn’t statistically valid. Not to mention, a statistical difference doesn’t mean a real-life clinical difference (a concept tested on Step 1, naturally).
  • The standard deviation is ~20 (19 in 2019), a broad range. With a mean of 232 in 2019 and our standard errors as above, the majority of applicants are going to fall into that +/- 1SD range with lots of overlap in the error ranges. All that hard work of these students is mostly just to see the average score creep up year to year (it was 229 in 2017 and 230 in 2018). If our goal was just to find the “smartest” 10% of medical students suitable for dermatology, then we could just use a nice IQ test and forget the whole USMLE thing.
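The arithmetic behind those bullet points can be sketched in a few lines. This is a toy illustration using the SEM/SED values quoted above (6 and 8, respectively); the actual values vary by exam year:

```python
# Toy illustration of the NBME's standard error of measurement (SEM)
# and standard error of difference (SED), using the values quoted above.

SEM = 6  # standard error of measurement
SED = 8  # standard error of difference

def plausible_range(observed_score, sem=SEM):
    """Plus/minus one SEM: an interval expected to relate observed and
    true scores only about two-thirds of the time."""
    return (observed_score - sem, observed_score + sem)

def statistically_different(score_a, score_b, sed=SED):
    """Per NBME guidance, two examinees likely differ in proficiency
    only if their scores differ by two or more SEDs (i.e., >= 16)."""
    return abs(score_a - score_b) >= 2 * sed

print(plausible_range(240))               # (234, 246)
print(statistically_different(235, 250))  # False: 15-point gap < 16
print(statistically_different(230, 250))  # True: 20-point gap >= 16
```

Same takeaway as above: a 235 and a 250 fall within two SEDs of each other, so treating those candidates as meaningfully different isn’t statistically defensible.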

It’s easier to believe in a world where candidates are both smarter and just plain better when they have higher scores than it is to acknowledge that a score is a poor proxy for picking smart, hard-working, dedicated, honest, and caring doctors. You know, the things that would actually help predict future performance. Is there a difference in raw intelligence between someone with a 200 vs 280? Almost certainly. That’s 4 standard deviations apart. But what about a 230 and 245? How much are we accidentally weighting the luxury of having the time and money to dedicate to Step prep?

In my field of radiology, I care a lot about your attention to detail (and maybe your tolerance for eyestrain). I care about your ability to not cut corners and lose your focus when you’re busy or at the end of a long shift. I care that you’re patient with others and care about the real humans on the other side of those images.

There’s no test for that.

If there were, it wouldn’t be given by the NBME.

The Less Bad Reason Step Scores Matter

But there is one use case that unfortunately has some merit: multiple-choice exams are pretty good at predicting performance on other multiple-choice exams. That wouldn’t matter here if licensure was the end of the test-taking game, but Step performance tends to predict future board exam performance.

Some board exams are quite challenging, and programs pride themselves on high pass rates and hate dealing with residents who can’t pass their boards. So, Step 1 helps programs screen applicants by test-taking ability.

Once upon a time, I considered a career as a neurosurgeon instead of a neuroradiologist. No denying it certainly sounded cooler. I remember attending a meeting with the chair of neurosurgery at my medical school. This is only noteworthy because of his somewhat uncommon frankness. At the meeting, he said his absolute minimum interview/rank threshold was 230 (this was back around 2010). And I remember him saying the only reason he cared was because of the boards. They’d recently had a resident that everyone loved and thought was an excellent surgeon but just couldn’t seem to pass his boards after multiple attempts. It was a blight on the program.

Now, leave aside for a moment the possible issue with test validity if a dutiful clinician and excellent operator is being screened out over some multiple-choice questions. At the end of the day, programs need their residents to pass their boards. And it’s ideal if they pass their boards without special accommodations or other back-bending (like extra study time off-service) to help enable success. So while Step 1 cutoffs may be a way to quickly filter a large number of ERAS applications to a smaller more manageable number, they’re also a way to help programs in specialties with more challenging board exams ensure that candidates will eventually move on successfully to independent practice.

There is only one real reason a “good” Step score matters, and that is because specialty board certification exams are also broken.

One of the easiest ways a program can demonstrate quality and maintain high board pass rates, regardless of the underlying training quality, is to select residents who can bring strong test-taking abilities to bear on another round of bullshitty multiple-choice exams.

A widely known secret is that board exams don’t exactly reflect real-life practice or real-life practical skills. Much of this type of board knowledge is learned by the trainees on their own, often through commercial prep products. A residency program in a field with a challenging board exam, like radiology, may be incentivized to pick students with high scores simply as a way to best ensure that their board pass rates will remain high. If Step 1 mania has taught us anything, it’s shown us that if you want high scores on a high-stakes exam, you pick people with high academic performance and then get out of their way.

What Are We Measuring?

When I see the work of other radiologists, I am rarely of the opinion that the quality of their work depends on their innate intelligence such as might be measured on a standardized exam. Ironically, most radiology exam questions are about obvious findings. Almost none rely on actually making the finding or combating satisfaction of search (missing secondary or incidental findings when another finding is more obvious). And literally none test whether or not a radiologist can communicate findings in writing or verbally. When radiologists miss findings and get sued, the vast majority of cases are for “perceptual errors” and not “interpretive” ones. As in, when I miss things, it’s relatively rare that I misinterpreted a finding I made and more often that I just didn’t see something (often something even I would normally catch [because I’m human]).

Obviously, it’s never a bad thing to be super smart or even hard-working. But the medical testing industrial complex has already selected sufficiently for intelligence. What it hasn’t selected for is being competent at practicing medicine.

While everyone would like to have a smarter doctor and train “smarter” residents, the key here is that board passage rates are another reflection of knowledge cached predominantly in general test-taking ability and not clinical prowess. All tests are an indirect measure, for obvious reasons, but most include a wide variety of dubiously useful material largely designed to simply make exams challenging without necessarily distinguishing capable from dangerous candidates.

So when program directors complain about a pass/fail Step 1, they should also be talking with their medical boards. I don’t think we should worry about seeing less qualified doctors, but we should be proactive about ensuring trainee success in the face of exams of arbitrary difficulty.

 

Private Equity & the Comeback of the For-Profit Medical School

03.29.21 // Medicine

You may be used to hearing about private equity takeovers of medical practices, but you may be less familiar with the recent growth of for-profit (primarily osteopathic) medical schools, two of which are owned by Medforth Global Healthcare Education. Medforth, as you might have guessed, is a private equity firm based in New York, NY.

Given the current osteopathic tilt of these for-profit schools, can this do anything but worsen the unfair stigma already facing DO students and physicians?

Well, here is an excerpt describing how a recently proposed for-profit, private-equity-backed medical school in Billings, Montana got derailed:

Billings Clinic has had concerns about many aspects of the Medforth project. These concerns, combined with three events that occurred recently, have caused Billings Clinic to cease discussions with Medforth. On two separate occasions an executive representative of the medical school cast aspersions on a proposed medical school in Great Falls, Montana, on the basis of that medical school’s Jewish affiliation. Those statements intimated that a school with a stated Jewish heritage may not belong in Montana and would not be able to assimilate in the state. In a third instance, a different executive representative of the medical school referred to a female Billings Clinic leader as a “token.” These comments are inconsistent with Billings Clinic’s core values, including a dedication to diversity, inclusion, equity and belonging.

Ew. Now, are these clowns really a bunch of abhorrent scummy sexist racist antisemites? Absolutely a possibility, though flaunting that bias would be incredibly stupid.

Is it possible that much of this bigotry display instead reflects some poorly conceived cynical attempt to appeal to others believed to hold bigoted views? Do these private equity jokers just think that Montanans are a bunch of abhorrent scummy sexist racist antisemites?

Maybe it’s a bit of both. Maybe Medforth is just looking for kindred spirits.

When it comes to people running a medical school, neither possibility should be acceptable.

(h/t @jbcarmody)

Old Guard Medical Wisdom? Rest

03.26.21 // Medicine, Reading

From Rest: Why You Get More Done When You Work Less:

Neurosurgeon Wilder Penfield, for example, warned medical students that unless they cultivated other interests, “your specializing will expose you to an insidious disease that can shut you away from all but your occupational associates” and “imprison you in lonely solitude.” Penfield’s mentor, William Osler, warned that without care, “good men are ruined by success in practice,” and that “ever-increasing demands” can leave even the most curious person “worn out, yet not able to rest.” It was essential to develop “some intellectual pastime which may serve to keep you in touch with the world of art, of science, or of letters.”

These statements came from an era when residents literally lived in the hospital and Osler’s famous surgical colleague William Halsted’s work ethic was fueled by cocaine.

And even they thought it was important for doctors to be well-rounded, have hobbies, and get a life.

Honestly, I’m more interested in what you do for you than what boxes you’re just checking to impress me.

A Chance for Meaningful Parental Leave During Residency

03.24.21 // Medicine, Radiology

Last year, the ABMS—the umbrella consortium of medical specialties—waded into the established toxic mess of medical training schedules with a new mandate to provide trainees with a nonpunitive way to be parents, caretakers, or just sick:

Starting in July 2021, all ABMS Member Boards with training programs of two or more years duration will allow for a minimum of six weeks away once during training for purposes of parental, caregiver, and medical leave, without exhausting time allowed for vacation or sick leave and without requiring an extension in training. Member Boards must communicate when a leave of absence will require an official extension to help mitigate the negative impact on a physician’s career trajectory that a training extension may have, such as delaying a fellowship or moving into a full, salaried position.

6 weeks over the course of an entire residency may not seem like much given the vagaries of life, but it’s a better floor than many programs currently offer. A graduation delay sucks, and it’s the kind of punishment for living your life that causes many doctors to put off big milestones like starting a family. Medical training already takes a long time, and ~1 in 4 female physicians struggle with infertility (and in that study, 17% of those struggling would have picked a different specialty).

This issue is being addressed across medicine, but we’re going to discuss it in the context of radiology because I am a radiologist.

The American Board of Radiology’s recent attempt at how such language should look has drawn some ire on Twitter. Here is their email to program directors that’s been making the rounds:

[Screenshot of the ABR’s email to program directors]

They proposed that a program “may” grant up to 6 weeks of leave over the course of residency for parental/caregiver/medical leave as a maximum without needing to extend residency at the tail end. The language here doesn’t even meet the ABMS mandate, which again states that a program “will” provide a “minimum” of 6 weeks (and explicitly states that said 6 weeks of leave shouldn’t be counted against regular sick time).

The ABR could have simply taken the straightforward approach of parroting the ABMS mandate. They could have—even better—taken the higher ground with an effort to trailblaze the first generous specialty-wide parental leave policy in modern medicine.

Instead, they have advocated for a maximum of six weeks, because any more and they feel they wouldn’t be able to “support the current length of required training.” As in, if a mom gets 3 months off to care for a newborn then the whole system falls apart.

I think they realized it would be prudent to ask for feedback before finalizing the plan, because a new, softer blog post removes any specific language:

We need your input to develop a policy that appropriately balances the need for personal time including vacation as well as parental, caregiver, and/or medical leave with the need for adequate training. 

It is important to realize that the ABR is not restricting the amount of time an institution might choose to allow for parental, caregiver, and/or medical leave, nor are we limiting the amount of vacation a residency program might choose to provide. These are local decisions and the ABR does not presume to make these determinations. However, above a certain limit (not yet determined), an extension of training might be needed to satisfy the requirement for completion of the residency. 

Of course, in the original proposal, the ABR literally did want to limit program vacation (to 4 weeks, see above).

After the mishandling of the “ABR agreement” debacle and the initial we-can’t-do-remote-testing Covid pseudo-plan and now this, I hope the ABR will eventually come to the conclusion that stakeholders matter and that we can make radiology better by working together as a community.

Radiology is a “male-dominated” field, but it shouldn’t be. A public relations win here could make all the difference.

Plenty of Slack

I think there are more than six weeks of slack in our 4-year training paradigm, and it’s hard to argue otherwise.

When the ABR created the Core Exam and placed it at the end of the PGY4/R3 year, they created a system where a successful radiology resident has proven (caveat: to the ABR) that they are competent to practice radiology before their senior year. It created a system where the fourth year of residency was opened up largely to a choose-your-own-adventure style of highly variable impact.

We have ESIR residents who spend most of their fourth year doing IR, and we have accelerated nuclear medicine pathway residents who do a nuclear medicine fellowship integrated into their residency. There are folks early specializing into two-year neuroradiology fellowships during senior year, and others who take a bevy of random electives that they may never use again in clinical practice. (I did three-month nuclear medicine and MSK mini-fellowships during mine. And an extra month of cardiac imaging. Guess how mission-critical all of that ended up being for my career as a neuroradiologist.)

We have many programs with a whole host of extracurricular “tracks” where residents might spend protected time every week doing research, quality improvement, or clinician-educator activities. I would know, I did all three during my residency. We have residents doing research electives and all kinds of other interesting things that may be worthwhile but have no positive impact on their ability to practice radiology clinically, which is the primary purpose of residency training.

A hypothetical example: Take a research track resident with one half-day of protected time every week for 40 weeks a year (say, because of 8 weeks of night float and 4 weeks of vacation). That’s 20 days a year of reduced clinical activity, and 20 working days is basically a month. If they have their R1 year to just focus on learning radiology before taking call, then over the next three years that resident would be “missing” 3 months of clinical time. But no one is seriously arguing that these tracks should postpone residency graduation.

We already have a system where there are minimum case requirements for residents to complete residency training. Last I checked, the ABR is certifying radiologists in the domain of clinical radiology, not their number of peer-reviewed publications or ability to do a sick root cause analysis.

Radiology residency may be four years after a clinical internship, but it’s clear that there is no standard radiology training program clinical “length” despite that fixed duration. Some residents are already doing far fewer months.

No one is adding up diagnostic work hours and saying you need 48 weeks/yr * 52 hours/wk * 4 years = 9,984 hours.
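For what it’s worth, the back-of-envelope math above checks out; here is a quick sketch (all figures are the post’s own hypotheticals, not ABR policy):

```python
# Back-of-envelope check of the "slack" arithmetic above.
# All numbers are the post's hypothetical assumptions, not ABR policy.

half_days_per_week = 1          # protected research time per week
protected_weeks_per_year = 40   # ~52 weeks minus night float and vacation
clinical_years = 3              # R2-R4, after an R1 year spent learning before call

days_per_year = half_days_per_week * protected_weeks_per_year / 2
total_days = days_per_year * clinical_years
print(total_days)               # 60.0 working days, i.e. roughly 3 months

# The hypothetical hour-counting requirement that no one actually imposes:
total_hours = 48 * 52 * 4       # weeks/yr * hours/wk * years
print(total_hours)              # 9984
```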

It’s not a thing, and it shouldn’t be.

Competency-based Assessment and Reasonable Limits

The core problem is that we have time-based residencies masquerading as a proxy for competency. You don’t magically become competent when you graduate. Competency is a continuum. Hiring trainees for a set number of years is convenient. It’s easy to schedule. It’s easy to budget. But it’s an artifact of convenience, not a mission-critical component of clinical growth.

There are R3 residents who are ready for the big leagues, and there are practicing doctors who should honestly move back down to the minors. No one is going to argue that a little more training makes you worse. But the logic that more is better gets us to the unsustainable current state of affairs, where doctors are accumulating more and more training to become hyper-specialized in the least efficient way possible while non-physician providers bypass our residency/fellowship paradigm to do similar jobs with zero training.

We all get better with deliberate practice. The question isn’t: is more better? The question is how much less is still enough for independent practice?

Obviously, the ABMS member boards like the ABR don’t exactly have the power to force institutions to change policies directly, and they probably don’t want to. But they do set the stage by mandating the criteria for board eligibility.

I would argue that the ABR should set a minimum threshold and no maximum. If a program is happy with that resident’s progress and they pass the Core Exam, then consider the boxes checked. Let everyone be treated with dignity and then give the programs the flexibility to compete in the marketplace of support.

When my son was born, I was able to take 4 days of sick time and then went straight into night float. That’s bullshit. You want to see motivation? Tell an expecting resident that if they’re a total champion that they can spend as much time as they need with their baby without delaying graduation.

Less than 6 weeks is unacceptable. And while a 6-week minimum is an improvement, I think the true minimum consistent with current training practices that should also have a chance of being implemented is three months.

I’d love to see six months or more. I don’t think that’s going to happen as a minimum, and there’s a very reasonable argument against it as underperforming residents really may need some of that time back. It would be nice to see language that demands 3 months, has no maximum, and strongly encourages programs to work with residents on a case-by-case basis to ensure they are ready for graduation with however much time they have.

But the first step is to have a minimum that doesn’t punish women who want to stay home with their infants until they’re done cluster feeding. Convince me otherwise.

Fairness

The ABR doesn’t use the language of “fairness” in their email, but I suspect the perception of fairness is at play. It’s almost always at play when older doctors consider policies that might benefit younger physicians. It’s the I-did-it-this-way-and-I’m-amazing-so-it-must-be-an-integral-part-of-the-process. It’s the hazing.

Right now, some lucky residents across the country get varying degrees of time “off” thanks to PD support in the form of research electives, reading electives, and program staff simply looking the other way. We need to standardize a fair minimum that enables programs to provide a consistent, humane process and not just put trainees solely at the mercy of their PDs and local GME office.

Yes, it’s true that if you allow parents time to be parents, or people time to take care of loved ones, or people time to recover from illness, then some residents will work fewer months than others. Every resident already has a unique experience, but a policy change will also mean that not every resident has the same “paper” experience. That’s a fact.

Some people will say, that’s not fair. That it’s not fair to single residents or non-parents. That it’s not fair to the able-bodied. Or to those whose aging parents are healthy or have the resources to support themselves.

But let me provide a counterpoint:

I don’t think fairness means that every single resident has to have the exact same experience. They already don’t. I think fairness means we treat humans with the respect and compassion that every person deserves. I want to live in a world where everyone gets time to be a parent, even if yes, that world means that some doctors may have a career that is a few months shorter.

I think fairness means not punishing people when life happens just because making people jump through hoops makes it easier to check a box.

If you’re ready to practice, you’re ready.

If we need to reassess the validity of an exclusively time-based (instead of competency-based) training paradigm in order to do that, then let’s get to it.

The ABR is accepting feedback until April 15.

Patient Satisfaction: A Danger to be Avoided

03.16.21 // Medicine

Doctors intuitively know that the Yelpification of medicine is bad. But it’s not just toxic to the physician-patient relationship and bad for burnout, it’s actually dangerous.

The outsized and misplaced importance of patient satisfaction scores is a perfect embodiment of Goodhart’s law, well-paraphrased as “when a measure becomes a target, it ceases to be a good measure.”

If you make patient satisfaction scores a critical target—and they are—you will see consequent mismanagement. This is so blatantly apparent when it comes to urgent care and pain management that, if anything, high satisfaction scores are likely a more meaningful signal of poor care.

If a patient comes to an urgent care for a URI and wants antibiotics, they will be most “satisfied” when they receive the prescription they didn’t need. And all that over-treatment is not without risk.

Even outside of quality metrics, profit-centered health care businesses need patients to make money, and the “customer” is always right.

A study published in JAMA is a great example of the obvious negative externalities of prioritizing patient satisfaction scores. It analyzed a large number of telemedicine visits for URI:

72 percent of patients gave 5-star ratings after visits with no resulting prescriptions, 86 percent gave 5 stars when they got a prescription for something other than an antibiotic, and 90 percent gave 5 stars when they received an antibiotic prescription.

In fact, no other factor was as strongly associated with patient satisfaction as whether they received a prescription for an antibiotic.

Another study out of UC Davis analyzed a >50,000-person national Medical Expenditure Panel Survey and found that the most satisfied patients had greater chances of being admitted to the hospital, ~9% higher total health-care costs, and 9% higher prescription drug expenditures. Of course, if you’re a for-profit entity (and most “non-profit” hospitals certainly are), higher costs and more prescriptions often just mean more profit. A win-win-win.

But even worse, death rates also were higher: For every 100 people who died over an average period of nearly four years in the least satisfied group, about 126 people died in the most satisfied group.

Moreover, the more satisfied patients had better average physical and mental health status at baseline than the less satisfied patients, and the association between patient satisfaction and death was strongest among the healthiest patients. Perhaps the “worried well” should be worried.

The push to satisfy patients at all costs is no secret. But some doctors are fighting back, like Dr. Eryn Alpert, who sued Kaiser Permanente in 2019:

A doctor who refused to prescribe patients unnecessary opioids has sued Kaiser Permanente, alleging the way the company used patient satisfaction scores hurt her career and incentivized doctors to over-prescribe painkillers.

…

By requiring its employee physicians to achieve certain patient satisfaction scores in departments where those scores are closely related to a physician’s willingness to prescribe opioids, other addictive medications, and to order unnecessary medical testing (e.g. labs, radiology) in response to patient demand, Kaiser’s intent was to increase its profits so that … its executives and physicians would receive higher bonus compensation.

These sorts of individual fights happen quietly all over the country, but the opioid crisis may have created an opportunity for doctors to put the focus back on patient outcomes.

Do no harm in many cases means doing less, but the combination of short visits and Press Ganey pressures makes it harder for doctors to do the right thing. Healthcare may be a business, but patient care isn’t.

This article was originally published in Physician Sense in October 2019.
