Ben White
Back on Backtable

03.20.25 // Radiology

My first Backtable episode about the rad shortage, the job market, and PE in radiology was back in 2023.

I’m back on Backtable this week with a wide-ranging conversation about the job market, teleradiology, updates in the world of radiology private equity, and more. It’s always fun to chat with Ally and Mike; they’re awesome. Though, for the record, while I appreciate their kind introduction, I do not condone, and categorically reject, any overly charitable label that contains or alludes to the phrase “thought leader.”

Some articles for the show notes that are relevant to our discussion:

  • The PE lies that inspired the creation of a new job board
  • More reasons why I made Independent Radiology specifically to support private practice
  • All about the new Aetna vs RP lawsuit (which itself also includes links to the posts about the UHC vs RP lawsuit that settled last year)

//

Hospital Stipends vs Higher Rates

During the show, Ally asked about groups getting paid more by hospitals via stipends (i.e. call-pay/service fees/whatever-you-want-to-call-it) versus a guaranteed per-RVU rate (the latter is often direct pay per study with the hospital doing their own billing but can also be a bump after billing to an agreed-on rate to account for unpaid care/payor mix/shortfall from market rates).

I suggested that for many smaller groups approaching these conversations with hospitals for the first time, a stipend is probably easier. It’s predictable, there are easy precedents the hospital understands (e.g. other call pay), and it doesn’t usually require seismic contract or billing changes.

The reality is, I think, of course, a bit more nuanced. It depends on whether the pay increase is added on to an existing contract and long-standing relationship or part of a new contract negotiation, the size of the group, and the size/volumes of the hospital.

To reiterate, a call stipend or radiology service fee may be much more palatable to some hospitals when added to a preexisting contract, as it doesn’t require changing anything else and falls in line with the familiar idea of call pay. For a group, it also has the benefit of providing a floor: even if volumes aren’t high, the group still gets paid for being willing to cover after hours.

However, a per-RVU rate will likely make much more sense for a new hospital coverage paradigm in our current era of radiology contract musical chairs, where a group is guaranteed that each case is paid at a good rate and doesn’t need to concern itself with billing reimbursement, bad debt, and other headaches. You read a case, you get paid a predictable rate. This is especially attractive when a group’s payor contracts are weak, and it protects against downward reimbursement pressure. It’s also what a lot of the recent teleradiology contracts have been, which doubly makes sense given that these groups are not local, may not have existing local payor contracts, and are often aggregating multiple hospitals together into one feed and spreading the work around.

Some hospitals may also be happier to pay fractionally for the work they’re actually getting than a separate fee for access (but I suspect they are most happy just spending less). Pay per RVU could still be a problem if there is a bad case mix with large numbers of plain films, etc. I haven’t personally heard of many hospitals paying per-case on a modality basis (something relatively common in the outpatient world), but that doesn’t mean it isn’t happening.

In some ways, a service fee provides a floor that guarantees a certain level of income despite variable volumes, while a high per-RVU rate guarantees fair reimbursement in the setting of high/growing volumes. They also aren’t mutually exclusive.

The reality is that money is fungible, so what really matters for a group’s bottom line is more the actual pay itself than the exact mechanism. It’s not hard to look at your current RVUs and average reimbursement per hour or shift, add in a proposed stipend, and then do the simple math to figure out the effective pay per RVU. Yes, getting paid more per RVU directly is more straightforward. It scales easily with growing volumes, whereas a stipend may need to be increased if more staffing is needed in the future. Again, a small practice trying to remain competitive and putting one person on call at a time is a different beast than a large conglomerate with a large night team that is staffing based on an aggregate of multiple hospitals.
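To make that concrete with invented numbers: if after-hours coverage generates 1,500 RVUs per month at an average collection of $40 per RVU, then a $30,000 monthly stipend takes the effective rate to ($60,000 + $30,000) / 1,500 = $60 per RVU, a figure you can compare directly against any proposed per-RVU offer.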

Each one is optimized for a different kind of hospital, a different kind of relationship, and a different kind of future. For groups going to their hospital and negotiating, the real best method is whatever the hospital is willing to do and still gets you the reimbursement you need for recruitment and retention. A credible threat of walking is the best leverage.

The most salient point is that groups can no longer provide services at a loss and still expect to be able to pay competitively in the market.


Application Selection using WinActivate + Regular Expressions for AutoHotkey

03.17.25 // Miscellany, Radiology

This is a brief adjunct to my post on using AutoHotkey in Radiology (which basically every radiologist should be doing, by the way). I include it here not because I expect many people to run into the same problem I did but rather because it’s a good example of the not-so-challenging troubleshooting that we shouldn’t be scared to do in our quest for a better workflow. I’m a novice and that’s okay! We can still do cool stuff!

In that post, I mentioned an example script I made to streamline launching patient charts in Epic from PACS at home since our automatic integration doesn’t work remotely.

One thing I didn’t describe in that post is an annoying quirk of activating Epic because it runs through Citrix. Since Citrix is weird, and there are presumably multiple servers that can run Epic, the window title of our Epic instance actually changes with each login. Therefore, the usual static name-matching technique we use to activate PowerScribe, Chrome, or other typical apps doesn’t work.

In our system, Epic always has a title like “ecpprd2/prdapp01” or “ecpprd3/prdapp04”—but the numbers always shift around.

For a while, I used a workaround:

WinActivate, ahk_exe WFICA32.EXE

…which is the name of the Epic/Citrix program .exe file running on my PC, and as long as only one Citrix application was open at the time, it worked (I had to make sure to close an MModal application that auto-launched with it, but otherwise it was fine). Recently, my hospital started using some useless AI tool that cannot be closed, which broke my script.

The workaround one of my colleagues figured out is to change the AHK TitleMatchMode of that specific hotkey to recognize “regular expressions” (a “RegEx” is a sequence of characters that specifies a pattern of text to match).

SetTitleMatchMode RegEx

Then we can use WinActivate with a few modifiers to recognize an unchanging portion of the window title. In our example above, where the title always contains ecpprd or prdapp, we can use the following to select the Epic window:

WinActivate i)^ecpprd

In this example, the “i” modifier allows case-insensitive search, and the caret (^) anchors the match to the beginning of the window title. You can read more about regular expressions in AHK here.
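Putting it together, here’s a minimal sketch of the full hotkey (assuming the AutoHotkey v1 syntax used in the snippets above; F12 is just a placeholder trigger). Setting the match mode inside the hotkey keeps the RegEx behavior scoped to that hotkey’s thread instead of changing it for the rest of the script:

F12::
    SetTitleMatchMode, RegEx  ; applies to this hotkey's thread only
    WinActivate, i)^ecpprd    ; case-insensitive match for titles beginning with "ecpprd"
return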

In reality, if I had just explained my problem to any of the popular LLMs, I’m confident they would have given me the answer. They absolutely excel at this. The rapidly approaching agentic era will allow for some very easy, very powerful scripting in the very near future even if commercial products lag behind.

Feedback as a Radiology Resident

03.13.25 // Radiology

The Good Jobber and the Critical Curmudgeon

Most radiology resident evaluations are a one-way trip on the “keep reading” express. Maybe, in harsher climates, “read more,” which is just a coded way of saying I wish you were better and more knowledgeable, with the word “reading” used as a stand-in for “learn more useful stuff, please.”

Many attendings are nice but not kind. We don’t want to hurt anyone’s feelings, so we don’t share specific critical feedback other than in cursory, generic, or essentially universal ways.

When more substantive critical/negative feedback is given, it can also be idiosyncratic, concerning various pet peeves (i.e. not generalizable or particularly helpful), or a list of mistakes (without direction on how to fix them). And because most of us are cowards, when these shortcomings do surface, they often land as a total gut punch of a surprise.

Feedback as a first-year radiology resident is often more a measure of compliance than growth.

But even when helpful, most rotation evaluations feel more like a grade/assessment and less like a pathway forward.

Ideally, you’d get feedback continuously. You don’t want generic ‘good job’ feedback or ‘you suck’ feedback. Neither is very helpful except to tell you that things are generally working or generally not working, and that’s not going to guide action except in the broadest sense. When it comes to in-person or at least specific one-on-one feedback, in an ideal world you would simply get great, actionable feedback. But we don’t live in an ideal world, and most feedback you receive will be generically positive or negative in ways that mostly reflect the bias and personal preferences of the person providing it.

(A classic useless example is a female resident hearing that she should “be more confident” from older male attendings)

So, in all likelihood, you will feel that your evaluations fall into one of two camps: The Good Jobber (gee, thanks) and the Critical Curmudgeon (okay, jerkface). Neither is all that helpful. Chances are, it’s going to be up to you to get the feedback you want/need.

The Painful Request

There are situations where—despite the unpleasant awkwardness—it is in the learner’s best interest to ask for feedback. When you directly ask for feedback, you have a greater chance of receiving helpful specific feedback if you ask for specificity. So don’t just say, “Do you have any feedback for me? How can I get better?” Rather, consider asking about specific ways to improve: How can I improve the conciseness of my reports? How is my organization? Have you noticed any instances where my reporting style may be unclear to clinicians? Is there a certain kind of perceptual mistake that you’ve seen me make multiple times that I should incorporate into my search pattern or my proofreading checklist to do better quality work? A direct question is more likely to get a directly helpful response. Does that sound tool-ish when written out this way? You bet it does. But surgical requests are more likely to generate meaningful responses.

At any given time, you may be working on a specific part of your approach to radiology. You may be working on developing your first-year search pattern. You may be working on the library of if-then pattern extenders that help you address pertinent disease features or whittle down a differential diagnosis. You may be working on your mental checklist so that you do not omit parts of the report. You may be working on trying to home in on and describe the findings that matter while leaving out truly extraneous detail. You may be working on making reports that are as short as possible while containing the information that will help decide patient management.

When you ask for feedback or when people give you generic feedback, consider tailoring your request or your follow-up questions to get advice and feedback on the issues that you’re working on actively right now. We simply can’t actively work on every aspect of our jobs every day. That’s not how deliberate practice works. We all get better and more refined in our routines over time organically. Your process, whether it’s optimal or not, will become ingrained through repetition.

But that’s not how experts actually improve to the next level of effectiveness. They do so in a piecemeal fashion. So if you want to work on that process and not just solidify it through the inevitable accrual of time, then you may need some guidance on how to deploy that extra thoughtful work. If you aren’t sure what to work on, then consider asking for the one thing that is weakest in a specific context: What sort of finding am I missing? What sort of error am I making in my reports? What is the most irritating thing that you find yourself editing when finalizing my work?

Reception & Acceptance

1. Listen for Patterns

We can be fragile: we can take feedback too personally and miss opportunities to improve.

We can be stubborn: we get used to hearing the same things and stop internalizing them, then start ignoring what others say and also miss opportunities to improve.

Patterns are important: Don’t let a single bit of negative feedback crush your self-worth. But, the more often you hear something, the more seriously you should take it. Even when the feedback feels isolated, keep in mind that most feedback you get will fall into the generic nice-but-not-kind good-jobber variety just by dint of attending personality and not your performance.

2. Experts Are Often Wrong

Never forget that not all feedback is good feedback, and many experts do not understand how they arrived at their expertise. They may not know which practice methods would be most efficient to achieve mastery even for themselves, let alone for any individual learner. They are trying to help, but meta-learning is a challenge.

Most people do not really know or understand how they learn. You might know how you like to learn, but how you like to learn isn’t necessarily the same as how you learn the best.

Commentary on your deficiencies is likely spot on, but proposed solutions for how you should fix them are a different story.

3. Stop Wrong-Spotting

In defense of our ego, we often look for inaccuracies that allow us to psychologically reject the entire package. Instead of looking for reasons the person is wrong in order to create a straw man (even if they are wrong in some details), look for the parts of the feedback that are helpful or potentially true. The goal isn’t to be right; it’s to be better.

The Most Important Feedback is the Case-Loop

It’s not just what you’re doing, but how you’re doing it. It’s a difference in perspective. The way a novice and an experienced reader approach an exam is not the same, and the goal of the learning process is to move efficiently along the path from learner to expert.

One of the amazing baked-in capabilities of radiology residency training is that previewing and dictating a case and then reading out gives you 1) your attempt, 2) a fallible but more experienced person’s attempt, and 3) the chance to see the difference.

Obviously, you cannot see directly into the mind of your attending, and even how they verbalize their thought process or describe their search pattern is not necessarily the same as what they’re actually doing. Our subjective awareness of how we think is not perfect. We are in part black boxes even unto ourselves. At the risk of getting too far into metaphysics, we don’t think we think how we think.

Nonetheless, every case you read is the most plentiful opportunity for feedback. It’s not about just missing a certain finding or whether you were right or wrong. It’s about where you are now and seeing the next steps to getting where you want to be.

Feedback is not just what you get at the end of a rotation; it’s the difference between what you did and what—after the fact—you wish you’d done.

Current Demands for Radiology Subspecialties

03.11.25 // Radiology

As of this week, Independent Radiology features 125 private practices, which gives us an interesting look at a slice of the radiology job market. Here is the breakdown of subspecialty openings today:

  • Mammo: 79% (99)
  • Body: 78% (98)
  • General: 71% (89)
  • Neuro: 66% (83)
  • MSK: 54% (67)
  • VIR: 43% (54)
  • Chest/Cardiovascular: 37% (46)
  • NM/PET: 34% (42)
  • Peds: 26% (32)
  • Neuro IR: 6% (7)

Off-hours positions are also plentiful, with 39% (49) hiring swing shifts and 35% (44) hiring overnight radiologists. I suspect that those offerings reflect not just specific group needs but also an attempt to tap into the available remote workforce and meet market conditions. (That reminds me, my group has one opening for each.)

Overall, 67% (84) of groups have remote positions of some variety, and 30% (38) are willing to hire contractors in some fashion.

Open Loop Errors

03.10.25 // Medicine

From “The how we need now: A capacity agenda for 2025 and beyond,” published by the Niskanen Center think tank:

We need a new operating model for government if we are to restore our capacity to achieve our policy goals. This model must close the open loop we described in Part 3: a one-way system from law- and policy-making to implementation to real world outcomes that offers little space for learning and adjustment along the way. We can no longer rely on media coverage and elections, blunt tools that tend to be saved only for the most catastrophic errors, as the main corrective mechanisms.

Closing the loop means that we must apply test-and-learn approaches. This means conducting multiple small-scale experiments at the boundaries of policy and delivery — and doing this permanently, in pursuit of a policy intent or outcome. Incremental changes are scaled up once there is good evidence they work in reality. Test-and-learn does not mean simply running lots of pilots. A pilot implies starting with a phase for learning, which then ends as you move into “roll out.” Responsiveness is an embedded attribute, not a phase on a timeline. Closing the loop means the learning doesn’t stop at an arbitrary moment.

We briefly touched on this paper before and the concept of the Cascade of Rigidity “that occurs when well-intentioned laws and regulations become increasingly inflexible as they step down through bureaucratic hierarchies.” They discuss a healthcare-related Open Loop error with MACRA:

MACRA (Medicare Access and CHIP Reauthorization Act) was designed to pay doctors more for higher-quality care. But an implementation team at the Centers for Medicare and Medicaid Services (CMS) knew that doctors were already frustrated with the burdensome and confusing ways they had to report their data under the existing program, and many were so concerned that the new system would be just as bad that they were threatening to stop taking Medicare patients. Thus, a law designed to improve the quality of care threatened to degrade it, especially for patients in rural areas who relied on the small practices that were most affected.

Recognizing how challenging the administrative requirements could be for practices with fewer resources and limited Medicare revenue, one provision in the law exempted doctors who treated a minimal number of Medicare patients. But CMS’s initial interpretation of this provision would have required all providers to collect and submit a full year’s worth of data in order to demonstrate they fell below the exemption threshold. This meant exempt doctors would still have to comply with all the program’s requirements, including updating their systems and reporting data, only to be excused from all this at a later date. It’s not hard to see why this approach, while technically accurate, would have worked against the intent of lawmakers. Those doctors would have left the program, hurting the very patients the law meant to help.

Another provision allowed smaller practices to form “virtual groups” to gain advantages enjoyed by larger practices. Staff interpreted this provision as a mandate to create a “Facebook for Doctors,” a platform for doctors to find and connect with each other. A staffer on loan from the United States Digital Service, a part of the White House, doubted that Congress intended for CMS to create a social media platform, especially considering the limited time and resources available. She took the almost unheard of step of consulting the House Office of the Legislative Counsel, and confirmed that Congress simply wanted to make it easier for small practices to report together and had no intention of mandating a “Facebook for Doctors.”

Under more common circumstances, these and other overly literal interpretations of the law would have resulted in a burdensome, unwieldy, and ultimately unsuccessful implementation. Doctors would have simply opted out, leaving patients with fewer options, and some in rural areas with none.

Thanks to nimble actions by people at CMS and USDS to ensure that Congressional intent was realized rather than over-relying on literal interpretations, this outcome was avoided. But conflicts like these all too rarely resolve in favor of common sense. Agency staff are commonly taught to treat legal language as literal operating instructions, as if a programmer had written code and they were the computer executing that code. But as any programmer will tell you, code rarely works as intended on the first try. It works after trying one approach, testing it, adjusting, and continuing that cycle over and over again. That cycle of adjustment is very difficult to engineer within policy implementation today.

We run on an open loop, in which implementation teams neither test their programs in the real world nor loop back to the source for adjustments. We need to build the affordances for them to do both, thus closing the loop. Otherwise, the code will more often than not run exactly as Congress wrote it, even if that doesn’t result in what Congress wanted.

This is emblematic of the problem and also insane: a government-created Facebook alternative for doctors for the express purpose of dealing with a procedural nightmare created by well-intentioned but completely untested, unproven, almost certainly unhelpful, and very gameable quality goals.


The Necessity of Internal Moonlighting

03.02.25 // Radiology

I’ve been advising a radiology app startup called LnQ. I think of it like Qgenda for radiology moonlighting. It can link up with your practice schedule and HL7 feed and helps groups/hospitals/etc leverage the excess capacity in their own workforce: a practice can activate LnQ when there is extra work to do, automatically notify the people who aren’t currently working that additional work is available (and how much), and then let those people do that work and get paid quickly for it without the multiple manual steps those processes usually require. It was first developed by an independent private practice that was struggling with its lists; since implementation, the practice has been able not only to clear the lists every day but also to go after some lucrative contracts, knowing it had more bandwidth than initially thought.

I think it’s neat, and I think it fulfills a need that many practices have. On top of its purpose of facilitating internal moonlighting, LnQ is also building a network of independent contractor radiologists on the app platform so that LnQ can also be used to directly connect individual rads and groups without a teleradiology company or locums middleman adding friction and heavy costs. A practice can then notify its ICs when there is work available and at what rate. One of the issues I’ve discussed with practices multiple times since starting Independent Radiology is that many of them could use an IC here or there, but not with enough frequency or volume to make the ongoing hassle worth it for either party. LnQ is taking care of some of the initial vetting, and multiple practices on the platform will mean that everyone has a better chance of cobbling together the excess work and excess labor in one place to help everyone get the patient care done (of course, credentialing will still suck until someone fixes that broken system).

If you are a group that wants to hear more or an individual rad looking for contractor work, you can see more here (the direct physician interest form is here).

//

When I first joined my practice in 2018, they’d already realized the importance of leveraging their workforce’s extra capacity so that when volumes were high, excess work could go to those with the energy and time to do it. Back then, however, we used to submit our after-hours cases as an Excel file attachment sent to payroll. It was tedious and prone to mistakes.

Flash forward several years, and we have a full-time data analytics and computer dude who has built out workflows and internal apps to facilitate submitting reimbursement for expenses, paying for tumor boards and conferences, and essentially automating most of the tracking for our internal moonlighting from our worklist (Clario) database. Our moonlighting is per-click, and we know exactly which cases are being submitted for “after hours.” Our process is easy and fully transparent. We can run whatever analytics we want on it. For marking qualifying work, we’ve done things like flagging eligible cases in Clario (After Hours) or just using a list-volume watermark, but the underlying principle—when it comes to asking for help or providing help—is that friction is the enemy.

The reality is that with growing volumes and this volatile, tight job market, recruitment isn’t always enough. And while, on the whole, the radiologist workforce is aging and burnout runs deep, we need to give those with some juice left to squeeze more options, so that anyone with the bandwidth to trade time for money has the chance to do so in as many ways as possible.

Many practices have internal sales of call shifts or various swing shifts that are offered up as moonlighting, and if that’s enough to make everyone happy and get the work done? Great, you’re done. But even then…how much are those shifts paid? Does the rate get sweetened if no one wants to do them? How is that extra work tracked? Who does your payroll and how often do they mess up? How much time and effort does all of that take to coordinate? If no one is biting, can you offer up that shift to a contractor?

And, if extra shifts aren’t sufficient or desirable, that’s when ad hoc moonlighting on an hourly, RVU, or flat per-modality basis can become critical. A rad might have time for an extra scan here or there after the kids go to bed or be willing to work for an extra hour before or after their shift on occasion to avoid traffic but not be willing to commit to selling a vacation day or taking a complete extra shift or call weekend.

Taking it a step further, there are so many ways practices are structured to get the work done. Yes, a big practice with all work combined in one massive worklist and lots of overlapping shifts can make certain kinds of coverage very straightforward, but many practices have different kinds of work and multiple different systems to get it done. Is there a way you can choose to decompress a terrible call shift by asking others for a smidge of help?

What many practices need is a way to tell people who aren’t already scheduled to be working that there is work available to do, what/how much work there is (an hour? 7 RVUs?), and how much that work will pay. Maybe that payment amount changes or maybe it’s fixed. One thing you definitely don’t want is an uneven burden of easy or hard shifts disproportionately falling on certain individuals with no way to make things fair. What do you do, for example, in a practice with multiple lists if some service lines are overly busy and those rads are stuck staying late to clear the hospital work when other folks could sometimes hop on and clear the list in a few minutes so everyone can go home on time?

There’s also burnout mitigation: maybe someone who hates taking call wants to offer up some of their call pay to get some help, or maybe it’s the practice just trying to get the work done when the number of warm bodies on the schedule isn’t enough without garnishing time off. Sometimes you can be a little more flexible on PTO or backup coverage if there’s an easy way to spread the work across willing people PRN.

Our group invested time and money into making a custom in-house solution that works for our practice (and, unsurprisingly, it doesn’t do all the things a dedicated company like LnQ has made possible; as a startup, they can also easily add features as groups request them). Not all groups can or should bother creating a complicated tech solution to leverage their own workforce, even if they do ultimately need to leverage their own workforce.

Part of retention is meeting people where they are, and internal moonlighting is often one of those measures that can make both the slow vs fast and the lifestyle vs hungry readers happy. What more groups need to make the enterprise work is a system that makes it easy to tell the people who could potentially work extra when extra work is available, how much and what kind of work it is, and then allows those people to do that work and get paid quickly for doing it without tracking issues and other hassles.

We need more happy rads.

The ABR’s New EULA

02.25.25 // Radiology

Back in 2020, the American Board of Radiology released new agreements required in order to participate in maintenance of certification (“continuing certification”), the thing you have to do in order to be board-certified and practice radiology, no matter how meaningless the process is (thankfully, the ABR’s OLA process is relatively painless). Back then, there was a bit of drama because the agreements were draconian and frankly a bit sketchy. I wrote about it here.

In case anyone is wondering, the new version folks are signing this year again reads like the legalese you ignore when trying to install iTunes.

Just a few highlights to illustrate the degree of needless bullshit at play (needless ALL CAPS is all them):

UNDER NO CIRCUMSTANCES, INCLUDING BUT NOT LIMITED TO NEGLIGENCE, SHALL THE BOARD BE LIABLE FOR ANY SPECIAL OR CONSEQUENTIAL DAMAGES THAT RESULT FROM INCORRECT INFORMATION PROVIDED BY THE BOARD TO THE MEDICAL COMMUNITY OR TO THE PUBLIC REGARDING THE STATUS OF MY CERTIFICATION, EVEN IF THE BOARD HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. APPLICABLE LAW MAY NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO ME. I FURTHER AGREE THAT I WILL PROMPTLY NOTIFY THE BOARD OF ANY ERRORS OR OMISSIONS IN MY INFORMATION.

Under no circumstances is the ABR legally responsible for getting its core purpose right.

The hedging of true radiologists:

THE CONTENT AND THE SITE ARE PROVIDED “AS IS” AND “AS AVAILABLE” WITHOUT WARRANTIES OF ANY KIND EITHER EXPRESS OR IMPLIED. TO THE FULLEST EXTENT PERMISSIBLE UNDER APPLICABLE LAW, THE ABR DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF AVAILABILITY OF THE SERVICE, NONDISRUPTION, SECURITY, ACCURACY, THE USE OF REASONABLE CARE AND SKILL, QUALITY, MERCHANTABILITY, TITLE OR ENTITLEMENT, FITNESS FOR A PARTICULAR PURPOSE, ABILITY TO ACHIEVE A PARTICULAR RESULT OR FUNCTIONALITY, AND NONINFRINGEMENT OF THIRD-PARTY RIGHTS, AS WELL AS WARRANTIES ARISING BY USAGE OF TRADE, COURSE OF DEALING, AND COURSE OF PERFORMANCE ON THE PART OF THE ABR, RELATING TO THE SITE AND THE CONTENT. THE ABR DOES NOT WARRANT THAT THE FUNCTIONS OF THE SITE OR THE CONTENT WILL BE UNINTERRUPTED OR ERROR-FREE, THAT DEFECTS WILL BE CORRECTED, OR THAT THE SITE OR THE SERVER(S) THAT MAKES THE SITE AVAILABLE ARE FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS. ACCESS TO THE SITE MAY BE SUSPENDED TEMPORARILY AND WITHOUT NOTICE IN THE CASE OF SYSTEM FAILURE, MAINTENANCE, OR REPAIR, OR FOR ANY OTHER CAUSE. APPLICABLE LAW MAY NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION MAY NOT APPLY TO ME.

I’m sure this is all normal. And just a final catchall disclaiming liability for anything and everything:

THE BOARD SHALL NOT BE LIABLE FOR ANY DAMAGES OF ANY NATURE SUFFERED BY ANY CUSTOMER, USER, OR ANY THIRD PARTY RESULTING IN WHOLE OR IN PART FROM THE BOARD’S EXERCISE OF ITS RIGHTS UNDER THIS CONTINUING CERTIFICATION AGREEMENT.

I posted the corresponding screenshots on Twitter of the site pop-up that you are forced to sign; the agreement is not available on a public-facing URL. Not included in the above, among other things, is the part where they explain that they will never identify anyone who reports you to the board, even though knowing your accuser might let you better defend yourself against allegations.

Do lawyers correlate clinically?

The Fool’s Errand of 30-year Radiology Predictions

02.24.25 // Radiology

From a Radiology Business summary of two new JACR papers predicting the future radiology market:

In the next 30 years, the supply of radiologists is expected to grow by nearly 26%, assuming no increases in the number of radiology residents. Meanwhile, imaging utilization will climb between 17-27% during the same time, depending on modality, experts detailed in the JACR.

[…]

The present radiologist shortage is projected to persist unless steps are taken to grow the workforce and/or decrease per person imaging utilization…the shortage is not projected to get worse, nor will it likely improve in the next three decades, without effective action.

The two papers are here and here. To be fair, if you read the papers, there is more nuance to their predictions, and they acknowledge important trends (e.g. higher radiologist attrition in recent years and increasing utilization rates even outside of aging/demographic trends) that could easily result in big differences.

But.

Does anyone think taking any version of the current status quo of the radiology workforce or of current imaging volume trends and extrapolating it 30 YEARS into the future generates a meaningful prediction?

Radiology was radically different 30 years ago, and multiple predictions during that period were comically wrong. I don’t see a reason to assume the future will be any more predictable. A world where AI changes nothing and the already increasing role of non-radiologists in imaging interpretation (including but not limited to midlevels) magically flatlines is not a world I think we live in.

A stable 30-year workforce shortage would be…impressive.

Choosing Rocks

02.20.25 // Miscellany

There’s a common first-things-first productivity parable of the rocks and the jar. It goes like this:

Imagine you have an empty jar that represents your life, and you have different sizes of rocks that represent different priorities and commitments. The big rocks represent the most important things in life, like your family and health. Medium rocks would be secondary priorities like intermediate career goals, social commitments—other worthwhile but less crucial activities. And finally, the small rocks and sand represent the minor daily tasks, distractions, and time-fillers that can easily consume our attention.

The thrust: If you fill your jar with sand and small rocks first, you won’t have room for the big rocks. But if you put the big rocks in first, then the medium rocks, and pour in the sand last, it will filter down into the spaces between them—and everything fits.

From Oliver Burkeman’s Four Thousand Weeks: Time Management for Mortals:

Here the story ends—but it’s a lie. The smug teacher is being dishonest. He has rigged his demonstration by bringing only a few big rocks into the classroom, knowing they’ll all fit into the jar. The real problem of time management today, though, isn’t that we’re bad at prioritizing the big rocks. It’s that there are too many rocks—and most of them are never making it anywhere near that jar. The critical question isn’t how to differentiate between activities that matter and those that don’t, but what to do when far too many things feel at least somewhat important, and therefore arguably qualify as big rocks.

That tracks.

The Art of Creative Neglect

Principle number one is to pay yourself first when it comes to time. I’m borrowing this phrasing from the graphic novelist and creativity coach Jessica Abel, who borrowed it in turn from the world of personal finance, where it’s long been an article of faith because it works.

Abel saw that her only viable option was to claim time instead—to just start drawing, for an hour or two, every day, and to accept the consequences, even if those included neglecting other activities she sincerely valued. “If you don’t save a bit of your time for you, now, out of every week,” as she puts it, “there is no moment in the future when you’ll magically be done with everything and have loads of free time.”

From both of these passages, my takeaway is that we can’t hope to actually choose all the rocks in some cohesive way. Avoid some of the useless filler sand, sure. But, maybe, don’t wait and just choose a rock sometimes:

Thinking in terms of “paying yourself first” transforms these one-off tips into a philosophy of life, at the core of which lies this simple insight: if you plan to spend some of your four thousand weeks doing what matters most to you, then at some point you’re just going to have to start doing it.

The easy trap is too many irons in the fire:

The second principle is to limit your work in progress. Perhaps the most appealing way to resist the truth about your finite time is to initiate a large number of projects at once; that way, you get to feel as though you’re keeping plenty of irons in the fire and making progress on all fronts. Instead, what usually ends up happening is that you make progress on no fronts—because each time a project starts to feel difficult, or frightening, or boring, you can bounce off to a different one instead. You get to preserve your sense of being in control of things, but at the cost of never finishing anything important.

I’m trying to work through a backlog of abandoned work, but at this point my inability to focus, attend, and limit possibilities is a core character flaw.

Quality, speed, and “productivity”

02.17.25 // Radiology

The Tension

There is an inherent tension in radiology between quality and speed. Obviously, there are faster radiologists and slower radiologists, and there are better radiologists and worse radiologists. It is not even that you are either fast or slow in all contexts. Nor is it a simple dichotomy where you are either slow but good or fast but bad. Everyone exists on a continuum for both.

In general, an individual will experience a decrease in quality past a certain increase in speed, which may be compounded by case mix, complexity, time of day, and number of interruptions. But also: we are unlikely to realize meaningful gains in quality past a certain decrease in speed. You only need so much time reviewing a study before experiencing diminishing returns.

The Incentives

Because groups are composed of individuals, and individuals fall on a spectrum, it is challenging for a group to incentivize everyone to perform at their optimal point on their speed/quality curve. For one thing, some people, when incentivized in a productivity system, are perfectly willing to churn out garbage if it earns them more money. However, in a completely flat structure where everyone earns the same regardless of the number of work units produced, there is also no incentive for individuals to work hard if their natural pace would lead them past a predetermined watermark. A fast reader has the perverse incentive to slow down and watch streaming video instead of continuing to crank, while a slowpoke in the cubicle across town is agonizing over sub-grading neural foraminal stenoses and measuring nonactionable cysts, padding their report with at most borderline-helpful, exhaustive detail.

What is “fair” and how do we achieve it?

A small practice may recruit such that personalities mesh across all partners and democracy works without much effort. Everybody knows everybody. Everybody is accountable to everybody. Everybody puts in the work lest they be publicly shamed or ostracized or simply because it’s part of being on a team. If there is a productivity component, then ideally everybody is equally interested in putting up numbers and making lots of money. It explains why some small groups can be so successful.

Conversely, a larger practice may resort to relatively strict productivity controls and incentives because social dynamics play less of a role.

When you are creating a large machine full of cogs, what’s easiest to measure (and to some also most important) is how many widgets that machine can produce. Especially if quality is secondary—and clearly some outfits believe it is—it’s just so much easier, more trackable, and more profitable to incentivize volume.

And if your practice is designed for maximum profitability—doubly so if that practice requires that profitability in order to meet shareholder expectations or service large debt obligations—then it’s not hard to see how that becomes the dominant paradigm.

The Complications

Where things become more complicated is in medium and large independent practices and academics. These larger groups often used to be smaller groups, and their legacy culture may or may not have become diluted or strained with the growth and/or consolidation that many markets have seen over the past 15 years. Sharing the pie equally may have been an easy solution in earlier times but now increasingly becomes untenable in the setting of enlarging worklists, high volumes, delayed turnaround times, and difficulty recruiting.

Democracy may be desirable but that doesn’t make it easy.

You want a way to discourage loafing and shirking responsibilities, but you also don’t want to promote the negative behaviors that often arise from RVU-based performance incentives. One big one that many groups face is cherry-picking. The other is a push away from important practice-building but non-remunerative tasks. If you are being paid extra to produce more numbers, then why would you want to talk to a clinician on the phone if you could have read another scan during the same amount of time? Why would you want to read plain films or thyroid ultrasounds if there are screening mammograms or negative headache brain MRIs ripe for the taking? And—hardest to measure—quality.

The Solutions

There are ways to mitigate everything but no clear one-size-fits-all solution. There are trade-offs to all choices, and not all practices need complex systems to function. The practical reality is that, when pursued, these kinds of changes are hard: they require much thought and buy-in, almost invariably involve infighting, and are probably best solved via IT solutions that streamline workflows, prevent individually negative behavior, and potentially incorporate ways to reward all desired tasks—even those that don’t generate billable RVUs (e.g. automatic case assignment ± customized internal RVUs to better account for effort ± “work” RVUs for nonbillable tasks). As former Intel CEO Andy Grove said: “Not all problems have a technological answer, but when they do, that is the more lasting solution.”
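As a concrete (entirely invented) illustration of the internal-RVU idea: a group might credit plain films at 1.5× their billable RVU value so they’re worth picking up, assign a flat 0.3 “work” RVUs for a documented clinician phone call, and let automatic case assignment distribute the rest so nobody can cherry-pick around the system. The specific weights matter less than the principle: the credit system can be tuned to reward what the practice actually values rather than only what bills best.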

But it’s not easy, and it requires deliberate choices and strong solutions. An ideal practice doesn’t build itself.
