My internet friends over at Medality are having a big Physician’s Week Spring Sale through March 31 (trainee extra discount link here). Solid use of CME funds before the end of the year + an easy way to support this site.
Some really good follows on the Imaging Wire’s 2025 list of Top 40 Radiology Resources. I’ll happily accept the description of “excellent insights into the vagaries of being a working radiologist.”
In my article on using AutoHotkey for radiology, I describe a click-lock script I use to simulate holding down the left mouse button. This allows me to power-scroll by using a single keystroke (in my case, backslash) to toggle scrolling on/off instead of needing to hold the mouse in a death grip for hours a day (which is a great way to destroy your wrist):
; Toggle holding down the left mouse button (press backslash to start/stop)
\::
holding := not holding    ; flip the on/off state with each press
if (holding)
{
    Click Down            ; press and hold the left button
}
else
{
    Click Up              ; release it
}
Return
If you also happened to read my post on radiology equipment or the follow-up deeper dive on how I use the Contour Shuttle for radiology, you may also know that I really enjoy autoscrolling with the Shuttle’s outer dial: when I twist the dial, each “hour” on the clockface repeats the mouse scroll wheel multiple times per second, letting me scroll at varying speeds without needing to move at all. It takes some getting used to, but it’s awesome.
Not everyone has the Shuttle or the ability to install software on hospital computers, so I was thinking about how to recreate that without the wheel.
The following script was made—with a surprising amount of back and forth—using ChatGPT (I just kept telling it what errors it was producing, and it eventually figured it out). I include it here as a potentially helpful tool but mostly to inspire you to play with making your own things to solve your own needs. The LLMs available for free online now make this sort of thing comically easy compared to even just a couple of years ago.
The way this example works is by combining Alt + any number key (1-9) to scroll up and Ctrl + 1-9 to scroll down. The higher the number you press, the faster you scroll: Alt+1 scrolls slowly, and Alt+9 scrolls quickly. In reality, anyone using some variant of this would almost certainly want to change the hotkeys (perhaps using ZXC and ASD for slow, medium, and fast scrolling instead of the numbers; see the sketch after the script below). It would probably be best used with a small keypad, where you could pick a handful of your favorite speeds, assign them to some obscure key combinations, and map those to the keypad buttons.
Regardless, the point is that with a small amount of work, we can set up an off-hand alternative to jerking the mouse wheel back and forth incessantly. The more joints we spread these repetitive motions to, the better.
Enjoy:
#Persistent
#SingleInstance Force
SetBatchLines, -1

; Define scroll speeds (in milliseconds per scroll: 1000 ms = 1 tick/sec, 25 ms = 40 ticks/sec)
scrollSpeeds := [1000, 500, 200, 100, 67, 50, 40, 33, 25]

; Variables to track active scrolling
scrollUpActive := false
scrollDownActive := false

; Function to start scrolling up
StartScrollUp(speed) {
    global scrollUpActive
    scrollUpActive := true
    while (scrollUpActive) {
        Send {WheelUp}
        Sleep speed
    }
}

; Function to start scrolling down
StartScrollDown(speed) {
    global scrollDownActive
    scrollDownActive := true
    while (scrollDownActive) {
        Send {WheelDown}
        Sleep speed
    }
}

; Function to stop scrolling
StopScrolling() {
    global scrollUpActive, scrollDownActive
    scrollUpActive := false
    scrollDownActive := false
}

; Manually Define Hotkeys for Alt + 1-9 (Scroll Up)
~Alt & 1::StartScrollUp(scrollSpeeds[1])
~Alt & 2::StartScrollUp(scrollSpeeds[2])
~Alt & 3::StartScrollUp(scrollSpeeds[3])
~Alt & 4::StartScrollUp(scrollSpeeds[4])
~Alt & 5::StartScrollUp(scrollSpeeds[5])
~Alt & 6::StartScrollUp(scrollSpeeds[6])
~Alt & 7::StartScrollUp(scrollSpeeds[7])
~Alt & 8::StartScrollUp(scrollSpeeds[8])
~Alt & 9::StartScrollUp(scrollSpeeds[9])

; Manually Define Hotkeys for Ctrl + 1-9 (Scroll Down)
~Ctrl & 1::StartScrollDown(scrollSpeeds[1])
~Ctrl & 2::StartScrollDown(scrollSpeeds[2])
~Ctrl & 3::StartScrollDown(scrollSpeeds[3])
~Ctrl & 4::StartScrollDown(scrollSpeeds[4])
~Ctrl & 5::StartScrollDown(scrollSpeeds[5])
~Ctrl & 6::StartScrollDown(scrollSpeeds[6])
~Ctrl & 7::StartScrollDown(scrollSpeeds[7])
~Ctrl & 8::StartScrollDown(scrollSpeeds[8])
~Ctrl & 9::StartScrollDown(scrollSpeeds[9])

; Ensure scrolling stops when releasing Alt or Ctrl
~Alt Up::
~Ctrl Up::
StopScrolling()
return
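And since I mentioned remapping above, here’s a minimal (untested) sketch of the ZXC/ASD letter-key variation, reusing the scrollSpeeds array and functions from the script above; the specific speed picks are arbitrary:
; Hold Z/X/C to scroll up (slow/medium/fast) and A/S/D to scroll down
; Note: bare letter hotkeys hijack normal typing, so you'd want to wrap
; these in #IfWinActive for your PACS window or hide them behind a modifier
z::StartScrollUp(scrollSpeeds[1])
x::StartScrollUp(scrollSpeeds[5])
c::StartScrollUp(scrollSpeeds[9])
a::StartScrollDown(scrollSpeeds[1])
s::StartScrollDown(scrollSpeeds[5])
d::StartScrollDown(scrollSpeeds[9])
; Stop when any of the keys is released
z Up::
x Up::
c Up::
a Up::
s Up::
d Up::
StopScrolling()
return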
Note that the main script as copy/pasted doesn’t play nicely with my scripts in the other post because I personally use the Ctrl key in my macros to control Powerscribe, but changing things up is as easy as swapping a letter or two.
I am not an expert here, and I guarantee there are better ways to achieve this functionality, but stuff like this is a great example of what’s possible for a novice with a little vibe coding enabled by current LLMs.
This month, at the request of the Society of Pediatric Radiology, the ABR announced the addition of pediatric radiology to the “do a fellowship during residency” pathway first pioneered by nuclear medicine several years ago. One surmises this new pathway is not being offered because pediatric radiology is easier or requires less training and expertise than any other type of radiology but merely reflects the reality that we need radiologists with skills in pediatric radiology just as we do in nuclear medicine.
Obviously, there are radiologists in the workforce, especially in academia, practicing nearly 100% nuclear medicine or nearly 100% pediatric radiology, but we need more people with these skills than there are physicians willing to set aside a year of their life after training to acquire them—especially when those skills aren’t always as marketable as something currently in demand like breast imaging or even as reliably employable as body imaging or neuroradiology.
So while these intra-residency pathways are a reasonable measure to ensure an adequate supply of radiologists with desirable skills, they also create an inconsistency problem: there is absolutely no reason why those two fields should be different from any other diagnostic radiology subspecialty other than the supply and demand issues within the broader radiology community (and perhaps especially among those actively volunteering for the American Board of Radiology or those having the ear of people who do).
My point:
If you can now subspecialize early during residency and sit for the pediatric subspecialty examination, then there is no justifiable reason why you shouldn’t be able to do the same thing for neuroradiology, which is the other diagnostic subspecialty with a CAQ (Certificate of Added Qualification) exam. (Please leave aside for the moment the reality that these tests are not meaningful assessments and that there are plenty of terrible radiologists who manage to hold various ABR certificates.)
Frankly, this would be even more true for non-ACGME fellowships like body or MSK, but those fellowships don’t actually have any associated tests that place barriers to qualification. As in, the ABR doles out only certain credentials that let you say things like, “Look at me! I’m a real neuroradiologist!” They don’t do that for, say, breast imaging. The ABR doesn’t have any power over deciding how much time it takes for you to be officially “breast-trained” or “body-trained” or anything else—that’s the market (because there is no such officiality). If we all wanted to agree that 9 months of breast imaging as a senior resident is good enough to be a mammo fellowship equivalent, we could do that. Various imaging societies would certainly have an opinion, but no one can stop us. That’s why some institutions already offer various hybrid combo fellowships. Starting right now, residencies could offer their own “Mammo Certificates” documenting that a trainee has truly obtained specific breast skillsets and interpreted some even higher minimum number of exams, if they so chose. Those certificates would carry whatever weight we as a field choose to ascribe to them. But the ABR subspecialties are in the hands of the ABR, and—I suspect—the ABR sets the tone for the whole field.
Now, perhaps we want to argue that opening up early subspecialization for other fields (e.g., a neuro pathway) would be counterproductive for the presumed purpose of encouraging people to dedicate more time during residency to pediatric imaging or nuclear medicine. That sort of early focus would just allow more people in other subfields to forgo an extra year of fellowship rather than steering them toward those two subspecialties (facilitating shorter training generally is presumably not the ABR’s goal, though ironically, with the current radiologist shortage, many have advocated for just this type of streamlining).
I would argue that this is not an intellectually tenable position for the American Board of Radiology to take, in the sense that the ABR is not a central-planning puppeteer tweaking the strings to direct radiologists to where they are most needed. The ABR’s stated mission is “to certify that our diplomates demonstrate the requisite knowledge, skill, and understanding of their disciplines to the benefit of patients.” If a trainee can now sit for an ABR certification thanks to a given number of months of subspecialty exposure during residency, then it’s hard to understand how that option can be limited to the current two subspecialties and still be explained by that mission. The ABR is not the steward of the job market, and such certification changes probably shouldn’t depend on specific external requests from specific stakeholders. Why should the ABR wait for a request from the ASNR? None of these societies speak for radiology any more than the ABR itself does.
Now, to be clear, I’m not arguing here that fellowships aren’t important or that most mini-fellowships are as demanding and educational as most regular fellowships; nor am I making claims about any actual real-world implications. Unfortunately, there is no canonical “fellowship” to compare to or any actual criteria we use to determine if training is adequate, let alone good. We have long in medicine just used training time and occasionally training volume combined with a multiple choice test or two to pretend that someone has real-world skills. It’s proxy turtles all the way down.
Residency and fellowship training composition and quality are highly variable, but the various argument permutations that immediately popped into your mind are actually irrelevant. You are absolutely free to think that these pathways shouldn’t exist, and you are equally free to believe that your subspecialty really does require a magical year after graduation.
These pathways already exist; I’m just here to point out the hypocrisy.
Once you say someone can specialize early mostly by completing their senior electives in a single field and then have that qualify as fellowship-equivalent subspecialty training, then logically that should be true regardless of diagnostic subspecialty choice.
My first Backtable episode about the rad shortage, the job market, and PE in radiology was back in 2023.
I’m back on Backtable this week with a wide-ranging conversation about the job market, teleradiology, updates in the world of radiology private equity, etc etc. Always fun to chat with Ally and Mike, they’re awesome. Though, for the record, while I appreciate their kind introduction, I do not condone and categorically reject any overly charitable label that contains or alludes to the phrase “thought leader.”
Some articles for the show notes that are relevant to our discussion:
- The PE lies that inspired the creation of a new job board
- More reasons why I made Independent Radiology specifically to support private practice
- All about the new Aetna vs RP lawsuit (which itself also includes links to the posts about the UHC vs RP lawsuit that settled last year)
//
Hospital Stipends vs Higher Rates
During the show, Ally asked about groups getting paid more by hospitals via stipends (i.e. call-pay/service fees/whatever-you-want-to-call-it) versus a guaranteed per-RVU rate (the latter is often direct pay per study with the hospital doing their own billing but can also be a bump after billing to an agreed-on rate to account for unpaid care/payor mix/shortfall from market rates).
I suggested that for many smaller groups approaching these conversations with hospitals for the first time, a stipend is probably easier. It’s predictable, there are easy precedents the hospital understands (e.g. other call pay), and it doesn’t usually require seismic contract or billing changes.
The reality is, I think, of course, a bit more nuanced. It depends on whether the pay increase is added on to an existing contract and long-standing relationship or part of a new contract negotiation, the size of the group, and the size/volumes of the hospital.
To reiterate, a call stipend or radiology service fee may be much more palatable to some hospitals when added to a preexisting contract, as it doesn’t require changing anything else and just falls in line with the familiar idea of call pay. For a group, it also has the benefit of providing a floor such that even if volumes aren’t high, the group still gets paid for being willing to cover after hours.
However, a per-RVU rate will likely make much more sense for a new hospital coverage paradigm in our current era of radiology contract musical chairs, where a group is guaranteed that each case is paid at a good rate and doesn’t need to concern itself with billing reimbursement, bad debt, and other headaches. You read a case, you get paid a predictable rate. This may be especially attractive when a group’s payor contracts are not strong, and it protects against downward reimbursement pressure. It’s also what a lot of the recent teleradiology contracts have been, which doubly makes sense given that those groups are not local, may not have existing local payor contracts, and are often aggregating multiple hospitals into one feed and spreading the work around.
Some hospitals may also be happier to pay fractionally for the work they’re actually getting than a separate fee for access (but I suspect they are most happy just spending less). Pay per RVU could still be a problem if there is a bad casemix with large numbers of plain films etc. I haven’t personally heard of many hospitals paying per-case on a modality basis, which is something relatively common in the outpatient world, but that doesn’t mean it isn’t happening.
In some ways, you can consider a service fee a floor that guarantees a certain level of income despite variable volumes, and a high per-RVU rate a guarantee of fair reimbursement in the setting of high/growing volumes. They also aren’t mutually exclusive.
The reality is that money is fungible, so what really matters for a group’s bottom line is more the actual pay itself than the exact mechanism. It’s not hard to look at your current RVUs and average reimbursement per hour or shift, add in a proposed stipend, and then do the simple math to figure out the effective pay per RVU. Yes, getting paid more per RVU directly is more straightforward. It scales easily with growing volumes, whereas a stipend may need to be increased if more staffing is needed in the future. Again, a small practice trying to remain competitive and putting one person on call at a time is a different beast than a large conglomerate with a large night team that is staffing based on an aggregate of multiple hospitals.
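To make that concrete with made-up numbers: a group reading 20,000 after-hours RVUs per year at an average collection of $45/RVU that negotiates a $300,000 annual stipend is effectively earning (20,000 × $45 + $300,000) ÷ 20,000 = $60 per RVU. Running that same math on any proposal puts a stipend and a direct per-RVU offer in the same units for comparison.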
Each one is optimized for a different kind of hospital, a different kind of relationship, and a different kind of future. For groups going to their hospital and negotiating, the real best method is whatever the hospital is willing to do and still gets you the reimbursement you need for recruitment and retention. A credible threat of walking is the best leverage.
The most salient point is that groups can no longer provide services at a loss and still expect to be able to pay competitively in the market.
This is a brief adjunct to my post on using AutoHotkey in Radiology (which basically every radiologist should be doing, by the way). I include it here not because I expect many people to run into the same problem I did but rather because it’s a good example of the not-so-challenging troubleshooting that we shouldn’t be scared to do in our quest for a better workflow. I’m a novice and that’s okay! We can still do cool stuff!
In that post, I mentioned an example script I made to streamline launching patient charts in Epic from PACS at home since our automatic integration doesn’t work remotely.
One thing I didn’t describe in that post is an annoying quirk in activating Epic because it runs through Citrix. Since Citrix is weird, and there are presumably multiple servers that can run Epic, the window title of our Epic session actually changes with each login. Therefore, the usual static name-matching technique we use to activate Powerscribe, Chrome, or other typical apps doesn’t work.
In our system, Epic always has a title like “ecpprd2/prdapp01” or “ecpprd3/prdapp04”—but the numbers always shift around.
For a while, I used a workaround:
WinActivate, ahk_exe WFICA32.EXE
…which is the name of the Epic/Citrix .exe file running on my PC. As long as only one Citrix application was open at the time, it worked (I had to make sure to close an MModal application that auto-launched with it, but otherwise it was fine). Recently, my hospital started using some useless AI tool that cannot be closed, which broke my script.
The workaround one of my colleagues figured out is to change the AHK TitleMatchMode of that specific hotkey to recognize “regular expressions” (a “RegEx” is a sequence of characters that specifies a pattern of text to match).
SetTitleMatchMode, RegEx
Then we can use WinActivate with a few modifiers to recognize an unchanging portion of the window title. In our example above, where the title always begins with ecpprd, we can use the following to select the Epic window:
WinActivate, i)^ecpprd
In this example, the “i” modifier allows case-insensitive matching, and the caret (^) anchors the pattern to the beginning of the window title. You can read more about regular expressions in AHK here.
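Putting it together in a hotkey looks something like this. It’s a minimal sketch: the F2 trigger is arbitrary, and the ecpprd prefix is just what our titles happen to use; yours will differ.
; Hypothetical hotkey: press F2 to activate the Epic/Citrix window
F2::
SetTitleMatchMode, RegEx      ; switch this hotkey's thread to RegEx title matching
WinActivate, i)^ecpprd        ; activate any window whose title starts with "ecpprd"
return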
In reality, if I had just explained my problem to any of the popular LLMs, I’m confident they would have given me the answer. They absolutely excel at this sort of thing. The rapidly approaching agentic era will allow for some very easy, very powerful scripting even if commercial products lag behind.
The Good Jobber and the Critical Curmudgeon
Most radiology resident evaluations are a one-way trip on the “keep reading” express. Maybe, in harsher climates, “read more,” which is just a coded way of saying I wish you were better and more knowledgeable, with “reading” used as a stand-in for “learn more useful stuff, please.”
Many attendings are nice but not kind. We don’t want to hurt anyone’s feelings, so we don’t share specific critical feedback other than in cursory, generic, or essentially universal ways.
When more substantive critical/negative feedback is given, it often concerns idiosyncratic pet peeves (i.e., not generalizable or particularly helpful) or amounts to a list of mistakes (without direction on how to fix them). Because most of us are cowards, these shortcomings often land as a total gut punch of a surprise.
Feedback as a first-year radiology resident is often more a measure of compliance than growth.
But even when helpful, most rotation evaluations feel more like a grade/assessment and less like a pathway forward.
Ideally, you’d get feedback continuously. You don’t want generic “good job” feedback or “you suck” feedback; neither is very helpful except to tell you that things are generally working or generally not working, and that won’t guide action except in the broadest sense. In an ideal world, in-person or at least specific one-on-one feedback would simply be great and actionable. But we don’t live in an ideal world, and most feedback you receive will be generically positive or negative in ways that mostly reflect the bias of the person providing it and their personal preferences.
(A classic useless example is a female resident hearing that she should “be more confident” from older male attendings.)
So, in all likelihood, you will feel that your evaluations fall into one of two camps: The Good Jobber (gee, thanks) and the Critical Curmudgeon (okay, jerkface). Neither is all that helpful. Chances are, it’s going to be up to you to get the feedback you want/need.
The Painful Request
There are situations where—despite the unpleasant awkwardness—it is in the learner’s best interest to ask for feedback. You have a greater chance of receiving helpful, specific feedback if you ask for specifics. So don’t just say, “Do you have any feedback for me? How can I get better?” Rather, consider asking about specific ways to improve: How can I improve the conciseness of my reports? How is my organization? Have you noticed any instances where my reporting style may be unclear to clinicians? Is there a certain kind of perceptual mistake that you’ve seen me make multiple times that I should incorporate into my search pattern or my proofreading checklist to do better quality work? A direct question is more likely to get a directly helpful response. Does that sound tool-ish when written out this way? You bet it does. But surgical requests are more likely to generate meaningful responses.
At any given time, you may be working on a specific part of your approach to radiology. You may be working on developing your first-year search pattern. You may be working on the library of if-then pattern extenders that help you address pertinent disease features or whittle down a differential diagnosis. You may be working on your mental checklist so that you do not omit parts of the report. You may be working on trying to hone down and describe findings that matter and leaving out truly extraneous detail. You may be working on making reports that are as short as possible while containing the information that will help decide patient management.
When you ask for feedback or when people give you generic feedback, consider tailoring your request or your follow-up questions to get advice and feedback on the issues that you’re working on actively right now. We simply can’t actively work on every aspect of our jobs every day. That’s not how deliberate practice works. We all get better and more refined in our routines over time organically. Your process, whether it’s optimal or not, will become ingrained through repetition.
But that’s not how experts actually improve to the next level of effectiveness. They do so in a piecemeal fashion. So if you want to work on that process and not just solidify it through the inevitable accrual of time, then you may need some guidance on where to deploy that extra thoughtful work. If you aren’t sure what to work on, consider asking for your single weakest area in a specific context: What sort of finding am I missing? What sort of error am I making in my reports? What is the most irritating thing you find yourself editing when finalizing my work?
Reception & Acceptance
1. Listen for Patterns
We can be fragile: we can take feedback too personally and miss opportunities to improve.
We can be stubborn: we get used to hearing the same things and start internalizing them, then start ignoring what others say and also miss opportunities to improve.
Patterns are important: Don’t let a single bit of negative feedback crush your self-worth. But the more often you hear something, the more seriously you should take it. Even when the feedback feels isolated, keep in mind that most feedback you get will fall into the generic nice-but-not-kind good-jobber variety just by dint of attending personality and not your performance.
2. Experts Are Often Wrong
Never forget that not all feedback is good feedback, and many experts do not understand how they arrived at their expertise. They may not know which practice methods would be most efficient to achieve mastery even for themselves let alone for any individual learner. They are trying to help, but meta-learning is a challenge.
Most people do not really know or understand how they learn. You might know how you like to learn, but how you like to learn isn’t necessarily the same as how you learn the best.
Commentary on your deficiencies is likely spot on, but proposed solutions for how you should fix them are a different story.
3. Stop Wrong-Spotting
In defense of our ego, we often look for inaccuracies that allow us to psychologically reject the entire package. Instead of hunting for reasons the person is wrong (even if they are wrong in some details) in order to create a straw man, look for the parts of the feedback that are helpful or potentially true. The goal isn’t to be right; it’s to be better.
The Most Important Feedback is the Case-Loop
It’s not just what you’re doing, but how you’re doing it. It’s a difference in perspective. The way a novice and an experienced reader approach an exam is not the same, and the goal of the learning process is to move efficiently along the path from learner to expert.
One of the amazing baked-in capabilities of radiology residency training is that previewing and dictating a case and then reading out gives you 1) your attempt, 2) a fallible but more experienced person’s attempt, and 3) the chance to see the difference.
Obviously, you cannot see directly into the mind of your attending, and even how they verbalize their thought process or describe their search pattern is not necessarily the same as what they’re actually doing. Our subjective awareness of how we think is not perfect. We are in part black boxes even unto ourselves. At the risk of getting too far into metaphysics, we don’t think we think how we think.
Nonetheless, the cases you read are your most plentiful source of feedback. It’s not just about missing a certain finding or whether you were right or wrong. It’s about where you are now and seeing the next steps to getting where you want to be.
Feedback is not just what you get at the end of a rotation; it’s the difference between what you did and what—after the fact—you wish you’d done.
As of this week, Independent Radiology features 125 private practices, which gives us an interesting look at a slice of the radiology job market. Here is the breakdown of subspecialty openings today:
- Mammo: 79% (99)
- Body: 78% (98)
- General: 71% (89)
- Neuro: 66% (83)
- MSK: 54% (67)
- VIR: 43% (54)
- Chest/Cardiovascular: 37% (46)
- NM/PET: 34% (42)
- Peds: 26% (32)
- Neuro IR: 6% (7)
Off-hours positions are also plentiful with 39% (49) hiring swing shifts and 35% (44) hiring overnight radiologists. I suspect that those offerings reflect not just specific group needs but also an attempt to tap into the available remote workforce and meet market conditions. (That reminds me, my group has one opening for each.)
Overall, 67% (84) of groups have remote positions of some variety, and 30% (38) are willing to hire contractors in some fashion.
From “The how we need now: A capacity agenda for 2025 and beyond,” published by the Niskanen Center think tank:
We need a new operating model for government if we are to restore our capacity to achieve our policy goals. This model must close the open loop we described in Part 3: a one-way system from law- and policy-making to implementation to real world outcomes that offers little space for learning and adjustment along the way. We can no longer rely on media coverage and elections, blunt tools that tend to be saved only for the most catastrophic errors, as the main corrective mechanisms.
Closing the loop means that we must apply test-and-learn approaches. This means conducting multiple small-scale experiments at the boundaries of policy and delivery — and doing this permanently, in pursuit of a policy intent or outcome. Incremental changes are scaled up once there is good evidence they work in reality. Test-and-learn does not mean simply running lots of pilots. A pilot implies starting with a phase for learning, which then ends as you move into “roll out.” Responsiveness is an embedded attribute, not a phase on a timeline. Closing the loop means the learning doesn’t stop at an arbitrary moment.
We briefly touched on this paper before and the concept of the Cascade of Rigidity “that occurs when well-intentioned laws and regulations become increasingly inflexible as they step down through bureaucratic hierarchies.” They discuss a healthcare-related Open Loop error with MACRA:
MACRA (Medicare Access and CHIP Reauthorization Act) was designed to pay doctors more for higher-quality care. But an implementation team at the Centers for Medicare and Medicaid Services (CMS) knew that doctors were already frustrated with the burdensome and confusing ways they had to report their data under the existing program, and many were so concerned that the new system would be just as bad that they were threatening to stop taking Medicare patients. Thus, a law designed to improve the quality of care threatened to degrade it, especially for patients in rural areas who relied on the small practices that were most affected.
Recognizing how challenging the administrative requirements could be for practices with fewer resources and limited Medicare revenue, one provision in the law exempted doctors who treated a minimal number of Medicare patients. But CMS’s initial interpretation of this provision would have required all providers to collect and submit a full year’s worth of data in order to demonstrate they fell below the exemption threshold. This meant exempt doctors would still have to comply with all the program’s requirements, including updating their systems and reporting data, only to be excused from all this at a later date. It’s not hard to see why this approach, while technically accurate, would have worked against the intent of lawmakers. Those doctors would have left the program, hurting the very patients the law meant to help.
Another provision allowed smaller practices to form “virtual groups” to gain advantages enjoyed by larger practices. Staff interpreted this provision as a mandate to create a “Facebook for Doctors,” a platform for doctors to find and connect with each other. A staffer on loan from the United States Digital Service, a part of the White House, doubted that Congress intended for CMS to create a social media platform, especially considering the limited time and resources available. She took the almost unheard of step of consulting the House Office of the Legislative Counsel, and confirmed that Congress simply wanted to make it easier for small practices to report together and had no intention of mandating a “Facebook for Doctors.”
Under more common circumstances, these and other overly literal interpretations of the law would have resulted in a burdensome, unwieldy, and ultimately unsuccessful implementation. Doctors would have simply opted out, leaving patients with fewer options, and some in rural areas with none.
Thanks to nimble actions by people at CMS and USDS to ensure that Congressional intent was realized rather than over-relying on literal interpretations, this outcome was avoided. But conflicts like these all too rarely resolve in favor of common sense. Agency staff are commonly taught to treat legal language as literal operating instructions, as if a programmer had written code and they were the computer executing that code. But as any programmer will tell you, code rarely works as intended on the first try. It works after trying one approach, testing it, adjusting, and continuing that cycle over and over again. That cycle of adjustment is very difficult to engineer within policy implementation today.
We run on an open loop, in which implementation teams neither test their programs in the real world nor loop back to the source for adjustments. We need to build the affordances for them to do both, thus closing the loop. Otherwise, the code will more often than not run exactly as Congress wrote it, even if that doesn’t result in what Congress wanted.
This is emblematic of the problem and also insane: a government-created Facebook alternative for doctors for the express purpose of dealing with a procedural nightmare created by well-intentioned but completely untested, unproven, almost certainly unhelpful, and very gameable quality goals.
I’ve been advising a radiology app startup called LnQ. I think of it like QGenda for radiology moonlighting. It can link up with your practice schedule and HL7 feed and helps groups/hospitals/etc leverage the excess capacity in their own workforce: a practice can activate LnQ when there is extra work to do, automatically notify the people who aren’t currently working that additional work is available and how much there is, and then allow those people to do that work and get paid quickly for doing it without the multiple manual steps those processes usually require. It was first developed by an independent private practice that was struggling with its lists; since implementation, the group has been able not only to clear the lists every day but also to go after some lucrative contracts knowing they had more bandwidth than they’d initially thought.
I think it’s neat, and I think it fulfills a need that many practices have. On top of its purpose of facilitating internal moonlighting, LnQ is also building a network of independent contractor radiologists on the platform so that it can also be used to directly connect individual rads and groups without a teleradiology company or locums middleman adding friction and heavy costs. A practice can then notify its ICs when there is work available and at what rate. One of the issues I’ve discussed with practices multiple times since starting Independent Radiology is that many of them could use an IC here or there but not with enough frequency and volume to make the ongoing hassle worth it for either party. LnQ is taking care of some of the initial vetting, and multiple practices on the platform will mean that everyone has a better chance of cobbling together the excess work and excess labor in one place to help everyone get the patient care done (of course, credentialing will still suck until someone fixes that broken system).
If you are a group who wants to hear more or an individual rad looking for contractor work, you can see more here (the direct physician interest form is here).
//
When I first joined my practice in 2018, they’d already realized the importance of leveraging their workforce’s extra capacity so that when volumes were high, excess work could go to those with the energy and time to do it. Back then, however, we used to submit our after-hours cases as an Excel file attachment sent to payroll. It was tedious and prone to mistakes.
Flash forward several years, and we have a full-time data analytics and computer dude who has built out workflows and internal apps to facilitate submitting reimbursement for expenses, paying for tumor boards and conferences, and essentially automating most of the tracking for our internal moonlighting from our worklist (Clario) database. Our moonlighting is per-click, and we know exactly which cases are being submitted for “after hours.” Our process is easy and fully transparent. We can run whatever analytics we want on it. For marking qualifying work, we’ve done things like flag the eligible cases in Clario (After Hours) or just used a list volume watermark, but the underlying principle—when it comes to asking for help or providing help—is that friction is the enemy.
The reality is that with growing volumes and this volatile, tight job market, recruitment isn’t always enough. And while, on the whole, the radiologist workforce is aging and burnout runs deep, we need to give those with some juice left to squeeze more options, so that those with some bandwidth to trade time for money have the chance to do so in as many ways as possible.
Many practices have internal sales of call shifts or various swing shifts that are offered up as moonlighting, and if that’s enough to make everyone happy and get the work done? Great, you’re done. But even then…how much are those shifts paid? Does the rate get sweetened if no one wants to do them? How is that extra work tracked? Who does your payroll and how often do they mess up? How much time and effort does all of that take to coordinate? If no one is biting, can you offer up that shift to a contractor?
And, if extra shifts aren’t sufficient or desirable, that’s when ad hoc moonlighting on an hourly, RVU, or flat per-modality basis can become critical. A rad might have time for an extra scan here or there after the kids go to bed or be willing to work for an extra hour before or after their shift on occasion to avoid traffic but not be willing to commit to selling a vacation day or taking a complete extra shift or call weekend.
Taking it a step further, there are so many ways practices are structured to get the work done. Yes, a big practice with all work combined in one massive worklist and lots of overlapping shifts can make certain kinds of coverage very straightforward, but many practices have different kinds of work and multiple different systems to get it done. Is there a way you can choose to decompress a terrible call shift by asking others for a smidge of help?
What many practices need is a way to tell people who aren’t already scheduled to be working that there is work available to do, what/how much work there is (an hour? 7 RVUs?), and how much that work will pay. Maybe that payment amount changes or maybe it’s fixed. One thing you definitely don’t want is an uneven burden of easy or hard shifts disproportionately falling on certain individuals with no way to make things fair. What do you do, for example, in a practice with multiple lists if some service lines are overly busy and those rads are stuck staying late to clear the hospital when other folks could sometimes just hop on and in a few minutes clear the list so everyone can go home on time?
Also, for burnout mitigation: maybe someone who hates taking call wants to offer up some of their call pay to get some help, or maybe it’s the practice just trying to get the work done without garnishing time off when the number of warm bodies on the schedule isn’t enough. Sometimes you can be a little more flexible on PTO or backup coverage if there’s an easy way to spread the work across willing people PRN.
Our group invested time and money into making a custom in-house solution that works for our practice (and, unsurprisingly, it doesn’t do all the things a dedicated company like LnQ has made possible; as a startup, they can also easily add features as groups request them). Not all groups can or should bother creating a complicated tech solution to leverage their own workforce, even if they do ultimately need to do so.
Part of retention is meeting people where they are, and internal moonlighting is often one of those measures that can make both the slow vs fast and the lifestyle vs hungry readers happy. What more groups need to make the enterprise work is a system that makes it easy to tell the people who could potentially work extra when extra work is available, how much and what kind of work it is, and then allows those people to do that work and get paid quickly for doing it without tracking issues and other hassles.
We need more happy rads.