Giving advice and selling can’t be the same thing.
Nassim Taleb, pithily summarizing a lot of problems. For example, the core problem of much of the financial planning industry.
Firstly, it should almost go without saying, but: you can do this.
I’d also like to acknowledge that nothing below is particularly noteworthy or novel advice. The Core Exam is like the other high-stakes multiple choice exams you’ve taken except for the fact that it has more pictures.
And, of course, the question of how to pass the Core Exam after a failure is mostly the same as asking how to pass it in the first place. Before we go further: I published a series of articles grouped here as a “Guide to the Core Exam” that lays out a lot of helpful information. There are some out-of-date passages (about failing due to physics and about in-person exam details), but the core information is unchanged.
As you no doubt noticed during your first attempt(s), the questions on the ABR exams are somewhat arbitrary in what they include, so part of your performance is luck of the draw. While the difficulty is curated, the specific content breakdown is not a perfect cross section of possible topics. You can have the same diagnosis multiple times but then zero questions on broad swaths of important material. How your knowledge gaps line up can make a big difference.
Your performance on a given day is a product of your variable skill (energy, focus, attention, etc) and the exact exam you get. All things being equal, that means that a borderline failure is also truly a near pass.
Look at the two breakdowns: organ (breast, cardiovascular, GI, GU, MSK, neuro, peds, and thoracic) and modality (CT, IR, MR, Nucs, radiography, US, NIS, and Physics). See whether you have outliers, and plan to shore up those shortfalls with extra targeted review.
At the same time, do not neglect your strengths entirely. Backsliding is counterproductive.
The nature of spaced repetition is that you need more reps more frequently for new knowledge and problem areas and fewer reps spaced further apart for your strengths—but you still need reps across the board.
Further Reading: Learning & The Transfer Problem
What exactly was your study method and plan for your initial attempt(s)?
There are a couple of maladaptive tendencies common amongst medical students that can persist into residency:
When it comes to doing practice questions, you also need to zoom out and look for trends:
More than just stronger/weaker subspecialty performance, are there any themes to why you get questions wrong? Is there a time component? Is it that you often don’t see the finding in the picture? That you simply don’t know the diagnosis? Or that you’re being fooled by differential considerations and need to focus on key features that distinguish between plausible alternatives? Is it a confidence issue and you’re overthinking it, getting spooked by questions that seem too easy? If you change answers, are you more likely to change from wrong to right or right to wrong? (I think most people overthink it and change for the worse).
If there’s a pattern, it can be the key to unlocking performance.
Further Reading: Dealing with Test Anxiety and Demoralization
First: Horses > Zebras.
In general, the biggest bang for your buck is still going to be common diagnoses (including weird looks of common things) and normal variants over esoterica. Rare things arise most when they are absolute Aunt Minnies that you can tell at a glance (hence the need for lots of image-based questions).
On a related note, if you never saw them, the ancient free official ABR practice test from 2014 is available on the Wayback machine here.
Also worth mentioning: NIS is a bunch of random material. Many people can read the manual a couple of times and get the job done here, but the reality is that these need to mostly be easy points. If you don’t retain pseudoscientific business jargon naturally, then don’t shirk the review here. The NIS App, for example, is well-reviewed, and there is an Anki deck as well.
You can use the ACR DXIT/in-service Anki deck for a large number of good free questions. You could also use one of the massive Core Radiology decks. But for the second round of studying after a failure, making quick cards (with your phone’s camera or with screenshots) of every question you guess on or get wrong from whatever source you’re using and incorporating those into repeated longitudinal review may be the highest yield.
In Windows, for example, Windows+Shift+S opens up a quick adjustable screenshot reticle that will copy that portion of your screen to the clipboard.
On Mac, the adjustable screenshot shortcut is Shift+Command+4, which automatically saves to desktop. To save to your clipboard, add Control, so Ctrl+Shift+Command+4.
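For Windows users who already run AutoHotkey (more on AutoHotkey below), you could even collapse that chord into a single key while churning through questions. A minimal sketch, with F8 as an arbitrary, hypothetical choice of trigger:

; Sketch: remap one key to the Windows snipping shortcut (Win+Shift+S) for quick card-making
F8::Send, #+s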
Passing is almost certainly a matter of doing more high-quality reps while not over-focusing on weaknesses such that you backslide on your relative strengths. The Core Exam is arbitrary enough that some of your high and low-performance areas may not be as meaningful as you think, so you need to continue broad reps in addition to the extra targeted review.
Once you can emotionally recover from the setback and just get back to it, it’s going to work out.
Further Reading: Focused Nonchalance
One thing our recent discussions of the nationwide shortage of radiologists didn’t include (in addition to a solution) is how the shortage has impacted the culture of radiology:
One of the concerning features of the current staffing shortage is the desperation with which many universities/hospitals/groups are recruiting new talent. When the market is tight and groups are well-staffed, groups get to be picky about cultural fit. Groups prioritizing compensation or efficiency can recruit fast radiologists or those with a broad skillset and flexible attitude. Groups that prioritize quality, academic productivity, or relationship-building can at least attempt to select for proxy features. Groups can grow in ways that align with their culture and mission.
But when there is far too much work to do and not enough people to do it, priorities shift to things like having a pulse and an active medical license, hopefully without board complaints or a felony record.
In Jim Collins’ Good to Great, there is a whole chapter dedicated to hiring. He argues that the key to creating a great business is not the mission or values, a charismatic leader, technology, or a clever strategic plan. The key foundation on which all other components rest is having the right people.
Sabotaging culture to generate revenues or get the work done may feel like a necessary choice in the short term (it also may be unavoidable when the alternative is operational insolvency), but it also has long-term consequences. Culture isn’t something you can simply proclaim; it is something that arises organically from good people doing things they believe in. As Collins says, “The best people don’t need to be managed.”
One of the ways that cultural breakdowns manifest in radiology practices is the conflict between fast readers and practice builders (or, in academics: gold-star earners vs worker bees). Obviously, there are efficient radiologists who can do everything well, churning out an incredible volume of high-quality reports, and there are lazy people who do a bad job producing a relative molehill. Real life is a continuum, but the fake dichotomy is helpful for discussion.
I’m not going to pretend that one is good and the other bad; each of us sits somewhere along that spectrum rather than at a caricatured extreme. The problem is that both extremes are real approaches and that good people in each camp become frustrated when the culture of the group does not align with their personal preferences. I see this discussed online all the time. Fast readers bemoan the freeloaders who are being paid the same or a similar amount for “less work.” But practices also rely on good citizens to get the important but non-remunerative work done. As an organization scales—and many groups have grown significantly over the past decade—reconciling these competing visions for an ideal radiology practice can seem impossible.
From a practice competition standpoint, it’s easy for a group to fall into a no man’s land where the compensation plan doesn’t reward speed enough for the high-productivity readers to be happy or rewards speed too much for the less “productive” members who get bogged down in the most complex cases, want more time to produce helpfully detailed reports, speak to clinicians, answer technologist questions, or teach residents. This can be compounded to disastrous effect by the cherry-picking that ravages some practices utilizing a productivity model, especially those that do not enforce list hygiene through automatic case assignment or customized internal work units to balance case desirability. If you log into a list and it’s full of plain films, thyroid ultrasounds, and chest CTs, you are logging into an environment where this hasn’t been figured out yet.
We touched on this previously in quality, speed, and “productivity”—and I didn’t have a magic bullet in that discussion either. These are real problems, and if there were a universal easy solution, everyone would be doing it. My feeling, as concluded in that brief article, is that the table stakes in the future are to automate case assignment ± customized internal RVUs to better account for effort ± account for “work” RVUs for nonbillable tasks (but bean counting every single thing you do has its own very real negative consequences). The quality/speed tradeoff is inherent to radiology work, but a productivity model that doesn’t take some variation of this into account is too ripe for cheating and gamesmanship.
My argument with the first two sections of this post is that compromising on hiring and being passive with internal work divisions and the productivity question are a one-two punch. I increasingly believe that places that don’t figure this out become revolving doors. At that point, recruitment is purely mercenary based on measurables. There are people who are willing to work that way, but long term, I don’t believe that leads to satisfaction or stability.
Culture happens whether we want it to or not, and creating a job that people enjoy and are loyal to doesn’t happen by accident. We are in a period of increased volatility in the radiology workforce regardless of what a practice does, but any job can become more stable if it feels like a meaningful career.
In this previous post about breast imaging, we briefly touched on the soon-to-be-growing-and-maybe-even-critical problem of automation bias in radiology caused by the growing use of AI.
This study evaluating AI-assisted detection of cerebral aneurysms had similar findings:
Results
False-positive AI results led to significantly higher suspicion of aneurysm findings (p = 0.01). Inexperienced readers further recommended significantly more intense follow-up examinations when presented with false-positive AI findings (p = 0.005). Reading times were significantly shorter with AI assistance in inexperienced (164.1 vs 228.2 s; p < 0.001), moderately experienced (126.2 vs 156.5 s; p < 0.009), and very experienced (117.9 vs 153.5 s; p < 0.001) readers alike.
Conclusion
Our results demonstrate the susceptibility of radiology readers to automation bias in detecting cerebral aneurysms in TOF-MRA studies when encountering false-positive AI findings. While AI systems for cerebral aneurysm detection can provide benefits, challenges in human–AI interaction need to be mitigated to ensure safe and effective adoption.
Everyone got faster, but inexperienced readers were fooled by false positives.
This is going to be such a problem.
The reality is that using AI to make us faster is so incredibly ripe for these outcomes. Sure, we could use AI to catch mistakes after an initial independent rad interpretation, and we could even set up such a system to use a third party to adjudicate persistent disagreements in a blinded fashion (i.e. a neutral third-party radiologist or maybe a different AI agent picks the winner without knowing who they side with)—but the raw economics absolutely point to us using AI as a resident-style first draft as soon as feasible. It’s going to get messy.
There is an argument that you will have to increasingly be an expert in order to outperform an increasingly competent algorithm. While many current machine mistakes are obvious to experienced radiologists, failures won’t always be comically clear in the future. Assuming we need humans for the long term, training and training quality are critical, and doing so in a way that shields humans from tainting and overreliance on computers will be key.
Yes, pilots use autopilot, but some of those big life-saving stories make the news precisely because pilots also sometimes need to take control.
Some really good follows on the Imaging Wire’s 2025 list of Top 40 Radiology Resources. I’ll happily accept the description of “excellent insights into the vagaries of being a working radiologist.”
If you read my article on using Autohotkey for radiology, you know that I use a click-lock script to simulate holding down the left mouse button. This allows me to power-scroll by using a single keystroke (in my case, backslash) to toggle scrolling on/off instead of needing to hold the mouse in a death grip for hours a day (which is a great way to destroy your wrist):
; toggle holding down the left mouse button
\::
    alt := not alt
    if (alt)
    {
        Click Down
    }
    else
    {
        Click Up
    }
Return
If you also happened to read my post on radiology equipment or the follow-up deeper dive on how I use the Contour Shuttle for radiology, you may also know that I really enjoy autoscrolling with the Shuttle’s outer dial: When I twist the dial, each “hour” on the clockface repeats the mouse scroll wheel input multiple times per second, allowing me to scroll at varying speeds without needing to move at all. It takes some getting used to, but it’s awesome.
Not everyone has the Shuttle or the ability to install software on hospital computers, so I was thinking about how to recreate that without the wheel.
The following script was made—with a surprising amount of back and forth—using ChatGPT (I just kept telling it what errors it was getting and it eventually figured it out). I include it here as a potentially helpful tool but mostly to inspire you to play with making your own things to solve your own needs. The LLMs available for free online now make this sort of thing comically easy compared to even just a couple of years ago.
The way this example works is by combining Alt + any number key (1-9) to scroll up and Ctrl + 1-9 to scroll down. The higher the number you press, the faster you scroll. As in, Alt+1 scrolls slowly and Alt+9 scrolls quickly. The reality is that anyone using some variant of this would almost certainly want to change the hotkeys used on an actual keyboard (perhaps using ZXC and ASD for slow, medium, and fast scrolling respectively instead of the numbers), but it would probably be best used with a small keypad where you could pick a handful of your favorite speeds and assign them to some obscure key combination that you would map to one of those keypad buttons.
Regardless, the point is that with a small amount of work, we can set up an off-hand alternative to jerking the mouse wheel back and forth incessantly. The more joints we spread these repetitive motions to, the better.
Enjoy:
#Persistent
#SingleInstance Force
SetBatchLines, -1
; Define scroll speeds (in milliseconds per scroll)
scrollSpeeds := [1000, 500, 200, 100, 67, 50, 40, 33, 25]
; Variables to track active scrolling
scrollUpActive := false
scrollDownActive := false

; Function to start scrolling up
StartScrollUp(speed) {
    global scrollUpActive
    scrollUpActive := true
    while (scrollUpActive) {
        Send {WheelUp}
        Sleep speed
    }
}

; Function to start scrolling down
StartScrollDown(speed) {
    global scrollDownActive
    scrollDownActive := true
    while (scrollDownActive) {
        Send {WheelDown}
        Sleep speed
    }
}

; Function to stop scrolling
StopScrolling() {
    global scrollUpActive, scrollDownActive
    scrollUpActive := false
    scrollDownActive := false
}
; Manually Define Hotkeys for Alt + 1-9 (Scroll Up)
~Alt & 1::StartScrollUp(scrollSpeeds[1])
~Alt & 2::StartScrollUp(scrollSpeeds[2])
~Alt & 3::StartScrollUp(scrollSpeeds[3])
~Alt & 4::StartScrollUp(scrollSpeeds[4])
~Alt & 5::StartScrollUp(scrollSpeeds[5])
~Alt & 6::StartScrollUp(scrollSpeeds[6])
~Alt & 7::StartScrollUp(scrollSpeeds[7])
~Alt & 8::StartScrollUp(scrollSpeeds[8])
~Alt & 9::StartScrollUp(scrollSpeeds[9])
; Manually Define Hotkeys for Ctrl + 1-9 (Scroll Down)
~Ctrl & 1::StartScrollDown(scrollSpeeds[1])
~Ctrl & 2::StartScrollDown(scrollSpeeds[2])
~Ctrl & 3::StartScrollDown(scrollSpeeds[3])
~Ctrl & 4::StartScrollDown(scrollSpeeds[4])
~Ctrl & 5::StartScrollDown(scrollSpeeds[5])
~Ctrl & 6::StartScrollDown(scrollSpeeds[6])
~Ctrl & 7::StartScrollDown(scrollSpeeds[7])
~Ctrl & 8::StartScrollDown(scrollSpeeds[8])
~Ctrl & 9::StartScrollDown(scrollSpeeds[9])
; Ensure scrolling stops when releasing Alt or Ctrl
~Alt Up::
~Ctrl Up::
StopScrolling()
return
Note that this script as copy/pasted doesn’t play nicely with my scripts in the other post because I personally use the ctrl key in my macros to control Powerscribe, but changing things up is as easy as just changing a letter or two.
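If you did want to try the letter-key remapping mentioned above, here is one hedged sketch of what that might look like, assuming the full script above is already loaded (the Alt+letter combinations are arbitrary examples and could just as easily be pointed at programmable keypad buttons):

; Sketch: letter-based speed presets instead of the number rows (uses the functions defined above)
!z::StartScrollUp(scrollSpeeds[1])    ; Alt+Z = slow scroll up
!x::StartScrollUp(scrollSpeeds[5])    ; Alt+X = medium scroll up
!c::StartScrollUp(scrollSpeeds[9])    ; Alt+C = fast scroll up
!a::StartScrollDown(scrollSpeeds[1])  ; Alt+A = slow scroll down
!s::StartScrollDown(scrollSpeeds[5])  ; Alt+S = medium scroll down
!d::StartScrollDown(scrollSpeeds[9])  ; Alt+D = fast scroll down
; Releasing Alt should still trigger the existing "~Alt Up" hotkey above and stop the scrolling.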
I am not an expert here, and I guarantee there are better ways to achieve this functionality, but stuff like this is a great example of what’s possible for a novice with a little vibe coding enabled by current LLMs.
This month, at the request of the Society of Pediatric Radiology, the ABR announced the addition of pediatric radiology to the “do a fellowship during residency” pathway first pioneered by nuclear medicine several years ago. One surmises this new pathway is not being offered because pediatric radiology is easier or requires less training and expertise than any other type of radiology but rather because we need radiologists with skills in pediatric radiology just as we do in nuclear medicine.
Obviously, there are radiologists in the workforce, especially in academia, practicing nearly 100% nuclear medicine or 100% pediatric radiology, but we need more people with these skills than there are physicians willing to set aside a year of their life after training to acquire them—especially when those skills aren’t always as marketable as something currently in demand like breast imaging or even as reliably employable as body imaging or neuroradiology.
So while these intra-residency pathways are a reasonable measure to ensure the adequate supply of radiologists with desirable skills, they also create an inconsistency problem: there is absolutely no reason why those two fields should be treated differently from any other diagnostic radiology subspecialty other than the supply and demand issues within the broader radiology community (and perhaps especially those actively volunteering for the American Board of Radiology or having the ear of those who do).
My point:
If you can now subspecialize early during residency and sit for the pediatric subspeciality examination, then there is no justifiable reason why you shouldn’t be able to do the same thing for neuroradiology, which is the other diagnostic subspecialty that has a CAQ (Certificate of Added Qualification) exam. (Please leave aside for the moment the reality that these tests are not meaningful assessments and that there are plenty of terrible radiologists who manage to hold various ABR certificates.)
Frankly, this would be even more true for any non-ACGME fellowships like body or MSK, but those fellowships don’t actually have any associated tests that place barriers to qualification. As in, the ABR doles out only certain credentials that let you say things like, “Look at me! I’m a real neuroradiologist!” They don’t do that for, say, breast imaging. The ABR doesn’t have any power over deciding how much time it takes for you to be officially “breast-trained” or “body-trained” or anything else—that’s the market (because there is no such officiality). If we all wanted to agree that 9 months of breast imaging as a senior resident is good enough to be a mammo fellowship equivalent, we could do that. Various imaging societies would certainly have an opinion, but no one can stop us. That’s why some institutions already offer various hybrid combo fellowships. Starting right now, residencies could offer their own “Mammo Certificates” documenting that a trainee has truly obtained specific breast skillsets and interpreted some even higher minimum number of exams, if they so chose. Those certificates would carry whatever weight we as a field choose to ascribe to them. But the ABR subspecialties are in the hands of the ABR, and—I suspect—the ABR sets the tone for the whole field.
Now, perhaps we want to argue that opening up early subspecialization for other fields (e.g., a neuro pathway) would be counterproductive for the presumed purpose of encouraging people to dedicate more time during residency to pediatric imaging or nuclear medicine. That sort of early focus would instead just allow people in other subfields to forgo an extra year of fellowship rather than channeling them toward those two subspecialties (facilitating shorter training generally is presumably not the ABR’s goal, though ironically, with the current radiologist shortage, many have advocated for just this type of streamlining).
I would argue that this is not an intellectually tenable position for the American Board of Radiology to take, in the sense that the ABR is not a central-planning puppeteer tweaking the strings to direct radiologists to where they are most needed. The ABR’s stated mission is “to certify that our diplomates demonstrate the requisite knowledge, skill, and understanding of their disciplines to the benefit of patients.” If a trainee can now sit for an ABR certification thanks to a given number of months of subspecialty exposure during residency, then it’s hard to understand how that should be limited to the current two subspecialties, or how those limits are explained by the ABR’s stated mission. The ABR is not the steward of the job market, and such certification changes probably shouldn’t depend on specific external requests from specific stakeholders. Why should the ABR wait for a request from the ASNR? None of these societies speak for radiology any more than the ABR itself does.
Now, to be clear, I’m not arguing here that fellowships aren’t important, or that most mini-fellowships are as demanding and educational as most regular fellowships, or making any claims about actual real-world implications. Unfortunately, there is no canonical “fellowship” to compare to or any actual criteria we use to determine if training is adequate, let alone good. In medicine, we have long just used training time and occasionally training volume combined with a multiple choice test or two to pretend that someone has real-world skills. It’s proxy turtles all the way down.
Residency and fellowship training composition and quality are highly variable, but the various argument permutations that immediately popped into your mind are actually irrelevant. You are absolutely free to think that these pathways shouldn’t exist, and you are equally free to believe that your subspecialty really does require a magical year after graduation.
These pathways already exist; I’m just here to point out the hypocrisy.
Once you say someone can specialize early mostly by completing their senior electives in a single field and then have that qualify as fellowship-equivalent subspecialty training, then logically that should be true regardless of diagnostic subspecialty choice.
My first Backtable episode about the rad shortage, the job market, and PE in radiology was back in 2023.
I’m back on Backtable this week with a wide-ranging conversation about the job market, teleradiology, updates in the world of radiology private equity, etc etc. Always fun to chat with Ally and Mike, they’re awesome. Though, for the record, while I appreciate their kind introduction, I do not condone and categorically reject any overly charitable label that contains or alludes to the phrase “thought leader.”
Some articles for the show notes that are relevant to our discussion:
//
During the show, Ally asked about groups getting paid more by hospitals via stipends (i.e. call-pay/service fees/whatever-you-want-to-call-it) versus a guaranteed per-RVU rate (the latter is often direct pay per study with the hospital doing their own billing but can also be a bump after billing to an agreed-on rate to account for unpaid care/payor mix/shortfall from market rates).
I suggested that for many smaller groups approaching these conversations with hospitals for the first time, a stipend is probably easier. It’s predictable, there are easy precedents the hospital understands (e.g. other call pay), and it doesn’t usually require seismic contract or billing changes.
The reality, I think, is of course a bit more nuanced. It depends on whether the pay increase is added onto an existing contract and long-standing relationship or is part of a new contract negotiation, as well as the size of the group and the size/volumes of the hospital.
To reiterate, a call stipend or radiology service fee may be much more palatable to some hospitals when added to a preexisting contract as it doesn’t require changing anything else and just falls in line with the preexisting idea of call pay. For a group, it also has the benefit of providing a floor such that even if volumes aren’t high, the group still gets paid for being willing to cover after hours.
However, a per-RVU rate will likely make much more sense for a new hospital coverage paradigm in our current era of radiology contract musical chairs where a group is guaranteed that each case is paid at a good rate and doesn’t need to concern itself with billing reimbursement, bad debt, and other headaches. You read a case, you get paid a predictable rate. This may be especially good when a group’s contracts are not strong with payors and protects against downward reimbursement pressure. It’s also what a lot of the recent teleradiology contracts have been, which doubly makes sense given they are not local, may not have existing local payor contracts, and are often aggregating multiple hospitals together into one feed and spreading the work around.
Some hospitals may also be happier to pay fractionally for the work they’re actually getting than a separate fee for access (but I suspect they are most happy just spending less). Pay per RVU could still be a problem if there is a bad casemix with large numbers of plain films etc. I haven’t personally heard of many hospitals paying per-case on a modality basis, which is something relatively common in the outpatient world, but that doesn’t mean it isn’t happening.
In some ways, a service fee provides a floor that guarantees a certain level of income despite variable volumes, while a high per-RVU rate guarantees fair reimbursement in the setting of high/growing volumes. They also aren’t mutually exclusive.
The reality is that money is fungible, so what really matters for a group’s bottom line is more the actual pay itself than the exact mechanism. It’s not hard to look at your current RVUs and average reimbursement per hour or shift, add in a proposed stipend, and then do the simple math to figure out the effective pay per RVU. Yes, getting paid more per RVU directly is more straightforward. It scales easily with growing volumes, whereas a stipend may need to be increased if more staffing is needed in the future. Again, a small practice trying to remain competitive and putting one person on call at a time is a different beast than a large conglomerate with a large night team that is staffing based on an aggregate of multiple hospitals.
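To make that concrete with deliberately made-up numbers: a hypothetical $500,000 annual stipend spread over 50,000 after-hours RVUs works out to an extra $10 per RVU on top of whatever the group actually collects ($500,000 ÷ 50,000 RVUs = $10/RVU), which you can then compare directly against a proposed guaranteed per-RVU rate.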
Each one is optimized for a different kind of hospital, a different kind of relationship, and a different kind of future. For groups going to their hospital and negotiating, the real best method is whatever the hospital is willing to do and still gets you the reimbursement you need for recruitment and retention. A credible threat of walking is the best leverage.
The most salient point is that groups can no longer provide services at a loss and still expect to be able to pay competitively in the market.
This is a brief adjunct to my post on using Autohotkey in Radiology (which basically every radiologist should be doing, by the way). I include it here not because I expect many people to run into the same problem I did but rather because it’s a good example of the not-so-challenging troubleshooting that we shouldn’t be scared to do in our quest for a better workflow. I’m a novice and that’s okay! We can still do cool stuff!
In that post, I mentioned an example script I made to streamline launching patient charts in Epic from PACS at home since our automatic integration doesn’t work remotely.
One thing I didn’t describe in that post is an annoying quirk with activating Epic because it runs through Citrix. Since Citrix is weird, and there are presumably multiple servers that can run Epic, the window title of our Epic instance actually changes with each login. Therefore, the usual static name-matching technique we use to activate Powerscribe, Chrome, or other typical apps doesn’t work.
In our system, Epic always has a title like “ecpprd2/prdapp01” or “ecpprd3/prdapp04”—but the numbers always shift around.
For a while, I used a workaround:
WinActivate, ahk_exe WFICA32.EXE
…which is the name of the Epic/Citrix program .exe file running on my PC, and as long as only one Citrix application was open at the time, it worked (I had to make sure to close an MModal application that auto-launched with it, but otherwise it was fine). Recently, my hospital started using some useless AI tool that cannot be closed, which broke my script.
The workaround one of my colleagues figured out is to change the AHK TitleMatchMode of that specific hotkey to recognize “regular expressions” (a “RegEx” is a sequence of characters that specifies a pattern of text to match).
SetTitleMatchMode RegEx
Then we can use WinActivate with a few modifiers to recognize an unchanging portion of the window title. In our example above, where the title always contains ecpprd or prdapp, we can use the following to select the EPIC window:
WinActivate i)^ecpprd
In this example, the “i” modifier allows case-insensitive matching, and the caret (^) anchors the pattern to the beginning of the window title. You can read more about regular expressions in AHK here.
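Putting the two pieces together, here is a minimal sketch of what the complete hotkey might look like (the F9 trigger is an arbitrary, hypothetical choice; swap in whatever stable fragment your own Citrix window titles share):

; Sketch: activate the Epic/Citrix window whose title starts with "ecpprd" (case-insensitive)
F9::
    SetTitleMatchMode, RegEx
    WinActivate, i)^ecpprd
Return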
In reality, if I had just explained my problem to any of the popular LLMs, I’m confident they would have given me the answer. They absolutely excel at this. The rapidly approaching agentic era will allow for some very easy, very powerful scripting in the very near future even if commercial products lag behind.
Most radiology resident evaluations are a one-way trip on the “keep reading” express. Maybe, in harsher climates, “read more,” which is just a coded way of saying “I wish you were better and more knowledgeable,” with the word “reading” used as a stand-in for “learn more useful stuff, please.”
Many attendings are nice but not kind. We don’t want to hurt anyone’s feelings, so we don’t share specific critical feedback other than in cursory, generic, or essentially universal ways.
When more substantive critical/negative feedback is given, it can also be idiosyncratic concerning various pet peeves (i.e. not generalizable or particularly helpful) or a list of mistakes (without direction on how to fix them). Because most of us are cowards, these shortcomings are often a total gut punch of a surprise.
Feedback as a first-year radiology resident is often more a measure of compliance than growth.
But even when helpful, most rotation evaluations feel more like a grade/assessment and less like a pathway forward.
Ideally, you’d get feedback continuously. You don’t want generic ‘good job’ feedback or ‘you suck’ feedback; neither is very helpful except to tell you that things are generally working or generally not working, which isn’t going to guide action except in the broadest sense. In an ideal world, in-person or at least specific one-on-one feedback would simply be great and actionable. But we don’t live in an ideal world, and most feedback you receive will be generically positive or negative in ways that mostly reflect the biases and personal preferences of the person providing it.
(A classic useless example is a female resident hearing that she should “be more confident” from older male attendings)
So, in all likelihood, you will feel that your evaluations fall into one of two camps: The Good Jobber (gee, thanks) and the Critical Curmudgeon (okay, jerkface). Neither is all that helpful. Chances are, it’s going to be up to you to get the feedback you want/need.
There are situations where—despite the unpleasant awkwardness—it is in the learner’s best interest to ask for feedback. When you ask for feedback directly, you have a greater chance of receiving something helpful if you ask for something specific. So don’t just say, “Do you have any feedback for me? How can I get better?” Rather, consider asking about specific ways to improve: How can I improve the conciseness of my reports? How is my organization? Have you noticed any instances where my reporting style may be unclear to clinicians? Is there a certain kind of perceptual mistake that you’ve seen me make multiple times that I should incorporate into my search pattern or my proofreading checklist to do better quality work? A direct question is more likely to get a directly helpful response. Does that sound tool-ish when written out this way? You bet it does. But surgical requests are more likely to generate meaningful responses.
At any given time, you may be working on a specific part of your approach to radiology. You may be working on developing your first-year search pattern. You may be working on the library of if-then pattern extenders that help you address pertinent disease features or whittle down a differential diagnosis. You may be working on your mental checklist so that you do not omit parts of the report. You may be working on trying to hone down and describe findings that matter and leaving out truly extraneous detail. You may be working on making reports that are as short as possible while containing the information that will help decide patient management.
When you ask for feedback or when people give you generic feedback, consider tailoring your request or your follow-up questions to get advice and feedback on the issues that you’re working on actively right now. We simply can’t actively work on every aspect of our jobs every day. That’s not how deliberate practice works. We all get better and more refined in our routines over time organically. Your process, whether it’s optimal or not, will become ingrained through repetition.
But that’s not how experts actually improve to the next level of effectiveness. They do so in a piecemeal fashion. So if you want to work on that process and not just solidify it through the inevitable accrual of time, then you may need some guidance on how to deploy that extra thoughtful work. If you aren’t sure what to work on, then consider asking for the one thing you’re weakest at in a specific context: As in, what sort of finding am I missing? What sort of error am I making in my reports? What is the most irritating thing that you find yourself editing when finalizing my work?
We can be fragile: we can take feedback too personally and miss opportunities to improve.
We can be stubborn: we get used to hearing the same things and start internalizing them, then start ignoring what others say and also miss opportunities to improve.
Patterns are important: Don’t let a single bit of negative feedback crush your self-worth. But, the more often you hear something, the more seriously you should take it. Even when the feedback feels isolated, keep in mind that most feedback you get will fall into the generic nice-but-not-kind good-jobber variety just by dint of attending personality and not your performance.
Never forget that not all feedback is good feedback, and many experts do not understand how they arrived at their expertise. They may not know which practice methods would be most efficient to achieve mastery even for themselves let alone for any individual learner. They are trying to help, but meta-learning is a challenge.
Most people do not really know or understand how they learn. You might know how you like to learn, but how you like to learn isn’t necessarily the same as how you learn the best.
Commentary on your deficiencies is likely spot on, but proposed solutions for how you should fix them are a different story.
In defense of our ego, we often look for inaccuracies that allow us to psychologically reject the entire package. Instead of looking for reasons the person is wrong in order to create a straw man (even if they are wrong in some details), look for the parts of the feedback that are helpful or potentially true. The goal isn’t to be right; it’s to be better.
It’s not just what you’re doing, but how you’re doing it. It’s a difference in perspective. The way a novice and an experienced reader approach an exam is not the same, and the goal of the learning process is to move efficiently along the path from learner to expert.
One of the amazing baked-in capabilities of radiology residency training is that previewing and dictating a case and then reading out gives you 1) your attempt, 2) a fallible but more experienced person’s attempt, and 3) the chance to see the difference.
Obviously, you cannot see directly into the mind of your attending, and even how they verbalize their thought process or describe their search pattern is not necessarily the same as what they’re actually doing. Our subjective awareness of how we think is not perfect. We are in part black boxes even unto ourselves. At the risk of getting too far into metaphysics, we don’t think we think how we think.
Nonetheless, every case you read is the most plentiful opportunity for feedback. It’s not about just missing a certain finding or whether you were right or wrong. It’s about where you are now and seeing the next steps to getting where you want to be.
Feedback is not just what you get at the end of a rotation; it’s the difference between what you did and what—after the fact—you wish you’d done.