Ben White


Perceptions of Radiology MOC

10.21.22 // Radiology

In August, the results of a large ACR survey about radiologists’ opinions concerning MOC were released. The summary:

Similar proportions judged the existing program as excellent or very good (36%), or fair or poor (35%), with 27% neutral. MOC–CC was perceived more often as excellent or very good by those who were grandfathered yet still participating in MOC, were in academic practice, were in an urban setting, were older, or had a role with the ABR. In contrast, MOC–CC was more often judged as fair or poor by those who were not grandfathered, were in private practice, were in a rural setting, or were younger.

It’s a pretty sharp divide. Perhaps it is no great surprise that ABR volunteers and grandfathered academics are among those who view the ABR’s offering most favorably. The whole paper is worth a read, and the survey construction itself was very involved.

I’m not personally involved in any of this work, but the story behind why the survey even occurred (which I’m relaying secondhand) is perhaps the most interesting part.

If you recall, there was an ACR Taskforce on Certification in Radiology that was initially authorized in 2019 and concluded in 2020. You can read my highlights and analysis of their work here.

You also might not recall said task force, because their work apparently marks the only time in history that the ACR Board of Chancellors voted against authorizing a task force to submit their findings as a paper to the JACR. What could have been a paper shared with the broader radiology community was instead buried in a lonely random corner of the ACR website.

This is politics at work, of course.

Behind the scenes, the executive committee asked the task force to water down their language and conclusions, remove certain points, and generally “be nice.” The ACR, trying to repair some historically sour relationships with other radiology societies, didn’t want to be mean to the ABR. It probably doesn’t help when inbred leadership positions across multiple societies read like a game of musical chairs. It was apparently after multiple rounds of softening edits that the task force report was eventually buried anyway.

As a consolation, the board did permit a next-step survey to ascertain the true feelings of the radiology community (and not just the task force’s presumably squeaky wheels). The ACR used an outside consultant to help generate a fair survey, and then, at the request of leadership, all “irrelevant” questions concerning the ongoing lawsuit, the handling of COVID-19/testing delays, the kerfuffle over the MOC agreement, etc., were excised.

The survey results paper was initially submitted to JACR in 2021 and was—as you may have surmised—also rejected (though please note that the JACR is editorially independent). Much back and forth ensued—largely in order to limit perceived “bias against the ABR”—and the paper you see was finally published a year later.

In the end, thanks to editorial assistance, the limitations section is longer than the neutered discussion.

Joining and Leaving Private Equity: A Radiologist’s Story

10.19.22 // Radiology

Previously in the PE series, we spoke with someone who joined a practice that had already been purchased (before eventually leaving). In this entry, we’re hearing from someone who joined an independent practice and was an associate in the partnership work-up when the group sold.

Just like last time, I’ve sanitized names and some details. This case study is food for thought, not an indictment of a specific group or corporate entity.

(more…)

Losing the Track is Part of Tracking

09.19.22 // Radiology, Reading

From The Lion Tracker’s Guide To Life by Boyd Varty:

You must train yourself to see what you are looking for.

Perhaps the most concise description of radiology training.

“I don’t know where we are going but I know exactly how to get there,” he says.

Process > outcome.

I think of all the people I have spoken to who have said, “When I know exactly what the next thing is, I will make a move.” I think of all the people whom I have taught to track who froze when they lost the track, wanting to be certain of the right path forward before they would move. Trackers try things. The tracker on a lost track enters a process of rediscovery that is fluid. He relies on a process of elimination, inquiry, confirmation; a process of discovery and feedback. He enters a ritual of focused attention. As paradoxical as it sounds, going down a path and not finding a track is part of finding the track.

Uncertainty is part of life, but a search pattern helps.

On the long list for second place

09.06.22 // Radiology

It was a nice surprise to see over a busy long weekend of call that I was nominated as a semifinalist for Aunt Minnie’s “most effective radiology educator” this year.

Or something like that:

https://twitter.com/PrometheusLion/status/1565118659240001539?s=20&t=PUxRKJnzqPV3qiuqppaTuw


As always, thanks for reading.

You Should Be Correlating Clinically

08.09.22 // Radiology

While I generally like to stay away from absolutely prescriptive advice, I think most radiologists would agree that the specific phrase “correlate clinically” is basically a microaggression against clinicians. It’s a trigger and a common joke, one that automatically lowers your work in the eyes of the reader. If somebody must correlate, then they should be told what they should correlate with: direct inspection, physical exam, CBC, a possible history of X symptom, sign, or disease, etc. Most of the “never say this” and “always say that” saber-rattling in radiology is nonsense, but this is an easy way to make friends.

Going further:

A new radiology resident typically begins training without much meaningful radiology experience but with substantial clinical knowledge. Don’t give it up. Of course, you will likely not stay up-to-date with every specific medical therapy used to treat the diseases you used to manage as an intern, but good radiologists retain a significant fraction of the pathophysiology that underlies the imaging manifestations of the diseases we train to discern and then supplement that foundation with a growing understanding of subspecialized management. That combination informs their approach to creating actionable reports for referring clinicians, reports that contain more of the things they care about and fewer of the things they don’t.

In the world of outpatient radiology, it’s common for patient histories to be lackluster. Frequently the only available information from the ordering provider is the diagnosis code(s) used to justify insurance reimbursement. In many cases, radiologists rely more on a few words provided by the patient directly (filtered through the technologist who performs the imaging study). We don’t always have the context we need to do our best work. It’s as frustrating as it is unavoidable.

In the more inpatient (or academic medical center) world that dominates residency training, it’s common to see at first glance a similar diagnosis code or short “reason for exam” text from the EMR, frequently limited in length and sometimes further limited to specific indications in the name of appropriate use (e.g. “head trauma, mod-severe, peds 0-18y”).

As a young radiologist, it is in your best interest not to rely on so thin a justification as what is readily dropped into the report via a PowerScribe merge field if you have access to richer information. You may know very little radiology, but you remain literate. You will do yourself and your patients a favor by supplementing your nascent diagnostic acumen with a real history obtained from reading actual notes written by actual humans. So often the provided “reason for exam” is willfully incomplete or frankly misleading, like the patient with acute-onset left hemiparesis resulting in a ground-level fall arriving with a history of “head trauma” instead of stroke. Or pretty much everyone with a history of “altered mental status.” So often, the clinical correlation was there all along. It’s part of the learning process that helps make the most of your limited training time.

“You can’t see what you’re not looking for” is a classic adage for a reason. You sometimes have to know the real history—as much as realistically feasible—in order to either make the findings or put them into context.

So, before you ask anyone else to “correlate clinically,” maybe see if you can do it yourself.

Working for Private Equity: A Radiologist’s Experience

07.25.22 // Radiology

This is part three in a series of posts about private equity in radiology. The first was this essay. The second was an interview with former PE analyst and current independent radiologist Dr. Kurt Schoppe.

This third entry is a Q&A with a radiologist who recently left a PE-owned practice: the experience of someone who joined a freshly purchased practice, made “partner,” and then left anyway.

I suspect this radiologist’s experience is very generalizable, but regardless it’s a rare and interesting perspective to hear, especially regarding their equity/stock holdings. The person providing their perspective will remain anonymous, and I’m also not interested in naming and shaming the group. This is intended to share a novel viewpoint and be helpful for trainees (and maybe also be interesting to spectators):

(more…)

Sigh-RADS

07.18.22 // Radiology

This is a work in progress, but I humbly submit a draft proposal for a new multimodality standardized radiology grading schema: Sigh-RADS.

Sigh-RADS 1: Unwarranted & unremarkable

Sigh-RADS 2: Irrelevant incidental finding to be buried deep in the body of the report

Sigh-RADS 3: Incidental finding requiring nonemergent outpatient follow-up (e.g. pancreatic cystic lesion)

Sigh-RADS 4: Off-target but clinically significant, management-changing finding made by sheer chance

Sigh-RADS 5: Even broken clocks are correct twice a day (e.g. PE actually present on a CTA of the pulmonary arteries)

Sigh-RADS 6: Known malignancy staged/restaged STAT from the ED


Update (h/t @eliebalesh):

Sigh-RADS 0: Inappropriate and/or technically non-diagnostic exam for the stated clinical indication
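
For the programmers in the audience, here’s a minimal sketch of the draft schema as a Python enum (the class name and member labels are just my shorthand for the categories above, nothing official):

    from enum import IntEnum

    class SighRADS(IntEnum):
        """Draft Sigh-RADS categories, in order of escalating sighing."""
        NONDIAGNOSTIC = 0  # inappropriate and/or technically non-diagnostic exam
        UNWARRANTED = 1    # unwarranted & unremarkable
        BURIED = 2         # irrelevant incidental finding, buried deep in the report
        FOLLOW_UP = 3      # incidental finding needing nonemergent outpatient follow-up
        OFF_TARGET = 4     # off-target but management-changing finding, by sheer chance
        BROKEN_CLOCK = 5   # the stated indication was actually present
        STAT_RESTAGE = 6   # known malignancy staged/restaged STAT from the ED

    print(SighRADS(5).name)  # BROKEN_CLOCK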

Radiology Call Tips

07.06.22 // Radiology

It’s July, and that means a new generation is starting radiology call. I’m not sure I’ve ever done a listicle or top ten, so here are fifteen.

The Images

  1. Look at the priors. For CTs of the spine, that may be CTs of the chest/abdomen/pelvis, PET scans, or *gasp* even radiographs.
  2. Look at all reformats available to you. On a CT head, for example, that means looking at the midline sagittal (especially the sella, clivus, and cerebellar tonsils) as well as clearing the vertex on the coronals.
  3. Become a master of manipulation. If your PACS can generate multiplanar reformats or MIPs, don’t just rely on what the tech sends as a dedicated series. Your goal is to make the findings, and you should be facile enough with the software to adjust the images to help you make efficient, confident decisions, such as adjusting the axials to deal with spinal curvature or tweaking images to align the anatomy to the usual planes when a patient is tilted in the scanner. MPRs are your tool to fight against confusing cases of volume averaging.

Reporting

  1. Your reports are a reflection of you. I don’t know if your program has standard templates or if those templates have pre-filled verbiage or just blank fields. There is nothing I’ve seen radiologists bicker about more than the “right” way to dictate. What is clear is that you should seriously try to avoid errors, which include dictation/transcription errors as well as leaving in false standard verbiage. We are all fallible, and PowerScribe is a tool. Do whatever it takes to have reports as close to error-free as humanly possible.
  2. Seriously, please proofread that impression. Especially for mistakenly omitted words like “no.”
  3. Templates and macros are powerful, useful, and easily abused tools, just like dot phrases and copy-forward in Epic. I am all for using every tool you have, but you need to use them in a way that comports with your psychology and doesn’t make you cut corners or include inadvertently incorrect information.
  4. Dictate efficiently. If you are saying the same thing over and over again, it should be a macro. If you use PowerScribe, you can highlight that magical text and say “macro that” to create a new macro. (On a related note, “macro” is a shorter trigger word than “PowerScribe.”)
  5. More words ≠ more caring/thoughtful. As the quip often attributed to Mark Twain goes, “I didn’t have time to write a short letter, so I wrote a long one instead.” It’s easier to word vomit than to dictate thoughtfully, but no one wants to read a long (or disorganized) report. Thorough is good, but verbose doesn’t mean thorough. It usually means unfiltered stream of consciousness. The more you write, the less they read.
  6. Never forget why you’re working. The purpose of the radiology report is to create the right frame of mind for the reader. Our job is to translate context/pretest probability (history/indication) and images (findings) into a summary that guides management (impression).
  7. Address the clinical question. This is especially true in the impression. If your template for CTAs was designed for stroke cases and says some variation of “No stenosis,” that impression would be inappropriate for a trauma case looking for vascular injury.
  8. Include a real history. Yes, there are cases where an autogenerated indication from the EMR is appropriate, but there are many more where that history is either insufficient or frankly misleading/untrue. You need to check the EMR on every case for the real history. Then, including a few words of that history is both the right thing to do and very helpful for the attending who is overreading you.

Your Mindset

  1. Radiologists practice Bayesian statistics every day. This is to say: context matters. A subtle questionable finding that would perfectly explain the clinical situation or be more likely given the history should be given more psychological weight in your decision-making process than one that would be completely irrelevant to the presentation. For example, a sorta dense basilar artery is a very different finding in someone acutely locked-in than in somebody with a bad episode of a chronic headache (see the sketch after this list).
  2. Work on your tired moves. We can’t all make Herculean calls at 4 am. When you’re exhausted and depleted, you rely on the skills you’ve overtrained to not require exceptional effort. For radiologists, this boils down to your search pattern. You need not just well-developed search patterns but also sets of knee-jerk associations and mental checklists of findings to confirm/exclude in different scenarios to prevent satisfaction of search (e.g. whenever you see mastoid opacification in a trauma case, you will make sure to look carefully for a temporal bone fracture).
  3. Everyone is a person. The patients, the clinicians, the technologists, and any other faceless person you talk to on the phone. It’s easy to feel distanced and disrespected sitting in your institution’s dungeon. But even you will feel better after a hard night’s work when you’re a good version of yourself and not just someone sighing loudly and picking fights with strangers.
  4. Music modulates the mood.
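
To make that Bayesian point concrete, here is a quick sketch with made-up numbers (the sensitivity, false-positive rate, and priors below are illustrative assumptions, not published figures):

    # Bayes' rule for a binary finding: the same equivocal "dense basilar
    # artery" deserves very different weight under different pretest odds.
    def posterior(prior, sensitivity, false_positive_rate):
        p = sensitivity * prior
        return p / (p + false_positive_rate * (1 - prior))

    # Illustrative assumptions: the finding is 60% sensitive for basilar
    # thrombosis and appears spuriously dense in 10% of patients without it.
    print(posterior(prior=0.5, sensitivity=0.6, false_positive_rate=0.1))    # ~0.86, acutely locked-in
    print(posterior(prior=0.001, sensitivity=0.6, false_positive_rate=0.1))  # ~0.006, chronic headache flare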

Should You Care About the ACR’s DXIT Exam?

06.08.22 // Radiology

Who & What

The Diagnostic Radiology In-Training (DXIT) exam is radiology’s in-service/in-training exam, typically taken by R1, R2, and R3 residents midway through the year. Every(?) specialty has one.

It’s important to note that the ACR (the American College of Radiology) offers the DXIT exam, whereas the ABR (the American Board of Radiology) controls the longer multiple-choice nightmare that is initial board certification in the form of the Core and Certifying exams. These are different organizations that exist for different purposes and just happen to both provide multiple-choice tests for radiologists. The DXIT is one of many things the ACR does. Torturing you with their exams is literally all the ABR exists to do. The ACR is a democratic deliberative body. The ABR is not. They also don’t necessarily see eye to eye on the topic of certification.

Does the DXIT matter?

In the strictest sense, the DXIT simply does not matter. The ACR has no control over your future. The ACGME requires programs to perform a “summative assessment,” and the DXIT fulfills that need.

DXIT performance only matters if your program thinks it matters. There are no inherent consequences, good or bad, for your performance. Whether you get a high five, get put into a remediation plan, or just receive an email with your scores is up to your program.

It’s my anecdotal impression that most program directors don’t care all that much about the DXIT. If you get the 95th percentile, you’ll probably get a pat on the back? I know my program used to give out an award every year to the resident with the highest score. (The only awards available to non-senior residents were for the highest in-service exam score and for research. Perhaps a very narrow set of behaviors to promote and recognize, but that’s just my take.)

So if you are wondering if it “matters” for you, the answer is probably not: Statistically, most people are not going to perform at the extremes, and even programs that do care are likely to care most about poor performance. All things being equal, programs would obviously prefer their residents perform better than average (and our residents certainly do by a large margin).

But if you’re somewhere in the middle of the pack (i.e. ~50th percentile), I suspect it won’t really matter for you in your program unless otherwise specified. Every group has an average. If your program thinks you need to be at the 70th percentile or some other arbitrary floor in order to not get dissed in your semiannual review, then they should tell you.

Interpreting performance

One of the key limitations in interpreting in-training performance is this: did you study? Presumably, one key factor influencing how much you study is how big of a deal your program makes about the in-service exam. Therefore, residents in those programs may study more and outperform their national peers. Is this meaningful in the real world? Probably not? How does one compare the real-world performance of a resident who studies for two weeks and achieves 70th percentile and one who doesn’t study and gets 50th? I have no idea. High performance without dedicated review and low performance despite preparation are probably substantially more meaningful—but the test isn’t designed for any of that. It’s just a summative assessment.

However, if you are below average, and particularly if you fall in the bottom quintile, I would take the results more seriously. I want to be clear that such a score doesn’t mean you are a bad radiologist or a bad resident or that you do bad work. These are knowledge-based assessments that include a broad and idiosyncratic swath of radiology concepts, and they are a poor proxy for real-world competence. But standing between you-as-you-stand-now and you-the-independent-practitioner-of-radiology is a longer and more arduous exam that is the DXIT’s spiritual cousin. A low score is a wake-up call that you need to start or change the process you’re using to learn radiology: up your book learning earlier and/or begin seriously incorporating high-quality questions into your regular review (e.g. if you haven’t been using your program’s RadPrimer subscription, then start).

Why it really matters

So, I would argue the real reason the DXIT matters, if at all, is as a predictor of Core Exam passage and therefore a possible wake-up call that your personal status quo may not be sufficient.

The reality is that the failure rate for the Core Exam is around 10-15% and that everyone will study much, much more for the Core Exam than they do for the in-service exam. The conclusion that people in the bottom quintile of the DXIT are at substantially higher risk of Core Exam failure is therefore highly intuitive. The Core Exam isn’t curved, but it’s not a stretch to imagine that the people performing worst on one radiology multiple-choice exam might also be the group performing worst on another.

It’s obviously not perfect, but it’s probably the best thing we have access to at the moment.

There’s some data to back this up, and there are a few relatively recent papers that cover this:

  • Predictors for Failing the American Board of Radiology Core Examination
  • Predictors of Success on the ABR Core Examination
  • The Relationship Between ACR Diagnostic Radiology In-Training Examination Scores and ABR Core Examination Outcome and Performance: A Multi-Institutional Study

From the first paper (its chart is not reproduced here): despite a relatively thick confidence interval, their conclusion was that “residents with scores below 20th percentile should be concerned.”

From the second paper:

In our study, higher percentile performance on the ACR ITE strongly predicted success on the Core Examination (P = .003322). [This was based on survey results in which the average percentile of the passing group (n = 251) was 58.0 and of the failing group (n = 19) was 33.2.]

Residents should see the ACR ITE as a crucial step in their training and adequately prepare for it.

From the third paper:

DXIT and Core outcome data were available for 446 residents. The Core examination failure rate for the lowest quintile R1, R2, and R3 DXIT scores was 20.3%, 34.2%, and 38.0%, respectively. Core performance improved with higher R3 DXIT quintiles. Only 2 of 229 residents with R3 DXIT score ≥ 50th percentile failed the Core examination, with both failing residents having R2 DXIT scores in the lowest quintile.

Interestingly, in both studies that tracked multiyear DXIT performance, the relationship between DXIT and Core performance was stronger for R1 and R3 residents and statistically insignificant for R2 residents. Why might that be? If anything, one might assume that the R1 performance would be the least useful given that many R1 residents are taking the exam before taking many of the relevant rotations. (For example, I hadn’t had body CT, ultrasound, mammo, or peds, to name a few). Perhaps the R1 results more reflect generic test-taking acumen, the R3 results partially capture early Core prep, and the R2 results are garbage because everyone is too tired from joining the call pool to bother preparing. Or perhaps the data is just messy garbage. Who knows?
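
As a quick sanity check, here is a little back-of-the-envelope arithmetic using only the figures quoted above from the third paper (treat the output as approximate):

    # Back-of-the-envelope math on the third paper's quoted numbers (n = 446).
    # Only 2 of 229 residents at or above the 50th percentile on the R3 DXIT
    # failed the Core Exam.
    high_scorers, high_failures = 229, 2
    pass_rate = (high_scorers - high_failures) / high_scorers
    print(f"Pass rate when R3 DXIT >= 50th percentile: {pass_rate:.1%}")  # 99.1%

    # Quoted lowest-quintile Core failure rates by the year the DXIT was taken:
    for year, fail_rate in {"R1": 0.203, "R2": 0.342, "R3": 0.380}.items():
        print(f"{year} lowest quintile: {fail_rate:.0%} Core failure rate")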

Some possible takeaways

Fact: the statistical trends above are not destiny. They are information, and what you choose to do with that information is up to you.

If you feel like you studied for the DXIT but still performed in the lowest (or perhaps second lowest) quintile, then you need to look at your study methods. There’s a concept in educational literature that testing is learning. If you aren’t regularly testing yourself (e.g. question banks, flashcards), you are missing an opportunity for effective active learning as well as an important tool in the fight against the forgetting curve. Learning to practice radiology at the workstation and learning to master radiology testing are related but ultimately different tasks.

If you destroyed it, particularly as an R3, then your odds of failing the Core Exam probably approach zero (not that you are ready without studying per se (although maybe!?), but rather that with reasonable preparation you should be able to succeed by a clear margin, especially now that conditioning physics is no longer possible).

Resources

Unlike the ABR, the ACR is not so precious with its questions. Every year they release the test and its answers. See ‘previous exam sets’ on their In-Training Exam page.

For the flashcard fans among you, someone collected all those kindly released in-service exam questions from 2002-2021 into a big DXIT Anki deck, which in addition to being an extremely efficient DXIT review is also essentially a free, high-quality, ~2,500-question Core Exam qbank.

The ACR also made a slew of question-containing ebooks for Continuous Professional Improvement (CPI) free during the pandemic, and those are still available for download. The files are all ebooks (.epub), so you’ll need an e-reader app (e.g. Apple Books) to view them.

You should also at some point read the ACR Contrast Manual.

Take home

I wish radiology had better predictive tools/exams that trainees could use during dedicated prep time to estimate their chances of Core Exam passage, like the score estimation available for the USMLE exams. We don’t. For every person who fails the Core Exam, there are probably two who wildly overprepare.

Even though its interpretation is a little muddy, the DXIT is the best we have for now. It can give you a blurry snapshot of where you are versus where you likely want/need to be.

The Truth about Private Equity and Radiology with Dr. Kurt Schoppe

05.09.22 // Medicine, Radiology

Have you ever talked to someone above you on the food chain—usually with the word manager, director, or Vice President somewhere in their job title—and, after they departed, just stared blankly into the distance while slowly shaking your head and thinking, Wow, they really don’t get it. What a useless bag of skin?

Well, that’s the opposite of my friend Dr. Kurt Schoppe, a radiologist on the board of directors at (my friendly local competitor) Radiology Associates of North Texas and a payment-policy guru for the American College of Radiology, where he works on that fun zero-sum game of CMS reimbursement as part of the RUC. He’s whip-smart and has a unique perspective: before pursuing medicine, Dr. Schoppe was a private equity analyst.

Consider this transcribed interview a follow-up to my essay about private equity in medicine published a few months ago.

Here’s our (lightly edited) conversation:

(more…)
