Ben White
Asking AI about AI in Medicine

12.06.22 // Medicine, Radiology

I had a brief chat with OpenAI’s ChatGPT about noise, radiology, and AI in medicine:

Human: Summarize the book Noise by Daniel Kahneman
AI: Noise by Daniel Kahneman is a book that examines how our minds work in a noisy, uncertain world. It dives into the cognitive biases and heuristics we use to make decisions with limited information, and explains how they can lead us astray. The book covers topics such as perception, intuition, and decision-making, and explores how noise and confusion can shape our lives and our thinking.

Human: How does the book argue that noise impacts radiology?
AI: In Noise, Daniel Kahneman argues that noise can have a major impact on radiology. He explains that radiologists are often subject to cognitive biases like confirmation bias and anchoring bias, which can lead them to make decisions that are not in the best interest of their patients. Additionally, he shows how noise can affect diagnostic accuracy and decision-making, making it harder for radiologists to provide accurate treatments and diagnoses.

Human: How could radiology address those limitations?
AI: Radiologists can address the limitations of noise in radiology by taking steps to reduce cognitive biases and increase accuracy. These steps include utilizing evidence-based decision-making, double-checking results for accuracy, and employing advanced imaging technologies such as computer-aided detection systems. Additionally, radiologists should strive to keep up with advances in medical technology and imaging techniques so that they can identify any noise or errors in their images.

Human: What are the current limitations in utilizing AI in radiology?
AI: Currently, one of the biggest limitations in utilizing AI in radiology is the potential for errors and false positives. While AI systems have the potential to be more accurate than human radiologists, there is still a risk that the AI system will miss diagnoses or misinterpret images. Additionally, AI systems can be costly to implement, and require significant computing power, which may not be feasible in many medical settings.

Human: What would be the easiest type of medicine to outsource to AI?
AI: The easiest type of medicine to outsource to AI would be administrative tasks such as scheduling appointments and completing insurance forms. AI can quickly and accurately complete these types of tasks, saving time and resources for medical professionals.

Even the machines think there’s some low-hanging fruit out there.

ABR soliciting nominations for its Board of Governors

11.12.22 // Radiology

The bylaws of the American Board of Radiology are interesting. I’ve written about them here, and you should read that post. Go ahead, I’ll wait.

Relevant to an email blast the ABR sent out this week is section 4.3:

Section 4.3. Election of Governors. Nominees shall be solicited from the Board of Trustees and Board of Governors, and may be solicited from any appropriate professional organization. Professional organizations shall provide such nominations in writing. An affirmative vote of at least three-fourths (3/4ths) of the entire Board of Governors shall be necessary for the election of any nominee to the Board of Governors.

The Board of Governors is the main decision-making body of the ABR. Historically, as you might surmise, it’s been a pretty insular group. The people already in charge nominate their friends and colleagues, most of whom have already put in the time and effort to earn a place in this upper echelon through years of service to the organization.

As I wrote in that post:

If you didn’t know, there are currently 8 Governors, and they basically run the show. Lincoln’s famous “team of rivals” approach this is not. The current people in power shall nominate their replacements and other organizations may, but the key for any hopeful member is making sure that you fit in with the cool kids, essentially guaranteeing that no one with substantially differing views would ever make it to the upper echelon.

On November 7, the ABR sent an email out to all diplomates (people who are board-certified):

The American Board of Radiology (ABR) is seeking interested candidates to serve on its Board of Governors. The Board of Governors discharges the fiduciary duties of the Board through its oversight of the business and affairs of the ABR.

The ABR recognizes the benefits inherent in a diverse community and seeks individuals with varied experiences, perspectives, and backgrounds. Interested candidates must be board certified by the ABR and willing to participate in Continuing Certification (MOC). Those from nonacademic practice environments are especially encouraged to apply.

And the nomination process for those interested candidates?

Nomination Process: Interested candidates should send a curriculum vitae and a letter of interest before November 15, 2022, to Brent Wagner, MD, ABR Executive Director (bwagner@theabr.org).

How about that.

Now presumably the actual election process is still the same (as is the impressive Conflict of Interest policy), but I believe this is the first time the ABR has publicly asked for all-comers. That doesn't mean they'll actually take any of those candidates seriously or that such candidates would ever comprise more than a single spot on the board, but—nonetheless—I would encourage anyone interested to apply.

Perceptions of Radiology MOC

10.21.22 // Radiology

In August, the results of a large ACR survey about radiologists’ opinions concerning MOC were released. The summary:

Similar proportions judged the existing program as excellent or very good (36%), or fair or poor (35%), with 27% neutral. MOC–CC was perceived more often as excellent or very good by those who were grandfathered yet still participating in MOC, were in academic practice, were in an urban setting, were older, or had a role with the ABR. In contrast, MOC–CC was more often judged as fair or poor by those who were not grandfathered, were in private practice, were in a rural setting, or were younger.

It’s a pretty sharp divide. Perhaps it is no great surprise that ABR volunteers and grandfathered academics are among those who view the ABR’s offering most favorably. The whole paper is worth a read, and the survey construction itself was very involved.

I’m not personally involved in any of this work, but the story behind why the survey even occurred (which I’m relaying secondhand) is perhaps the most interesting part.

If you recall, there was an ACR Taskforce on Certification in Radiology that was initially authorized in 2019 and concluded in 2020. You can read my highlights and analysis of their work here.

You also might not recall said task force, because their work apparently marks the only time in history that the ACR Board of Chancellors voted against authorizing a task force to submit their findings as a paper to the JACR. What could have been a paper shared with the broader radiology community was instead buried in a lonely random corner of the ACR website.

This is politics at work, of course.

Behind the scenes, the executive committee asked the task force to water down their language and conclusions, remove certain points, and generally “be nice.” The ACR, trying to repair some historically sour relationships with other radiology societies, didn’t want to be mean to the ABR. It probably doesn’t help when inbred leadership positions across multiple societies read like a game of musical chairs. It was apparently after multiple rounds of softening edits that the task force report was eventually buried anyway.

As a consolation, the board did permit a next-step survey in order to ascertain the true feelings of the radiology community (and not just the task force's presumably squeaky wheels). The ACR used an outside consultant to help generate a fair survey, and then, at the subsequent request of leadership, all "irrelevant" questions concerning the ongoing lawsuit, the handling of COVID-19/testing delays, the kerfuffle over the MOC agreement, etc., were excised.

The survey results paper was initially submitted to JACR in 2021 and was—as you may have surmised—also rejected (though please note that the JACR is editorially independent). Much back and forth ensued—largely in order to limit perceived “bias against the ABR”—and the paper you see was finally published a year later.

In the end, thanks to editorial assistance, the limitations section is longer than the neutered discussion.

Joining and Leaving Private Equity: A Radiologist’s Story

10.19.22 // Radiology

Previously in the PE series, we spoke with someone who joined a practice that had previously been purchased (before eventually leaving). In this entry, we’re hearing from someone who joined an independent practice and was an associate in the work-up when the group sold.

Just like last time, I’ve sanitized names and some details. This case study is food for thought, not an indictment of a specific group or corporate entity.

Read More →

Losing the Track is Part of Tracking

09.19.22 // Radiology, Reading

From The Lion Tracker’s Guide To Life by Boyd Varty:

You must train yourself to see what you are looking for.

Perhaps the most concise description of radiology training.

“I don’t know where we are going but I know exactly how to get there,” he says.

Process > outcome.

I think of all the people I have spoken to who have said, “When I know exactly what the next thing is, I will make a move.” I think of all the people whom I have taught to track who froze when they lost the track, wanting to be certain of the right path forward before they would move. Trackers try things. The tracker on a lost track enters a process of rediscovery that is fluid. He relies on a process of elimination, inquiry, confirmation; a process of discovery and feedback. He enters a ritual of focused attention. As paradoxical as it sounds, going down a path and not finding a track is part of finding the track.

Uncertainty is part of life, but a search pattern helps.

On the long list for second place

09.06.22 // Radiology

It was a nice surprise to see over my busy call long weekend that I was nominated as a semifinalist for Aunt Minnie’s “most effective radiology educator” this year.

Or something like that:

https://twitter.com/PrometheusLion/status/1565118659240001539?s=20&t=PUxRKJnzqPV3qiuqppaTuw

 

As always, thanks for reading.

You Should Be Correlating Clinically

08.09.22 // Radiology

While I generally like to stay away from absolutely prescriptive advice, I think most radiologists would agree that the specific phrase "correlate clinically" is basically a microaggression against clinicians. It's a common joke, and it automatically lowers your work in the eyes of the reader. If somebody must correlate, then they should be told what to correlate with: direct inspection, physical exam, CBC, a possible history of X symptom, sign, or disease, etc. Most of the "never say this" and "always say that" saber-rattling in radiology is nonsense, but this is an easy way to make friends.

Going further:

A new radiology resident typically begins training without much meaningful radiology experience but with substantial clinical knowledge. Don't give it up. Of course, you will likely not stay up-to-date with every specific medical therapy used to treat the diseases you used to manage as an intern, but good radiologists retain a significant fraction of the pathophysiology that underlies the imaging manifestations of the diseases we train to discern and then supplement that foundation with a growing understanding of subspecialized management. That combination informs their approach to creating actionable reports for referring clinicians, reports that contain more of the things they care about and fewer of the things they don't.

In the world of outpatient radiology, it’s common for patient histories to be lackluster. Frequently the only available information from the ordering provider is the diagnosis code(s) used to justify insurance reimbursement. In many cases, radiologists rely more on a few words provided by the patient directly (filtered through the technologist that performs the imaging study). We don’t always have the context we need to do our best work. It’s as frustrating as it is unavoidable.

In the more inpatient (or academic medical center) world that dominates residency training, it’s common to see at first glance a similar diagnosis code or short “reason for exam” text from the EMR, frequently limited in length and sometimes further limited to specific indications in the name of appropriate use (e.g. “head trauma, mod-severe, peds 0-18y”).

As a young radiologist, it is in your best interest not to rely on so thin a justification as what is readily dropped into the report via a Powerscribe merge field if you have access to richer information. You may know very little radiology, but you remain literate. You will do yourself and your patients a favor by supplementing your nascent diagnostic acumen with a real history obtained from reading actual notes written by actual humans. So often the provided "reason for exam" is willfully incomplete or frankly misleading, like the patient with acute-onset left hemiparesis resulting in a ground-level fall arriving with a history of "head trauma" instead of stroke. Or pretty much everyone with a history of "altered mental status." So often, the clinical correlation was there all along. It's part of the learning process that helps make the most of your limited training time.

“You can’t see what you’re not looking for” is a classic adage for a reason. You sometimes have to know the real history—as much as is realistically feasible—in order to either make the finding or put it into context.

So, before you ask anyone else to “correlate clinically,” maybe see if you can do it yourself.

Working for Private Equity: A Radiologist’s Experience

07.25.22 // Radiology

This is part three in a series of posts about private equity in radiology. The first was this essay. The second was an interview with former PE analyst and current independent radiologist Dr. Kurt Schoppe.

This third entry is a Q&A with a radiologist who recently left a PE-owned practice and their experience as someone who joined a freshly purchased practice, made “partner,” and then left anyway.

I suspect this radiologist’s experience is very generalizable, but regardless it’s a rare and interesting perspective to hear, especially regarding their equity/stock holdings. The person providing their perspective will remain anonymous, and I’m also not interested in naming and shaming the group. This is intended to share a novel viewpoint and be helpful for trainees (and maybe also be interesting to spectators):

Read More →

Sigh-RADS

07.18.22 // Radiology

This is a work in progress, but I humbly submit a draft proposal for a new multimodality standardized radiology grading schema: Sigh-RADS.

Sigh-RADS 1: Unwarranted & unremarkable

Sigh-RADS 2: Irrelevant incidental finding to be buried deep in the body of the report

Sigh-RADS 3: Incidental finding requiring nonemergent outpatient follow-up (e.g. pancreatic cystic lesion)

Sigh-RADS 4: Off-target clinically significant management-changing finding by sheer chance.

Sigh-RADS 5: Even broken clocks are correct twice a day (e.g. PE actually present on a CTA of the pulmonary arteries).

Sigh-RADS 6: Known malignancy staged/restaged STAT from the ED

 

Update (h/t @eliebalesh):

Sigh-RADS 0: Inappropriate and/or technically non-diagnostic exam for the stated clinical indication.

Radiology Call Tips

07.06.22 // Radiology

It’s July, and that means a new generation starting radiology call. I’m not sure I’ve ever done a listicle or top ten, so here are fifteen.

The Images

  1. Look at the priors. For CTs of the spine, that may be CTs of the chest/abdomen/pelvis, PET scans, or *gasp* even radiographs.
  2. Look at all reformats available to you. On a CT head, for example, that means looking at the midline sagittal (especially the sella, clivus, and cerebellar tonsils) as well as clearing the vertex on every coronal.
  3. Become a master of manipulation. If your PACS has the ability to generate multiplanar reformats or MIPs, don’t just rely on what the tech sends as a dedicated series. Your goal is to make the findings, and you should be facile enough with the software to adjust the images to help you make efficient, confident decisions, such as adjusting the axials to deal with spinal curvature or tweaking images to align the anatomy to the usual planes when a patient is tilted in the scanner. MPRs are your tool to fight against confusing cases of volume averaging.
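As an aside for the curious: a maximum intensity projection is nothing magic, just a max() along one axis of the image volume, which is why it's worth being comfortable generating your own. A minimal sketch with NumPy (the volume here is synthetic noise purely for illustration; a real one would be loaded from a DICOM series, e.g. with pydicom):

```python
import numpy as np

# Toy CT-like volume: (slices, rows, cols). Synthesized so the sketch
# is self-contained; a real volume would come from a DICOM series.
rng = np.random.default_rng(0)
volume = rng.normal(40, 15, size=(64, 128, 128))

def mip(vol, axis=0, slab=None, center=None):
    """Maximum intensity projection along `axis`.

    If `slab` is given, project only a slab of that thickness (in slices)
    centered on `center`, mimicking a thin-slab MIP."""
    if slab is not None:
        c = vol.shape[axis] // 2 if center is None else center
        lo = max(c - slab // 2, 0)
        hi = min(lo + slab, vol.shape[axis])
        vol = np.take(vol, range(lo, hi), axis=axis)
    return vol.max(axis=axis)

axial_mip = mip(volume, axis=0, slab=10)   # thin-slab axial MIP
coronal_mip = mip(volume, axis=1)          # full-volume coronal projection
print(axial_mip.shape, coronal_mip.shape)  # (128, 128) (64, 128)
```

Thin-slab projections (the `slab` argument) are the usual compromise: thick enough to connect a vessel or nodule across slices, thin enough that overlapping structures don't bury it.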

Reporting

  1. Your reports are a reflection of you. I don’t know if your program has standard templates or if those templates have pre-filled verbiage or just blank fields. There is nothing I’ve seen radiologists bicker about more than the “right” way to dictate. What is clear is that you should seriously try to avoid errors, which include dictation/transcription errors as well as leaving in false standard verbiage. We are all fallible, and Powerscribe is a tool. Do whatever it takes to have reports as close to error-free as humanly possible.
  2. Seriously, please proofread that impression. Especially for mistakenly dropped words like “no.”
  3. Templates and macros are powerful, useful, and easily abused tools, just like dot phrases and copy-forward in Epic. I am all for using every tool you have, but you need to use them in a way that comports with your psychology and doesn’t make you cut corners or include inadvertently incorrect information.
  4. Dictate efficiently. If you are saying the same thing over and over again, it should be a macro. If you use PowerScribe, you can highlight that magical text and say “macro that” to create a new macro. (On a related note, “macro” is a shorter trigger word than “Powerscribe”.)
  6. More words ≠ more caring/thoughtful. As the line often attributed to Mark Twain (though it traces back to Blaise Pascal) goes, “I didn’t have time to write a short letter, so I wrote a long one instead.” It’s easier to word vomit than to dictate thoughtfully, but no one wants to read a long (or disorganized) report. Thorough is good, but verbose doesn’t mean thorough. It usually means unfiltered stream of consciousness. The more you write, the less they read.
  6. Never forget why you’re working. The purpose of the radiology report is to create the right frame of mind for the reader. Our job is to translate context/pretest probability (history/indication) and images (findings) into a summary that guides management (impression).
  7. Address the clinical question. This is especially true in the impression. If your template for CTAs was designed for stroke cases and says some variation of “No stenosis,” that impression would be inappropriate for a trauma case looking for vascular injury.
  8. Include a real history. Yes, there are cases where an autogenerated indication from the EMR is appropriate, but there are many more where that history is either insufficient or frankly misleading/untrue. You need to check the EMR on every case for the real history. Then, including a few words of that history is both the right thing to do and also very helpful for the attending who is overreading you.

Your Mindset

  1. Radiologists practice Bayesian statistics every day. This is to say: context matters. A subtle questionable finding that would perfectly explain the clinical situation or be more likely given the history should be given more psychological weight in your decision-making process than if it would be completely irrelevant to the presentation. For example, a sorta dense basilar artery is a very different finding in someone acutely locked-in than somebody with a bad episode of a chronic headache.
  2. Work on your tired moves. We can’t all make Herculean calls at 4 am. When you’re exhausted and depleted, you rely on the skills you’ve overtrained to not require exceptional effort. For radiologists, this boils down to your search pattern. You need not just well-developed search patterns but also sets of knee-jerk associations and mental checklists of findings to confirm/exclude in different scenarios to prevent satisfaction of search (e.g., whenever you see mastoid opacification in a trauma case, you will make sure to look carefully for a temporal bone fracture).
  3. Everyone is a person. The patients, the clinicians, the technologists, and any other faceless person you talk to on the phone. It’s easy to feel distanced and disrespected sitting in your institution’s dungeon. But even you will feel better after a hard night’s work when you’re a good version of yourself and not just someone sighing loudly and picking fights with strangers.
  4. Music modulates the mood.
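The Bayesian point above can be made concrete with a quick post-test probability calculation. A sketch of Bayes' theorem applied to an imaging finding (the sensitivity, specificity, and pretest numbers below are invented purely for illustration, not measured values for the dense basilar sign):

```python
def posterior(pretest, sensitivity, specificity):
    """Post-test probability of disease given a positive finding:
    P(D|+) = sens*p / (sens*p + (1 - spec)*(1 - p))."""
    p_positive = sensitivity * pretest + (1 - specificity) * (1 - pretest)
    return sensitivity * pretest / p_positive

# The same equivocal finding read against two different pretest probabilities
# (hypothetical test characteristics for a "sorta dense" basilar artery):
sens, spec = 0.6, 0.9
print(posterior(0.30, sens, spec))   # acutely locked-in patient: ~0.72
print(posterior(0.001, sens, spec))  # chronic headache patient:  ~0.006
```

The identical pixel data yields a two-orders-of-magnitude difference in how much the finding should matter, which is the whole argument for weighting a subtle call by its clinical context.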