While I generally like to stay away from absolutely prescriptive advice, I think most radiologists would agree that the specific phrase “correlate clinically” is basically a microaggression against clinicians. It’s the butt of a tired, common joke, and it automatically lowers your work in the eyes of the reader. If somebody must correlate, then they should be told what to correlate with: direct inspection, physical exam, CBC, a possible history of X symptom, sign, or disease, etc. Most of the “never say this” and “always say that” saber-rattling in radiology is nonsense, but this is an easy way to make friends.
Going further:
A new radiology resident typically begins training without much meaningful radiology experience but with substantial clinical knowledge. Don’t give it up. Of course, you will likely not stay up to date with every specific medical therapy used to treat the diseases you used to manage as an intern, but good radiologists retain a significant fraction of the pathophysiology that underlies the imaging manifestations of the diseases we train to discern and then supplement that foundation with a growing understanding of subspecialized management. That combination informs their approach to creating actionable reports for referring clinicians: reports that contain more of the things they care about and fewer of the things they don’t.
In the world of outpatient radiology, it’s common for patient histories to be lackluster. Frequently the only available information from the ordering provider is the diagnosis code(s) used to justify insurance reimbursement. In many cases, radiologists rely more on a few words provided directly by the patient (filtered through the technologist who performs the imaging study). We don’t always have the context we need to do our best work. It’s as frustrating as it is unavoidable.
In the more inpatient (or academic medical center) world that dominates residency training, you’ll commonly see at first glance a similar diagnosis code or short “reason for exam” text from the EMR, frequently limited in length and sometimes further restricted to specific indications in the name of appropriate use (e.g., “head trauma, mod-severe, peds 0-18y”).
As a young radiologist, it is in your best interest not to rely on so thin a justification as what is readily dropped into the report via a Powerscribe merge field if you have access to richer information. You may know very little radiology, but you remain literate. You will do yourself and your patients a favor by supplementing your nascent diagnostic acumen with a real history obtained from reading actual notes written by actual humans. So often the provided “reason for exam” is willfully incomplete or even deliberately misleading, like the patient with acute-onset left hemiparesis resulting in a ground-level fall who arrives with a history of “head trauma” instead of stroke. Or pretty much everyone with a history of “altered mental status.” So often, the clinical correlation was there all along. Seeking it out is part of the learning process that helps make the most of your limited training time.
“You can’t see what you’re not looking for” is a classic adage for a reason. You sometimes have to know the real history–as much as is realistically feasible–in order to make the finding or to put it into context.
So, before you ask anyone else to “correlate clinically,” maybe see if you can do it yourself.
Comments
I am a resident in a program that contracts with hundreds of different imaging sites. If a study is done at our hospital, we can look the information up for ourselves or make a call when we can to get more of the full story, because most of the time the reason for exam is not helpful, such as “acute non-localized abdominal pain” when they could say something like “history of pancreatitis, acute midline abdominal pain.” My point is, many times we cannot correlate because we don’t have access to the information, and the reasons given are not helpful.
We can call the imaging sites and go through the whole process of waiting on the phone for several minutes to talk to the provider, etc., but it takes a lot of time when we have a lot of studies to get done, STATs popping up throughout the day, surgery residents/attendings coming to radiology throughout the day for us to look at studies and talk to them about findings, and so on. Should we be spending all this extra time researching each patient because we weren’t given enough information in the reason for exam?
For example, on a noncontrast CT abdomen/pelvis study, if the prostate is enlarged, I will note it in the impression and say “enlarged prostate; recommend correlating with PSA value if not already performed.” I would not call the ordering provider to ask about his prostate and PSA values, because that may take a while when we have our list of studies to go through among the other tasks outlined above, not to mention procedures throughout the day. I will say what I think they should correlate with, and it’s not a microaggression; it is radiology trying to be as helpful as we can with the limited information we were given.
My point in this brief piece is not that everyone should make a Herculean effort to obtain borderline useful clinical information, but rather:
1) We shouldn’t be needlessly lazy in the settings where such information is easily accessible (like reading the two-sentence ER provider note in EPIC for an ER case when it’s available).
2) The exact phrase “correlate clinically” is weak and can almost always be replaced by something more targeted. That specific phrase is the microaggression. That specific phrase is the one that people use in their lame radiology jokes.
Directed correlation is a valid approach, as I suggest in the first paragraph. In many cases, it’s the only approach. If I see some asymmetric fullness of the lingual tonsils in a smoker, (correlation with) direct inspection is simply the next step. If you can use a different word than correlate, sure, all the better, but I’m certainly not arguing that correlation is a never-say word.
“Correlate with PSA,” as in your example, is a completely different statement than if you simply said to correlate “clinically.” Correlate with a lab value or an exam finding or the-history-you-didn’t-supply is trying to help the clinician and provide context for interpretation. Saying “correlate clinically” adds literally no value: it’s basically you just telling them to do their job.
This is a two-way street. It is not all on radiologists.
Over half the histories provided to me for plain-film extremity studies are simply “pain.” To me, that is insulting and triggering.
If I started getting histories like “pain over 5th metatarsal after trauma” rather than simply “pain,” I might not have to put “please correlate with site of pain” in my reports when there is a subtle finding.
You want radiologists to do our jobs. We want clinicians to do theirs. When (and if) I stop getting one-word histories such as “pain” for extremities and “shortness of breath” for CXRs, etc., at that point I will stop saying “clinical correlation” for the findings I describe. Until that point, I suggest you stop pointing fingers. A short, vague history lacks value far more than me asking someone to clinically correlate a subtle finding when I don’t have information that is presumably available to the ordering physician. Maybe I should write an article about how short, nonsensical, not-always-applicable histories are triggering. A wise radiologist teacher once told me: garbage in = garbage out.
I am not suggesting it’s all on radiologists.
The thrust of this piece is two-fold:
1) The *specific* phrase ‘correlate clinically’ sucks. Correlating localized pain with a specific finding is not the same.
2) Residents can learn good habits–and we can generally do a better job for patients–by reviewing easily accessible information especially when such info will change our interpretation or management. This does not mean that we are responsible for all the limitations of bad or misleading orders; they drive me crazy too.
I’m sorry you feel triggered. I could easily have written a companion post to clinicians about how crappy their orders are and how it hurts patient care, but that is not the audience or purpose of this specific short piece.
I think the point of the article was that as radiologists we are responsible for the patient whether the ordering physician gives adequate history or not. And one way to improve diagnostic quality/accuracy (and best serve the patient and the referring provider) is to improve our reports by giving clinically relevant recommendations.
This is ill-advised and pushes more of the work burden onto the radiologist. How much history should a radiologist gather before reading a chest radiograph? You can easily go down a rabbit hole and spend 15-20 minutes on something that reimburses $5. Better hourly pay at McDonald’s – they start at $20/hour now.
To all you residents and fellows reading this: this piece offers extremely bad advice. Don’t get caught up in doing other people’s work. The reason the provided clinical history became shorter and shorter over time is precisely because we enabled bad behavior by gathering more and more history ourselves. If that continues, it will slowly become standard of care and will make practicing radiology extremely cumbersome. Don’t enable bad behavior.
To each their own, but that straw-man argument (e.g. 20 min for CXR?) is simply not what I’m suggesting if you read the piece.
This is a short article, and I didn’t give lots of examples of what I would consider reasonable vs. unreasonable chart review. For what it’s worth, I’m suggesting, for example, that reading the two-sentence note from the ED triage nurse or doctor in EPIC will help radiologists provide better care. I’m suggesting that, for example, reading the op note in a complicated post-op case may actually make your life easier and help you make a better-tailored report. And I would also suggest that, when confronting indeterminate findings, more history/context (or reviewing prior scans) can be especially useful. It doesn’t necessarily even take more time than the perseverating and hedging that often occurs instead.
If you want to argue that no chart-diving is ever appropriate because you want everything you could possibly want to know within the “reason for exam” field, that is an argument you are entitled to make. I’m not really sure that’s a tenable position, but perhaps as an anonymous internet commenter, that isn’t really what you mean. Regardless, in the long run, hopefully the AI-driven projects working on this very issue will make it a moot point.
I subscribe to this opinion! I see there are some commenters who disagree – and in a very fast-paced practice, I don’t think my Chart Diving Hobby would survive.
Currently, my main gig is as a military rad with full EHR access (although it’s on a separate computer from my workstation, which is a hurdle). If I’ve got a case that just doesn’t make sense or needs more history to interpret meaningfully, I’ll dig into old notes, old labs, and scanned documents from civilian referrals. I actually take pride in the fact that sometimes I end up with a more complete picture of the patient’s medical history than the brand-new PCM who just got assigned and requested the study. It is one of the more satisfying experiences of my job.
For ER patients with bad histories that were obviously entered by the front desk (“rule out SBO, appendicitis, diverticulitis”), I’ll include the actual patient-reported history as well as any pertinent labs that were already completed. Although, I admit that for these ER patients the effort is probably more of a microaggression (“if I can do your job AND mine, can you at least do yours?”)…
Anyway, thanks for putting this out there. I recognize it’s a philosophy that not all will or can buy into, but if it encourages a few residents to keep their clinical curiosity (nosiness) alive, then I’m happy.