Radiology is as popular as ever with medical students and enjoyed a very competitive, completely filled match last year. But I also know that students and residents (because they keep asking) are wondering: given ChatGPT and other seemingly rapid recent advances in AI, is radiology still a viable career choice?
Yes, I think it is still viable.
Let’s open with two quotes.
Radiology & AI: It’s Complicated
Back in 2016, Geoffrey Hinton, a deep learning pioneer and Turing Award winner, famously said: “People should stop training radiologists now–it’s just completely obvious within 5 years deep learning is going to do better than radiologists. It might be 10 years, but we’ve got plenty of radiologists already.”
Here in 2023, we know that Hinton was wrong (and that he didn’t really understand radiology). Radiologists were not replaced in 2021 and aren’t on track to be replaced in 2026. It turns out that medical imaging is a little more complex than a challenging CAPTCHA. And we’re currently quite far from having plenty: there is a worsening worldwide shortage. Forecasting is very difficult, but the nature of silly predictions is that the predictor can always claim the prediction is still “correct” and only the “timing” was off.
The second quote is from Roy Amara in the 1960s, which is commonly known as Amara’s Law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” We could consider Amara’s Law to be a combination of the Hype Cycle (with its inevitable short-term disappointment) and Compounding (long-term geometric growth).
I think Amara had it right, and his “law” helps explain why Hinton was wrong. It’s easy to get swept away by new technology, and easy to mistake early progress for inevitability and extrapolate rapid change indefinitely. But the last-mile problem is real. Developing a suite of very useful narrow radiology tools is one thing; combining all of those tools to replicate the full variety and complexity of interpretive and communication tasks, with no trained human oversight, in order to fully replace radiologists is another. The second half of Amara’s Law also explains why glib dismissal of AI based on its current failings is misguided.
(But as an aside, can we just acknowledge that even just the software integration components of this are no small feat? Anyone who has worked with medical enterprise software is fully aware of just how far behind the whole industry is, with its poorly realized walled gardens, compared with consumer software. Do we really think that AI is somehow going to make these large commercial vendors magically start producing high-quality products that can reliably talk to one another? For example, Nuance, maker of Powerscribe and now owned by Microsoft, is mostly a sales company peddling an expensive “new” upgrade (Powerscribe One) that is widely considered worse (slower, buggier, and less accurate) than the very old Powerscribe 360 it was designed to replace, and that still doesn’t play nicely with other software. Obviously, people are making progress in the industry, but let’s not pretend that a hallucinating chatbot approaches the kind of six sigma reliability required for autonomous healthcare. Even when the underlying technology works, just getting things deployed effectively will be a multi-year process. This is hard stuff, and the real world is not a kind environment. If it were easy, everything wouldn’t suck so much. That aside, AI is obviously happening, and it is going to change our world.)
In his Substack post “AI for medicine is overhyped,” Dan Elton summarized well the most likely situation for those currently practicing and in training:
Automating much of radiology is very different than automating all of radiology. Weird anomalies and unexpected situations abound in medicine. As with driverless cars, a knowledgeable human in the loop will be needed for a long time. It’s hard for me to imagine scenarios under which AI could wholesale replace everything radiologists do in the next 20 years just using today’s deep learning. Of course it is technically possible, but given the amount of work needed to train a system to do one narrow thing at the human level right now, it’s hard to imagine it happening. Foundation models for medical imaging could help, but will be hard to create. Radiologists can identify hundreds of different types of diseases across many image modalities (MRI, CT, chest X-ray, other X-ray, mammography, ultrasound, PET, SPECT) and also have a detailed knowledge of what variations of anatomy are normal vs anomalous. Instead, barring a major AI breakthrough, what is likely to happen is that radiologists will work with an AI copilot that consists of a panel of specialized models that each do one narrow thing. The data from that AI panel will help the radiologist do their job better by catching things that radiologists frequently miss and will also make radiology more quantitative by providing measurements like volumes and diameters of lesions, volume of visceral fat, volume of plaque, etc. Eventually, reading a scan will become faster with AI taking on a lot of the work, freeing up time for today’s overworked radiologists to interact with patients more.
Patient interaction perhaps not so much (outside of breast imaging, or barring a very big change to care delivery), but the thrust here is probably the reality we’ll see. The whole essay is a good read. Any radiologist who stubbornly argues they don’t want to be augmented is off the mark. I don’t have magical calipers when I measure lesions, and there are plenty of other tedious tasks where the value I add is a small part of the time I spend. AI is neither better than radiologists in real life nor useless, and that’s the reality we’ll need to operate under for the near future.
It’s impossible to know how the many, many coming tools will change the job market and reimbursement. We still haven’t even figured out, in general, who should pay for AI tools (or how), or how they will affect medicolegal liability.
Forecasting is hard:
- Will this be a Goldilocks situation allowing the current number of radiologists to handle rising imaging volumes? Maybe. (Seriously, I think the AI doomists too easily discount the possibility that—for an indefinite period—imaging volumes will continue to increase and AI will just help us meet them.)
- Will we still have too few radiologists? If so, for how long? Will there be a window for mid-level encroachment to gain a foothold or will AI come fast enough to keep imaging largely in radiologists’ hands?
- As tools evolve, when will we see a surplus of radiologists that will drive reimbursement down? Or, will changes happen gradually enough for us to tweak the training pipeline?
- For how long will AI’s often comical failures require an extremely well-trained radiologist to catch and counteract them? Or will this eventually open the doors for more specialties to get into diagnostic imaging? How can we effectively combat automation bias?
These are the kinds of predictions that are very, very hard to make. They aren’t even mutually exclusive.
But: Most of them will certainly take years.
So, for those just entering radiology training and wondering more concretely about the field’s prospects: I suspect the job will look different 10 years from now, but probably not too different by the time you start independent practice. It may look very different 20 years from now. But is change always bad?
I’d venture that in the more intermediate term we will see efficiency gains (perhaps those will even alleviate the radiologist shortage before mid-levels are allowed to read too much imaging), but I think it will be a while still before there is a surplus. How long is a while? That’s purposefully vague. I won’t pretend to know how fast things are likely to happen. I don’t think anyone does. Even vague timelines are based on such a flimsy, ever-shifting foundation that they’re barely more than arbitrary. How does one even predict how any part of the economy will adapt to these changes? Despite our dark rooms, radiologists don’t practice in a vacuum insulated from everything else going on in healthcare.
Shorter hours? Better jobs? More clinician and patient contact? Greater oversight over the imaging pipeline? Or just raw devaluation to a rubber-stamping cog in the ever-declining reimbursement machine? It’s not hard to find smart people in both camps.
(It’s also worth pointing out the obvious: we will see changes in a lot of fields, not just imaging. While LLMs like ChatGPT have some amazing abilities, these are easier to use in other industries and more meaningful for non-interpretive tasks (parsing stuff in the EMR, generating summaries, dictation, and text prediction). The big short-term impact we will see in radiology with products based on GPT and similar models is streamlined radiology report generation (competitors for and/or extensions to currently available dictation software) and the ability to cull through an incredible amount of written radiology report data to help make imaging training datasets. It will be much cheaper and easier to build narrower models (i.e. not just fractures, filling defects, and hemorrhage) without relying on big improvements to the relatively stagnant current state of computer vision.)
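To make the report-mining idea concrete: the crudest version of turning free-text radiology reports into training labels is simple keyword matching with a negation check. This is only an illustrative toy sketch (every rule and name here is my own assumption, not any actual labeler or product); real pipelines, and the LLM-based versions described above, are far more robust.

```python
import re

# Toy weak-labeling sketch: map free-text report impressions to
# binary finding labels that could seed an imaging training dataset.
# The finding list and negation rule are illustrative assumptions.
FINDINGS = {
    "fracture": re.compile(r"\bfracture\b", re.IGNORECASE),
    "hemorrhage": re.compile(r"\b(?:hemorrhage|hematoma)\b", re.IGNORECASE),
}

NEGATION_CUES = {"no", "without", "negative"}


def weak_label(report: str) -> dict:
    """Return {finding: True/False} labels for one report's text."""
    labels = {}
    for name, pattern in FINDINGS.items():
        match = pattern.search(report)
        if not match:
            labels[name] = False
            continue
        # Crude negation check: a cue word within the 3 words
        # immediately preceding the finding mention flips it to False.
        window = report[: match.start()].split()[-3:]
        labels[name] = not any(w.lower() in NEGATION_CUES for w in window)
    return labels
```

A labeler like this is noisy by design; the point is that cheap labels over millions of archived reports can bootstrap datasets that would be prohibitively expensive to annotate by hand.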
Most people remain unconvinced that combining a gazillion narrow models with ChatGPT will suddenly leave no role for humans in radiology. For example, here are the results of a recently published survey of 331 non-radiologist clinicians:
The need for diagnostic radiologists in the coming 10 years was expected to increase by 162 clinicians (48.9%), to remain stable by 85 clinicians (25.7%), and to decrease by 47 clinicians (14.2%). Two hundred clinicians (60.4%) expected that artificial intelligence (AI) will not make diagnostic radiologists redundant in the coming 10 years, whereas 54 clinicians (16.3%) thought the opposite.
Ultimately, neither the narrow-AI vision models nor the general-purpose LLMs are artificial general intelligence. They can’t adequately do cross-domain tasks, and they have to be spoonfed to learn the right things. Even when they perform as well as a human at a task (and in real-world practice, they don’t yet), the data so far show that the combination of AI and a human performs better (or, perhaps, that AI may be able to adequately screen out a fraction of normal cases). Performance will undoubtedly improve over the long term, but–despite what Hinton argued–it’s not “obviously” on track to take over by 2026.
Lastly, if we do eventually enter a world where the need for rads is very small, it will likely be amidst broader changes to the workforce and economy. When/if that happens, I believe–entirely without any factual basis–that we will see a pipeline to alternative careers in medicine that will not require a huge burden of time and money. Retraining for those in industries affected by machine learning is going to be a thing, and I don’t think radiology changes in a vacuum.
But, in the meantime, there’s a very good chance that AI will help make radiology a very very good job before it becomes a bad one.
So What is a Young Radiologist To Do?
For starters, ignore most of the news.
With all the hype, AI is currently enjoying a lot of attention, and AI speculators (and grifters) are getting their time in the spotlight just like the Crypto Bros before them. Every time someone like Hinton says we should stop training radiologists, they are hurting patients. The absolute reality is that we need to keep training radiologists and every other kind of doctor until we don’t. We really should not be making granular long-term predictions when it comes to staffing “essential services.” The downsides of being wrong aren’t acceptable.
Radiologists are absolutely critical to healthcare, and the possibility that one day they might not be shouldn’t dissuade you from pursuing a career you are genuinely interested in.
It does, however, make a lot of sense in these tumultuous and uncertain times to be financially conservative: try to get out of debt, live within your means, save for retirement, etc. I don’t think the fact that your career is likely to change substantially over the next 20 years means you should abandon radiology.
I am biased, but I would also argue that the suspected inevitable eventual workforce adjustment is another reason why it’s not a bad idea for trainees leaving academia to pursue being a partner in an independent practice rather than an employee of a company that would be happier if you didn’t need to exist, that would love to use AI to make you practice dangerously, and that will absolutely take any and all extra revenue you generate through that increased efficiency when the labor market allows. (I’m sure some of you are tired of the frequency of private equity-related content here recently; well, me too.) There is probably no job less secure in radiology than an employed teleradiology position for a large national company.
Just don’t take that conservatism too far: you don’t need to work like crazy now to protect yourself from uncertainty. You still need to actually live and hopefully enjoy your life, be present for your family, stay active, and have hobbies that recharge your batteries. Otherwise, what’s the point? You shouldn’t just plan for the future at the expense of today.
Ultimately, we don’t yet know whether machine learning tools will usher in the techno-utopia AI evangelists have dreamed of or instead help us sink further into a pseudo-capitalist oligopolistic hellscape.
The pace of that change–in either direction–is firmly outside of your locus of control. So this is my only strong advice: figure out what your good life would look like, and try to build it.