CEO of Anthropic, Dario Amodei, fighting hard on a podcast (clip here) to set a new record on how quickly you can be wrong about how radiology works and how it’s been affected by AI so far:
There’s this story of, like—I think it was Geoff Hinton—predicting that AI will replace radiologists. And indeed, AI has gotten better than radiologists at, you know, doing scans, right?
But what happens today is there aren’t less radiologists. What the radiologist does is they walk the patient through the scan, and they kind of talk to the patient. So, the most highly technical part of the job has gone away, but somehow there’s still some demand for like the kind of underlying human skill.
The “Indeed” is completely baseless and without a reality correlate. It was indeed Geoffrey Hinton who said the world should stop training radiologists in 2016. In feeling compelled to address this current wrongness, I am reminded of this perfect comic by XKCD.
What I find genuinely surprising here is not the marketing hype or the finessing of reality but the bold, straight-faced use of the past tense for something that simply has not happened. The future is uncertain, but the desire to continue raising money doesn’t change the past.
NVIDIA’s Jensen Huang made similarly wrong comments last November and also received no pushback. I appreciate the motivations for this kind of more-than-hyperbolic talk given the massive investment in AI, but is there an example anywhere on any of these podcast tours or speeches where someone has actually pushed back on a laughable, supposedly factual claim and had a real discussion?
The real world of AI is interesting enough as it is without needing to pretend that radiology has proven the Jevons paradox. Like, stuff is happening. It’s cool! I get it. Every day, someone reports something interesting, like a mathematician sharing last week how Claude solved a complicated math problem he was working on. Even if Dario is directionally right about the future, he’s wrong about where we are and where we’ve been.
(I buried the links in those paragraphs, but I wrote not one but two posts responding to Huang’s comments that I think are worth reading.)
It’s always dangerous to assume malevolence over incompetence/ignorance. That said, Dario Amodei is worth $7 billion on paper, with Anthropic raising money at a valuation of something like $380 billion. Maybe I’m too cynical, but I’m starting to think he, Jensen Huang, and others know it’s not true but feel it’s the storytelling they need. This radiology “example” has become such a common talking point that I find it hard to believe the AI guys don’t know better. I’m not even entirely sure which explanation (untruth vs ignorance) I prefer.
A common response that waves away these sorts of issues is to say that the prediction is/was right, but the timing is/was wrong. This is the excuse Geoffrey Hinton has been giving ever since that infamous 2016 claim.
But when it comes to anything important, there’s a word that summarizes what it means when you are kinda “right” about something broadly but incorrect in all of the details and timing. That word is wrong.
If I predict a stock market crash within the next year and it doesn’t happen, I’m wrong. If it happens four years later, I was still wrong when I said it. And that wrongness can be very unhelpful.
I wouldn’t necessarily argue that Amodei’s predictions about what will happen to work when we achieve a country full of “geniuses in a data center” are wrong. But nothing about those predictions makes a false statement true. It doesn’t change the past, and it does call into question the seriousness of the thought process and the commitment to honest discourse. It also forces you to cynically place those predictions into a market and fundraising context. Because only that helps explain why smart, talented folks who should know better somehow seemingly don’t.
To address Amodei’s vision of what he already thinks radiology is today:
Could we see a world where radiologists do more patient counseling? Sure—though honestly, I doubt that would happen at scale.
Could we see a world where some radiologists really focus more on patient-care aspects? (I’ll generously assume “walking them through the scan” was figurative and not misattributing what a technologist does.) Perhaps a vision of breast imaging after screening profitability is curtailed? Also sure.
Could we see a move, at least for the intermediate term, to a world where procedural work becomes a greater part of the job for a greater fraction of people? Sure—although people wouldn’t be happy, and maybe, as Dario Amodei and others have also suggested, we’ll just have robots doing everything for everybody all the time.
I won’t pretend that those visions of the future are impossible, or that those possibilityscapes are wrong. But I can point out that the credibility of the visioner goes down when they are piggybacked on statements that are not reality-based.
People with vested interests in AI company valuations going to the moon telling you that AI is going to the moon are not an unbiased source of information. The nature of being the CEO of an extremely valuable company is that everything you say is the spear tip of a one-man marketing machine.
What’s less said amidst all the excitement, of course, is the quiet frustration of daily failure—like how my beloved magical automatic impression generator still sometimes hallucinates conclusions from a source text that is only a few hundred words. The future tense, the present tense, and the past tense are distinct for a reason.
Yes, of course, the jagged frontier is way more powerful than what is commercially available, and the best of what’s technically achievable has almost no market penetration. I agree that cool things are cool. What’s commercially available, however, is where the real world basically lives.
So, if anyone reading this plans to interview Mr. Amodei, Mr. Huang, or anyone else pontificating about AI: please ask them why, in 2026, for most radiologists, the “most highly technical part of the job” hasn’t meaningfully changed.
5 Comments
Hi Ben, thanks for this post. Fantastic as always. One thing it makes me realize is that if we as radiologists can see how patently false Dario’s claim is re: AI in radiology, why should we still put stock in what he says about AI in *other* fields? It’s a variant of the Gell-Mann amnesia you’ve talked about before. His claim about radiologists not doing much other than walking the patients through the scanner is so far off base from what we actually do day to day with our current tools — should I maybe be a bit more skeptical of the claims being made in fields like coding, writing, engineering, etc.? I know in those cases, there are actual layoffs and output on the internet to back up the AI hype, but to some extent, I’m not sure the over-hype (in the present day) of radiology AI is the outlier.
For other readers, my post on Gell-Mann amnesia is here.
To directly answer your question, I think the healthiest approach is one of more inquiry and less certainty. Most of these ideas are worth consideration, but not wholesale acceptance.
I think Amara’s Law is helpful here: we tend to *overestimate* the impact of technology in the short term but *underestimate* it in the long term.
One of the issues with these proclamations, in addition to the obvious bias, is that there is a varying degree of gap between what is technically achievable in a narrow sense and what is required to replace a person.
Medicine, for example, deals with uncertainty. There is not always a right answer. Even when there is, there is the added complexity of human factors, preferences, multiple data streams, etc. At the risk of being overly reductive, code either works or it doesn’t, so the feedback loop—and the corpus of data it’s based on—is very different. I can’t speak to how good AI is at writing iron-clad code, but it can make a programmer more efficient, and that programmer can do more work. Summarizing text, writing boilerplate copy, editing, and writing code are all domains where AI makes existing workers extensible. That can mean that fewer people can do the same amount of work. I think this is a real thing. That analogy does not hold equally well for all jobs.
My Tesla can drive me around a lot of town. But it isn’t quite there for parts of my journeys; it’s never on FSD from end to end. Therefore, I’m still stuck in the car driving, and I can’t actively do much other than sit there. I can’t drive more cars just because my car is capable of some degree of self-driving. We need trucks that can 100% drive on their own to replace a trucker.
How many jobs fall more into the former camp versus the *latter* is an open question that I won’t pretend to know the answer to.
We are still in the everyone’s-making-everything-up-all-the-time phase when it comes to predictions. One core problem with all of it is the implicit assumption that things are not going to change in response to changes. Nothing stays the same. The question becomes which jobs and tasks are replaceable, which are augmentable to become more efficient, and which are essentially/practically immune. For example, an hour of therapy with a human therapist takes about an hour. An AI scribe may shave a few minutes of documentation off, but it’s not a seismic force. And the impact of AI therapists on the demand for human therapists is unknown at every timescale.
Despite the fact that Amodei and Huang are wrong in their statements, new abilities are coming, and the current state of the art absolutely obviates the need for a lot of bullshit jobs if companies want to be ruthless about it. Part of that is because a lot of people aren’t doing a very good job at their jobs, to be sure, but where that stops relative to human skill and which tasks truly require a human in the loop are open questions in many fields.
What I do tell my residents is that they should strive to be awesome.
As someone who uses generative AI on a daily basis for interpretive and non-interpretive work, I hope someone out there in the AI universe is listening:
– Yes, we in radiology know you’re good. Except for when you flagged that port as suspicious. We trust you’ll get better.
– Please work on giving us the tools we want and need. Instead of uninformed hyperbole about how AI will obviate our profession, help us help you. There are not enough rads for the avalanche of work we face, driven by an aging populace and the ease of clicking a button to order imaging. AI is helpful but in fact not better than a well-trained radiologist (right now).
– Who owns liability for AI mistakes? Yes, Virginia, AI makes mistakes.
– AI will improve but will continue to make mistakes. People turn out to be frustratingly complex. AI, however well informed, will see images with findings it can’t explain. This is the Mel Korobkin rule: I loved working with the guy because he could nearly always say he’d seen the finding flummoxing me. Until he would laugh and say he’d never seen that before!
– Patients do NOT want an AI doctor. My patients are nervous and intimidated (breast imaging). They want and deserve a conversation with a human. They almost never prefer an automated portal message that leaves them curious, anxious and more confused.
– Thanks to legislation, we data dump into EMRs every minute of every day. Generative AI is pretty good at translating med speak into plainer English. It is entirely unproven at synthesizing complex imaging with medical history, ongoing treatment and patient expectations.
– I want to see us leverage AI to do tasks not easily accomplished by humans: for example, utilizing radiomics to predict risk or treatment response based on image analysis. Crickets?
Instead, we get noise from rich tech dudes about how radiology is over. Le sigh….