CEO of Anthropic, Dario Amodei, fighting hard on a podcast (clip here) to set a new record on how quickly you can be wrong about how radiology works and how it’s been affected by AI so far:
There’s this story of, like—I think it was Geoff Hinton—predicting that AI will replace radiologists. And indeed, AI has gotten better than radiologists at, you know, doing scans, right?
But what happens today is there aren’t less radiologists. What the radiologist does is they walk the patient through the scan, and they kind of talk to the patient. So, the most highly technical part of the job has gone away, but somehow there’s still some demand for like the kind of underlying human skill.
The “Indeed” is completely baseless and without a reality correlate. It was indeed Geoffrey Hinton who said in 2016 that the world should stop training radiologists. In feeling compelled to address this current wrongness, I am reminded of this perfect XKCD comic.
What I find genuinely surprising here is not the marketing hype or the finessing of reality but the bold, straight-faced use of the past tense for something that simply has not happened. The future is uncertain, but the desire to continue raising money doesn’t change the past.
NVIDIA’s Jensen Huang made similarly wrong comments last November and also received no pushback. I appreciate the motivations for this kind of more-than-hyperbolic talk given the massive investment in AI, but is there an example anywhere on any of these podcast tours or speeches where someone has actually pushed back on a laughable, supposedly factual claim and had a real discussion?
The real world of AI is interesting enough as it is without needing to pretend that radiology has proven Jevons Paradox. Like, stuff is happening. It’s cool! I get it. Every day, someone reports something interesting, like a mathematician sharing last week how Claude solved a complicated math problem he was working on. Even if Dario is directionally right about the future, he’s wrong about where we are and where we’ve been.
(I buried the links in those paragraphs, but I wrote not one but two posts responding to Huang’s comments that I think are worth reading.)
It’s always dangerous to assume malevolence over incompetence/ignorance. That said, Dario Amodei is worth $7 billion on paper, with Anthropic raising money at a valuation of something like $380 billion. Maybe I’m too cynical, but I’m starting to think he, Jensen Huang, and others know it’s not true but feel it’s the storytelling they need. This radiology “example” has become such a common talking point that I’m beginning to suspect the AI guys do know better. I’m not even entirely sure which explanation (untruth vs ignorance) I prefer.
A common response that waves away these sorts of issues is to say that the prediction is/was right, but the timing is/was wrong. This is the excuse Geoffrey Hinton has been giving ever since that infamous 2016 claim.
But when it comes to anything important, there’s a word that summarizes what it means when you are kinda “right” about something broadly but incorrect in all of the details and timing. That word is wrong.
If I predict a stock market crash within the next year and it doesn’t happen, I’m wrong. If it happens four years later, I was still wrong when I said it. And that wrongness can be very unhelpful.
I wouldn’t necessarily argue that Amodei’s predictions about what will happen to work when we achieve a country full of “geniuses in a data center” are wrong. But nothing about those predictions makes a false statement true. It doesn’t change the past, and it does call into question the seriousness of the thought process or the commitment to honest discourse. It also forces you to cynically place those predictions into a market and fundraising context. Because only that helps explain why smart, talented folks who should know better somehow seemingly don’t.
To address Amodei’s vision of what he already thinks radiology is today:
Could we see a world where radiologists do more patient counseling? Sure—though honestly, I doubt that would happen at scale.
Could we see a world where some radiologists really focus more on patient-care aspects? (I’ll generously assume “walking them through the scan” was figurative and not what a technologist does.) Perhaps a vision of breast imaging after screening profitability is curtailed? Also sure.
Could we see a move, at least in the intermediate term, to a world where procedural work becomes a greater part of the job for a greater fraction of people? Sure—although people won’t be happy, and maybe, as Dario Amodei and others have also suggested, we’ll just have robots doing everything for everybody all the time.
I won’t pretend that those visions of the future are impossible, or that those possibilityscapes are wrong. But I can point out that the credibility of the visioner goes down when those visions are piggybacked on statements that are not reality-based.
People with vested interests in AI company valuations going to the moon telling you that AI is going to the moon are not an unbiased source of information. The nature of being the CEO of an extremely valuable company is that everything you say is the spear tip of a one-man marketing machine.
What’s less said amidst all the excitement, of course, is the quiet frustration of daily failure—like how my beloved magical automatic impression generator still sometimes hallucinates conclusions from a source text that is only a few hundred words. The future tense, the present tense, and the past tense are distinct for a reason.
Yes, of course, the jagged frontier is way more powerful than what is commercially available, and the best of what’s technically achievable has almost no market penetration. I agree that cool things are cool. What’s commercially available, however, is where the real world basically lives.
So, if anyone reading this plans to interview Mr. Amodei, Mr. Huang, or anyone else pontificating about AI, here is a fact worth pushing back with: in 2026, for radiologists, the “most highly technical part of the job” hasn’t meaningfully changed.