Inexperienced radiologists were significantly more likely to follow the suggestions of the purported AI when it incorrectly suggested a higher BI-RADS category than the actual ground truth compared with both moderately (mean degree of bias, 4.0 ± 1.8 vs 2.4 ± 1.5; P = .044; r = 0.46) and very (mean degree of bias, 4.0 ± 1.8 vs 1.2 ± 0.8; P = .009; r = 0.65) experienced readers.
Small but pretty clever study.
“Automation bias” is an insidious combination of anchoring and the authority fallacy, and it demonstrated a huge (though experience-mediated) effect here. We are still very much in the early days (most radiologists remain quite skeptical of the current powers of “AI” tools).
As machine learning tools grow in power and complexity, they will undoubtedly become a larger part of the radiology workflow. But rather than enabling inexperienced practitioners to function without oversight (e.g., an untrained non-radiologist working independently with AI and bypassing radiologists entirely), these tools will demand more robust skills from us. Raising the floor to miss fewer fractures and PEs is the easy part; it takes knowledge and experience to countermand the computer you increasingly rely on.
This isn’t going to be easy.