Popular essays about AI in the current media like to cycle among utopianism, massive dystopian automation/disruption, and the “plea for collaboration.” The latter, from “A Better Way to Think About AI” by David Autor and James Manyika in The Atlantic:
In any given application, AI is going to automate or it’s going to collaborate, depending on how we design it and how someone chooses to use it. And the distinction matters because bad automation tools—machines that attempt but fail to fully automate a task—also make bad collaboration tools. They don’t merely fall short of their promise to replace human expertise at higher performance or lower cost, they interfere with human expertise, and sometimes undermine it.
This no man’s land, where tools neither fully automate nor usefully collaborate, explains why articles about AI in medicine don’t reflect reality, and why everyone I talk to assumes that radiologists spend their days awash in useful AI.
Human expertise has a limited shelf life. When machines provide automation, human attention wanders and capabilities decay. This poses no problem if the automation works flawlessly or if its failure (perhaps due to something as mundane as a power outage) doesn’t create a real-time emergency requiring human intervention. But if human experts are the last fail-safe against catastrophic failure of an automated system—as is currently true in aviation—then we need to vigilantly ensure that humans attain and maintain expertise.
The permanent cousin of automation bias will be de-skilling. Pilots who can no longer take the yoke and land a plane are de-skilled. If there is a gap between useful AI and magical super-human AI, then mitigating de-skilling and preventing never-skilling are critical components of any future workflow:
Research on people’s use of AI makes the downsides of this automation mindset ever more apparent. For example, while experts use chatbots as collaboration tools—riffing on ideas, clarifying intuitions—novices often treat them mistakenly as automation tools, oracles that speak from a bottomless well of knowledge. That becomes a problem when an AI chatbot confidently provides information that is misleading, speculative, or simply false. Because current AIs don’t understand what they don’t understand, those lacking the expertise to identify flawed reasoning and outright errors may be led astray.
The seduction of cognitive automation helps explain a worrying pattern: AI tools can boost the productivity of experts but may also actively mislead novices in expertise-heavy fields such as legal services. Novices struggle to spot inaccuracies and lack efficient methods for validating AI outputs. And methodically fact-checking every AI suggestion can negate any time savings.
Beyond the risk of errors, there is some early evidence that overreliance on AI can impede the development of critical thinking, or inhibit learning. Studies suggest a negative correlation between frequent AI use and critical-thinking skills, likely due to increased “cognitive offloading”—letting the AI do the thinking. In high-stakes environments, this tendency toward overreliance is particularly dangerous: Users may accept incorrect AI suggestions, especially if delivered with apparent confidence.
The rise of highly capable assistive AI tools also risks disrupting traditional pathways for expertise development at a time when that expertise is still clearly needed, and will be for the foreseeable future. When AI systems can perform tasks previously assigned to research assistants, surgical residents, and pilots, the opportunities for apprenticeship and learning-by-doing disappear. This threatens the future talent pipeline, as most occupations rely on experiential learning—like those radiology residents discussed above.
Learners will not truly learn if they don’t take on tasks independently.