The Report of the ACR Task Force on Certification in Radiology

The report from the ACR Task Force on Certification in Radiology is out. This is the American College of Radiology’s formal take on how the American Board of Radiology is doing at exercising its monopoly on radiology certification.

It’s clear, concise, and well-researched, and it contains wonderfully diplomatic language. I admire the restraint (emphasis mine):

Fees have remained unchanged for initial certification since 2016 and MOC since 2015. We acknowledge there is a cost of doing business and reserves are necessary but increased transparency and cost effectiveness are encouraged.

This is in reference to finances like these. Such a gentle request.

Radiologists are also concerned that there is absence of scientific evidence of value.

An understatement certainly written like it was dictated by a radiologist.

We congratulate the ABR for modernizing its testing platform for MOC Part 3. The move to OLA is a responsive change from feedback. However, we are not aware of any theory or research that supports how the annual completion of 52 online multiple-choice questions (MCQ) demonstrates professional competence.

Ooph. Boom goes the dynamite.

MOC critique is tough.

On the one hand, OLA is better than a 10-year exam based on sheer convenience alone. It’s a trivial task, and therefore I know many radiologists don’t want to complain because they’re concerned that any changes would only make MOC more arduous or challenging (a valid concern). Organizations would much rather increase the failure rate to stave off criticism about a useless exam than actually get rid of a profit-generating useless exam (see USMLE Step 2 CS).

On the other hand, what a joke. There is literally no basis for assuming this token MCQ knowledge assessment reflects or predicts anything meaningful about someone’s ability to practice. Even just the face validity approaches zero. (Of course, this argument could also apply to taking 600+ multiple-choice questions all at once for initial certification).

Scores on standardized tests have been shown to correlate better with each other than with professional performance. Practical radiology is difficult to assess by MCQs, requiring a much greater skillset of inquiry and judgment.

This relates to the only consistent board certification research finding: standardized testing scores like Step 1 are the best predictors of Core Exam passage. People who do well on tests do well on tests. And while certainly smart hard-working people are likely to remain smart hard-working people, it remains to be seen if Step 1, in-service, or even Core Exam performance predicts actually being excellent at radiology versus being excellent at a multiple-choice radiology exam.

The obvious concern is that it’s the latter: that the ABR’s tests do not differentiate between competent and incompetent radiologists, and that we are largely selecting medical students based on their ability to play a really boring game as opposed to their ability to grow into radiologists.

Successful certification programs undertake early and independent research of assessment tools, prior to implementation. This is a vital step to ensure the accurate assessment of both learner competence and patient outcomes.

Subtle dig with the use of the word “successful,” but this is the crux:

Assessments are not bad. Good assessments are a critical component of the learning process. Bad assessments are bad because they provide incomplete, confusing, or misleading information, a problem compounded when a preoccupation with doing well on said bad assessment then distracts learners from more meaningful activities (look no further than Step 1-generated boards fever).

Medicine and radiology should not be limited by legacy methodology. Recognizing that learning and assessment are inseparable, the ABR has the opportunity to lead other radiology organizations, integrating emerging techniques such as peer-learning and simulation into residency programs. Assessment techniques are most effective when they create authentic simulations of learners’ actual jobs, although such techniques can be time-consuming and resource-intensive to develop.

Yes.

And I’ll say it again: diagnostic radiology is uniquely suited, within all of medicine, to incorporate simulation. Whether a case was performed in real life years ago or is fresh in the ER, a learner can approach it the same way.

Despite alternative certification boards, the market dominance of the ABMS and its member boards has been supported by a large infrastructure of organizations that influence radiologists’ practices. The ABR should welcome new entrants, perhaps by sponsoring products developed by other organizations to catalyze evolution, innovation and improvement to benefit patients.

Hard to imagine that alternate reality.

Although the ABR meets regularly with leadership from external organizations, such as APDR, the ABR could better connect with its candidates and diplomates by reserving some voting member positions on their boards for various constituencies.

As I discussed in my breakdown of the ABR Bylaws, there is a massive echo chamber effect due to the ABR’s promotion policy, which requires all voting board members to be voted in by the current leadership, usually from within the ranks of its hard-working uncompensated volunteers. This means that operationally, the ABR is completely led, at all levels of its organization, by people who believe in and support the status quo.

Meeting with stakeholders may act as a thermometer, helping them take the temperature of the room. The recent inclusion of Advisory Committees that give intermittent feedback and the perusal of social media commentary may provide the occasional idea. But all of this information is, by the ABR’s design, fed into a well-worn framework.

The ABR is designed to resist change.

No one has a vote who wasn’t voted to have a vote by those who already vote.

And that’s a problem.

3 Comments

Coolio 12.14.20

I love your last line, confusingly to the point – great work. I enjoyed the post because, of course, it’s accurate. You said:

“This relates to the only consistent board certification research finding: standardized testing scores like Step 1 are the best predictors of Core Exam passage. People who do well on tests do well on tests. And while certainly smart hard-working people are likely to remain smart hard-working people, it remains to be seen if Step 1, in-service, or even Core Exam performance predicts actually being excellent at radiology versus being excellent at a multiple-choice radiology exam.”

I wasn’t a great test taker. It almost kept me out of radiology. Consider that I am an above-average radiologist (now). My knowledge of how systems work and a radiologist’s place in them is elite, in my view, however, because most of modern medicine has become being a peon or algorithm pursuer/doer. I do believe that being male has an impact on this, since our ability to cut through BS and think critically, as opposed to worrying about what others think we should be doing (conformism), is just a biological reality. What I’m getting at is that this is actually more important in fields like medicine, because once you have attained the knowledge (it just takes time) you can really see the forest.

Why did I have poor (compared to other rads applicants) boards scores but do amazingly well on the tests in rad residency? We had great materials/questions and the time to put into them. I wasn’t the greatest rad resident, but I shined on the tests during that time, and it was mostly in figuring out that test taking is dependent on dedicated time and practice questions. Why was I just an OK resident? I didn’t care or worry about the social issues with busting my arse for attendings who didn’t give me the big picture, learning-wise. I knew that since we were forced to train so long, what was the point at that time in my life of being marginally better so I could do work better for others? I know it sounds funny but I think you get it.

I guess what I’m saying is that medical training and education takes way too long and is way too costly for the soon-to-be physician. Unless you are a real sociopath, med schools can’t effectively get rid of you, and for residencies it’s the same thing. You can’t tell me that a guy or gal who goes through a 4-year residency in radiology and does a fellowship is not already competent. It’s the essence of dumb to say something is for “competency” but then grade exams on a curve. Huh? I thought the idea was for a competent physician to do well with a patient, not to compare his knowledge of zebras to that of others in his cohort. Alas, legacy and simple solutions to complex problems still rule the day, I guess.
