Academic Medicine and the Peter Principle

Over four years of medical school, a one-year internship, a four-year radiology residency, a one-year neuroradiology fellowship, and now some time as an attending, one of my consistent takeaways has been how well (and thus how badly) the traditional academic hierarchy conforms to The Peter Principle.

The Peter Principle, formulated by Laurence J. Peter in 1969, postulates that an individual’s promotion within an organizational hierarchy is predicated on their performance in their current role rather than their skills/abilities in their intended role. In other words, people are promoted until they are no longer qualified for the position they currently hold, and “managers rise to the level of their incompetence.”

In academic medicine, this is particularly compounded by the conflation of research prowess and administrative skill. Writing papers and even getting grants doesn’t necessarily correlate with the skills necessary to successfully manage humans in a clinical division or department. I don’t think it would be an overstatement to suggest that they may even be inversely correlated. But this is precisely what happens when research is a fiat currency for meaningful academic advancement.

The business world, and particularly the tech giants of Silicon Valley, have widely promoted (and perhaps oversold) their organizational agility, which in many cases has been at least partially attributed to their relatively flat organizational structure: the more hurdles and mid-level managers any idea has to go through, the less likely it is for anything important to get done. A strict hierarchy promotes stability primarily through inertia but consequently strangles change and holds back individual productivity and creativity. The primary function of managers is to preserve their position within management. As Upton Sinclair wrote in I, Candidate for Governor: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” (Which, incidentally, is a perfect summary of everything that is wrong in healthcare and politics.)

The Three-legged Stool

Academic medicine is sometimes described as a three-legged stool, where the department/institution is balanced on the three pillars of clinical care, research, and education. There is a pervasive myth that academic physicians can do it all: be an outstanding clinician, an excellent teacher, and a prodigious researcher. The reality is that most people don’t have all three skills in sufficient measure, and even those who do are not given the requisite time to perform meaningfully in all three categories.

While polymaths exist, the idea of the physician-scientist is increasingly intractable in modern medicine. The demands of clinical work have grown substantially with ever more advanced medicine, rising productivity/RVU expectations, often overwhelming documentation burdens, and greater trainee oversight. Meanwhile, research has gotten more complex at the same time that the grant money has dried up: more and more of the funding pie goes to fewer and fewer people. And, lastly, education is typically taken for granted as something that should just take care of itself, something we expect “clinician educators” to do without faculty development, dedicated time, or even credit.

It’s very easy to have an unbalanced stool. Departments tend to lean in one direction or another precisely because they are aligned to do so and are staffed accordingly. As Arthur Jones of Procter & Gamble famously remarked, “All organizations are perfectly designed to get the results they get.”

Putting pressure on individuals to do everything—deliver excellent clinical care, teach/mentor students/trainees, and contribute to high-impact research—fails to acknowledge the reality on the ground: doing high-end work in any of these dimensions is hard. Without dedicated time and sufficient support, doing anything successfully for very long is a challenge. Trying to meet impossible expectations (even self-imposed ones) is a big contributor to burnout. At least a veneer of control, self-determination, and respect is a prerequisite–not a luxury–for a successful “knowledge worker”-type career. It would be more reasonable to expect each person to excel at one role, be competent at another, and largely ignore the third.

Hospitals and large academic institutions are not filled by flat teams of equals working on a common mission; they are occupied by layers of committees and bureaucracy. Rising stars often contribute more to their superiors’ careers than their own. Progress, change, and new initiatives are choked by a spinning-wheels grind of proposals, SOPs, committees (and subcommittees), amassing nebulous “stakeholders,” and every other trick in the large-organization toolbox that isn’t bad in and of itself but should never be implemented universally and thoughtlessly. It’s all leadership in the I-attended-a-leadership-conference sense without any true leadership.

Physicians who focus on producing excellent care are derided as “worker bees” while those who believe in education are labeled by the inverse: “doesn’t like research.” And the managers rise to the level of their incompetence and perpetuate the hierarchy.

Meanwhile, the consultants and nonphysician leadership consolidate power outside of the traditional hierarchy. And how can we say they shouldn’t, when we do such a bad job ourselves?

Talking to Strangers & Professional Identity

From this New York Times interview with Malcolm Gladwell about his new book, Talking to Strangers:

“That happens in these divided times — your professional identity becomes your identity,” Mr. Gladwell said.

“On every level,” he added, “I feel like there is this weird disconnect between the way the world is presented to us in the media and the way it really is. The goal is simply to give people an opportunity to reflect on things they otherwise wouldn’t reflect on. What they do next is out of my control.”

When people ask me about AI and radiology or automation in general, I tend to take an Amara’s Law view: We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

But the short-term overhype is our best chance to make thoughtful structural changes that will allow for desirable future outcomes. A lot of mental health and structural economic problems are tied up in Gladwell’s first line:

If your professional identity is your identity, what happens when you or your profession need to change? If the individuals you meet are just proxies for the job they do or the service they provide, then they aren’t people to you.

A Deep Dive into the Tax Returns of the American Board of Radiology

With the class-action antitrust suit filed against the ABR earlier this year, a post looking deeper at the finances that make an appearance in the lawsuit is overdue. You can find the recent filings that I used for this post collected here.

I promise this is a more interesting read than one might think.

Background

The ABR is a 501(c)(6) organization.

Readers may be familiar with the more common 501(c)(3) designation, which is the non-profit status used by religious, charitable, scientific, and educational organizations (and is the type generally required to qualify for loan forgiveness within the Public Service Loan Forgiveness (PSLF) program).

A 501(c)(6) organization is a business league or association organized and operated primarily to “promote the common business interests of its members.” I’m not really sure how the ABR qualifies as that, but it’s a self-reported designation and that’s their purpose as far as the IRS is concerned. (burn!)

Regardless, as a tax-exempt non-profit, the ABR must make public their Form 990 annual returns for the past three years. The most recent returns (2017 tax year, filed in 2018) are also available on several sites including ProPublica and GuideStar, both of which maintain a searchable database of all non-profit tax returns.

But before we go through the returns and try to make sense of the ABR’s finances, a disclaimer: I am a radiologist with a hobbyist understanding of the tax code, not a CPA, tax preparer, or financial anything (let alone a forensic accountant). This is all for entertainment purposes only.

Disclaimer #2: Form 990 is light on details. I emailed the ABR for clarifications about several issues. Unsurprisingly, they ignored me.

Revenue Breakdown

Tax-exempt non-profits can, in fact, have taxable income if the income derives from activities separate from their mission. In 2016, the ABR claimed $45,605 in taxable income on line 7. In Part VIII, this was described as real estate rental income. I don’t know what they’re renting or to whom. In 2017, it was down to around $30k.

Total tax-exempt revenue is mostly from “certification fees.” Over the past five years, total revenue (which includes investment income):

2017: $17,430,259 (up $1,138,815, 6.9%)
2016: $16,291,444 (up $530,424, 3.4%)
2015: $15,761,020 (up $585,430, 3.9%)
2014: $15,175,589 (up $1,635,419, 12.1%)
2013: $13,540,170

For reference, the inflation rates over this period according to the US Labor Department were 0.8% in 2014, 0.7% in 2015, 2.1% in 2016, and 2.1% in 2017. So the ABR’s revenue has genuinely grown in real terms, well above inflation.
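Using only the figures above, a quick back-of-envelope check of how much of that growth survives inflation:

```python
# Nominal vs. real (inflation-adjusted) revenue growth, using the
# total-revenue figures and CPI rates quoted above.
revenue = {2013: 13_540_170, 2014: 15_175_589, 2015: 15_761_020,
           2016: 16_291_444, 2017: 17_430_259}
inflation = {2014: 0.008, 2015: 0.007, 2016: 0.021, 2017: 0.021}

for year in sorted(inflation):
    nominal = revenue[year] / revenue[year - 1] - 1
    real = (1 + nominal) / (1 + inflation[year]) - 1
    print(f"{year}: nominal {nominal:+.1%}, real {real:+.1%}")
```

Every year comes out positive in real terms, so the growth is not an artifact of inflation.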

Though not specified as such in the 990, the substantial year-over-year increase is primarily related to growing MOC enrollment.

In 2017’s Part III, the ABR says that it administered 4,790 total exams and that approximately 27,000 diplomates were enrolled in MOC. As a reminder, MOC costs $340/year, so the revenue from MOC was approximately $9.2 million in 2017. Note that because older radiologists were grandfathered with “Lifetime” certifications whereas all new diplomates immediately enter MOC, this number will grow annually until a steady state is reached, presumably sometime in the next 10-20 years. I’m sure the ABR has a better idea of when the gravy train will hit its coasting speed.
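The ≈$9.2 million figure is simple arithmetic from the 990’s own numbers:

```python
# Back-of-envelope MOC revenue from the figures in the 2017 Form 990:
# ~27,000 enrolled diplomates paying $340/year each.
enrolled = 27_000
annual_fee = 340
moc_revenue = enrolled * annual_fee
print(f"Estimated 2017 MOC revenue: ${moc_revenue:,}")  # ≈ $9.2 million
```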

To wit, the number of MOC-enrolled physicians was 26,140 in 2016, 25,000 in 2015, and 24,000 in 2014. With over a thousand new diplomates automatically enrolled in MOC every year, the ABR can anticipate rising revenue for the foreseeable future.

Also note that MOC revenue scales fantastically, as the incremental cost to service an individual enrollee approaches $0 but each one brings in $340 every year throughout their career.

Subtracting MOC income from the total fee-related revenue of $16,271,311 would leave around $7 million in revenue from exam services ($7.4MM in 2016). As I’ve discussed before, around $3 million of that comes just from residents, who pay around 1% of their pre-tax income directly to the ABR annually without exception.

In the 2018 annual report, ABR president Brent Wagner made this comment on the first page of content:

As a not-for-profit, the ABR collects fees to cover the expenses of administering the programs. Reserves are maintained to cover unexpected capital expenses, but fees are set as closely as possible to approximate administrative expenses.

Based on the numbers you just read, I think you can see where this is going. But let’s see how that holds up.

Expenses

Expenses also continued to rise from $13,758,299 in 2015 to $15,590,929 (a 13% increase) in 2016 to $16,468,080 in 2017 (a 5.6% increase). Payroll-type expenses increased from $6,932,139 to $7,342,360 (a 6% increase) to $8,256,080 (a 12% increase).

Revenue minus expenses yields a “non” profit of $962,179 in 2017, down from $1,329,124 in 2016.
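For the record, the 2017 “non”-profit is straight subtraction from the figures above:

```python
# 2017 operating surplus from the Form 990 totals quoted above.
revenue_2017 = 17_430_259
expenses_2017 = 16_468_080
surplus = revenue_2017 - expenses_2017
print(f"2017 operating surplus: ${surplus:,}")  # $962,179
```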

So despite $17.4MM in revenue, the ABR claims its expenses take out all but a single mil. Let’s look at some of those.

Salaries

We don’t need to name names of every ABR officer and their compensation, but we can at least track the highest paid, who is Dr. Valerie Jackson, the ABR’s executive director during the studied period.

Year | Salary | Non-salary (retirement + benefits) | Total Compensation
2017 | $690,106 | $48,925 | $739,031
2016 | $693,583 | $33,429 | $727,012
2015 | $711,577 | $39,730 | $751,307
2014 | $316,495 | $21,828 | $338,323

2015 was a good year to hold the reins.

These numbers are interesting in how they may or may not correlate with the changes to the ABR board certification process that occurred in 2013 and with the rising profits from MOC endeavors (common among ABMS member boards). They were then, perhaps coincidentally, followed by small token decreases amid increasing physician frustration with MOC across the country, increased scrutiny of member-board financials (like those wonderful reads from Newsweek about the ABIM), and the recent series of class-action lawsuits.

Or that could totally be a coincidence. What any reader can agree on is that being the head of the ABR is certainly a livable income.

Another reason payroll increased? In 2017, the ABR hired a “Director of External Relations” whose base salary is $135,033. Trying to make the ABR look good is apparently a challenging full-time task. To round out the A-Team, they also hired a similarly paid “IT Director” (possibly as a result of Mammogeddon) and a “Managing Director,” to, um, manage and direct things?

“Other” Expenses

Other expenses make up the majority of the ABR’s expenses and totaled $8,248,569. What are “other” expenses, you ask? Look no further than Part IX (“Statement of Functional Expenses”).

These include things like credit card processing fees, office expenses, insurance, etc. I wouldn’t pretend to have any idea about how much the ABR should spend on staplers and toner.

Expenses are hard to parse because they’re grouped into large nebulous categories.

  • In 2016, the ABR spent $2,204,166 for Exam Services. Given that the ABR owns its own testing center, the exams are administered by employees, and the questions are largely written by volunteers, I would personally be interested in having these big numbers broken down some more. In 2017, this dropped down to $466,472.
  • $1,292,317 was for conferences, conventions, and meetings. The ABR does reportedly convalesce in Hawaii twice annually. This is down from $1,534,608 in 2016.
  • Legal services accounted for $45,439 in 2017. Watch for that number to rise substantially in 2019.
  • $1,051,695 was for “Other fees” — who knows? These are often payments made to various independent contractors that don’t fall into the other categories, like investment management fees, IT, etc.
  • Another mil for office expenses. Another mil for occupancy (rent, utilities, real estate taxes, etc).
  • And a big $5.4MM for salaries/wages of the nonexecutive rank and file.

What we do know from Schedule J, which details compensation information, is that the ABR does “reimburse board members for companion’s travel.” That’s probably in the $640,464 figure for travel, which is separate from the $1.3MM above for conferences/meetings. Twice annual all-expense-paid trips for the family to Hawaii do sound nice.

The ABR doesn’t run lean.

War Chest

Rising revenues have nicely padded the ABR’s current assets, which totaled $51,737,127 in 2017, up from $49.5 million in 2016 and $45.7 million in 2015.

The ABR does claim $10.8MM in liabilities, so according to its 990, the net assets total $40.8MM. However, these liabilities include $8,914,139 in “deferred revenue.” Deferred revenue is a mostly-BS accounting term for payments made in advance for services not yet rendered; in this case, it’s a convenient way to make it look like you’re making less money than you really are, which is to say the lower net-asset figure is meaningless in a common-sense interpretation. Based on the figure, it would seem the ABR jams all the MOC fees in there, though it’s not as though they offer refunds.
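To see how much the deferred-revenue line changes the picture, here is the arithmetic (the $10.8MM liability figure is rounded in the 990 summary above, so the totals are approximate):

```python
# How "deferred revenue" shrinks the reported net-asset figure (2017 990).
total_assets = 51_737_127
total_liabilities = 10_800_000   # ~$10.8MM claimed (rounded figure)
deferred_revenue = 8_914_139     # prepaid fees counted as a liability

reported_net = total_assets - total_liabilities
# Treating the prepaid (non-refundable) fees as already "earned":
cash_basis_net = reported_net + deferred_revenue
print(f"Reported net assets:  ${reported_net:,}")
print(f"Adding back deferral: ${cash_basis_net:,}")
```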

In everyday terms, most would argue that all of the ABR’s revenue is “unearned.” Regardless, outside of clever spreadsheets, that cash isn’t really a liability. It’s all sitting in the bank.

So, the ABR was really holding on to a war chest of almost $52 million in 2017.

Even with questionable payroll, staffing, and vacation meeting practices, the ABR still has an annual operating revenue surplus (aka a profit) of a million bucks. What size of “reserves” will finally be sufficient to “cover unexpected capital expenses,” so to speak? Maybe the slush fund was to cover the inevitable lawsuit. Outside of its testing business, the ABR investment portfolio itself gained almost a million in 2017 and almost two million in 2016.

Even if the ABR stopped making a profit on fees (hard to do even with an impressive meeting budget), they would still likely make money every year. The portfolio proceeds would certainly be enough, for example, to drop the resident and fellow fees down to attending levels from their current $300 premium ($640/yr for trainees vs $340/yr for MOC).

The ABR Foundation

The ABR does maintain a separate “Foundation” that is a 501(c)(3) organization. The ABR Foundation, unlike the ABR, is able to receive tax-exempt charitable donations. The nebulous purpose of the ABR Foundation is “to demonstrate, enhance, and continuously improve accountability to the public in the use of medical imaging and radiation therapy.” Like you, dear reader, I have no idea what that means.

Later, the mission of the foundation is described: “The Foundation carries out the scientific, educational and charitable purpose of the mission of the American Board of Radiology.” I have a hard time picturing that too. The final description of the mission: “to demonstrate, enhance, and continuously improve accountability to the public in the use of medical imaging and radiation therapy.” Darn, that still doesn’t help.

In 2017, it only made money from investments on its net assets (now $1.6MM). No one gave them any money, and they awarded no new grants. Why?

Because, since 2015:
“The Foundation is re-evaluating program services offered to determine how to most effectively achieve the mission statement. During this period of re-evaluation, no new contributions are currently being accepted. Current program commitments for sponsorships continue to be serviced.”

In 2014, the ABR awarded two grants:
1. $95,000 to create a national brachytherapy registry and QA program
2. $25,550 to create ethics and professionalism instructional modules

But 2013 was a much more interesting year:
The ABR Foundation somehow managed to receive $202,348. Total expenses were $305,982:
1. $95k again went to the brachytherapy project
2. $77,599 went to “summit meetings/conduct symposiums to optimize a national strategy for safe and appropriate medical imaging”
3. An additional $115,908 was also for “meetings expenses”

So, it’s meetings all the way down.

Either way, the foundation seems mostly defunct now.

American Board of Radiology International

ABR International is a “disregarded entity” that made $178,750 in 2016, with total assets of $539,649. Its stated purpose is to “provide guidance in a radiology certification exam program.” Yes, a program. I have no idea.

 

Conclusion

Whew.

So to summarize, I am not an accountant. If you or someone you love has more information about the ABR’s operations or financial workings, please feel free to contact me. I would love to update this post (or all my posts, for that matter). I feel strongly that there should be more information available to candidates and diplomates, and it would be much better if it came unwhitewashed from the ABR itself rather than from someone like me throwing snarky potshots from the sidelines.

The ABR makes a lot of money from trainees and radiologists who have zero say in its operations and to whom the ABR does not feel accountable.

The ABR’s expenses are hard to parse, but the organization is clearly not super-duper efficient in its use of very generous certification fees.

The war chest was around $52 million in 2017, is almost certainly higher now, and will continue increasing every year for the foreseeable future due to essentially compulsory MOC.

Assuming any of the current lawsuits progress to discovery and aren’t confidentially settled, we can eventually expect some fascinating news in the years to come. In the meantime, those legal fees certainly aren’t going to help their bottom line.

 

The 2019 ABR Core Exam Results, the Board Prep Arms Race, and Where It All Went Wrong

On August 15, the ABR released the 2019 Core Exam results, which included the highest failure rate since the exam’s inception in 2013: 15.9%.

(Side note: due to a “computer error,” the ABR decided to release the aggregate results before sharing individual results with trainees, resulting in entirely unnecessary extra anxiety. This itchy trigger finger release is in stark contrast to the Certifying Exam pass rates, which have never been released.)

 

Year | Percent Passed | Percent Failed | Percent Conditioned | Total Examinees
2016 | 91.1 | 8.5 | 0.4 | 1,150
2017 | 93.5 | 6.3 | 0.2 | 1,173
2018 | 86.2 | 13.0 | 0.8 | 1,189
2019 | 84.0 | 15.9 | 0.1 | 1,191

So what happened?

 

Option 1

One potential explanation is that current residents are less intelligent, less hard-working, or less prepared for the exam despite similar baseline board scores in medical school, similar training at their residency programs, and now very mature and continually improving board preparation materials. This would seem unlikely.

If it really can be chalked up simply to resident “caliber” as reflected in minor variations in Step scores, then I would volunteer that we should be concerned that a minimally related test could be so predictive (i.e., what are we testing here? Radiology knowledge as gained over years of training or just MCQ ability?).

Option 2

Another explanation is that—despite the magical Angoff method used to determine the difficulty/fairness of questions—the ABR simply isn’t very good at figuring out how hard their test is, and we should expect to see large swings in success rates year to year because different exams are simply easier or harder than others. This is feasible but does not speak well to the ABR’s ability to fairly and accurately test residents (i.e., their primary stated purpose). In terms of psychometrics, this would make the Core exam “unreliable.”

The ABR would certainly argue that the exam is criterion-based and that a swing of 10% is within the norms of expected performance. The simple way to address this would be to have the ABR’s psychometric data evaluated by an independent third-party such as the ACR. Transparency is the best disinfectant.

Option 3

The third and most entertaining explanation is that current residents are essentially being sacrificed in petty opposition to Prometheus Lionheart. The test got too easy a couple years back and there needed to be a course correction.

 

The Core Prep Arms Race

With the widespread availability of continually evolving high-yield board prep material, the ABR may feel the need to update the exam in unpredictable ways year to year in order to stay ahead of “the man.”

(I’ve even heard secondhand stories about persons affiliated with the ABR in some capacity making intimations to that effect including admitting to feeling threatened by Lionheart’s materials/snarky approach and expressing a desire to “get him.” I wouldn’t reprint such things because they seem like really stupid things for someone to admit within public earshot, and I certainly cannot vouch for their veracity.)

If you’re happy with how your exam works, and then third parties create study materials that you feel devalue the exam, then your only option is to change (at least parts of) the exam. This may necessitate more unusual questions that do not make appearances in any of the several popular books or question banks. This is also not a good long-term plan.

This scenario was not just predictable but was the inevitable outcome of creating the Core exam to replace the oral boards. If the ABR thought people “cheating” on the oral boards by using recalls was bad, replacing that live performance with an MCQ test–the single most recallable and reproducible exam format ever created–was a true fool’s errand.

A useless high-stakes MCQ test based on a large and unspecified fraction of bullshit results in residents optimizing their learning for exam preparation. I see first-year residents using Crack the Core as a primary text, annotating it like a medical student annotates First Aid for the USMLE Step 1. Look no further than undergraduate medical education to see what happens when you make a challenging test that is critically important and cannot be safely passed without a large amount of dedicated studying: you devalue the actual thing you ostensibly want to promote.

In medical school, that means swathes of students ignoring their actual curricula in favor of self-directed board prep throughout the basic sciences and third-year students who would rather study for shelf exams than see patients. The ABR has said in the past that the Core Exam should require no dedicated studying outside of daily service learning. That is blatantly untrue, and an increasing failure rate only confirms how nonsensical that statement was and continues to be. Instead, the ABR is going to drive more residents into a board prep attitude that will detract from their actual learning. Time is finite; something always has to give.

If I were running a program that had recurrent Core Exam failures, I wouldn’t focus on improving teaching and service-learning. Because on a system-level, those things are not only hard to do well but probably wouldn’t even help. The smart move would be to give struggling residents more time to study. And that is bad for radiology and bad for patients.

The underlying impression is that the ABR’s efforts to make the test feel fresh every year have forced them to abandon some of the classic Aunt Minnies and reasonable questions in favor of an increasing number of bullshit questions in either content or form, thereby driving the increasing failure rates. Even if this is not actually true, those are the optics, and that’s what folks in the community are saying. It’s the ABR’s job to convince people otherwise, but they’ve shown little interest in doing so in the past.

There is no evidence that the examination has gotten more relevant to clinical practice or better at predicting clinical performance, because there has never been (and will never be) any data on the exam’s validity for either purpose.

 

The Impossibility of True Exam Validity

The ABR may employ a person with the official title of “Psychometric Director” with an annual base salary of $132,151, but it’s crucial to realize the difference between psychometrics in the sense of making a test reliable and reproducible (such that the same person will achieve a similar score on different days) and that score being meaningful or valid in demonstrating what you designed the test to do. The latter would mean that passing the Core Exam showed you were actually safe to practice diagnostic radiology and that failing it meant you were unsafe. That isn’t going to happen. It is unlikely to happen with any multiple-choice test because real life is not a closed-book multiple-choice exam, but it’s compounded by the fact that the content choices just aren’t that great (no offense to the unpaid volunteers who do the actual work here). Case in point: there is a completely separate dedicated cardiac imaging section, giving it the same weight as all of MSK or neuroradiology. Give me a break.

The irony here is that one common way to demonstrate supposed validity is to norm results with a comparison group. In this case, to determine question fairness and passing thresholds, you wouldn’t just convene a panel of subject matter experts (self-selected mostly-academic rads) and then ask them to estimate the fraction of minimally competent radiologists you’d expect to get the question right (the Angoff method). You’d norm the test against a cohort of practicing general radiologists.

Unfortunately, this wouldn’t work, because the test includes too much material that a general radiologist would never use. Radiologists in practice would probably be more likely to fail than residents. That’s why MOC is so much easier than initial certification. Unlike the Core exam, the statement that no studying is required for MOC is actually true. Now, why isn’t the Core Exam more like MOC? That’s a question only the ABR can answer.

I occasionally hear the counter-argument that the failure rate should go up because some radiologists are terrible at their jobs. I wouldn’t necessarily argue with that last part, with the caveat that we are all human and there are weak practitioners of all ages. But this sort of callous offhand criticism only makes sense if an increasing failure rate means that the people who pass the exam are better radiologists, the people who fail the exam are worse radiologists, and those who initially fail and then pass demonstrate a measurable increase in their ability to independently practice radiology. It is likely that none of the three statements are true.

Without getting too far into the weeds discussing types of validity (e.g., content, construct, and criterion), a valid Core Exam should have content that aligns closely with the content of practicing radiology, should actually measure radiology practice ability and not just radiology “knowledge,” and should be predictive of job performance. 0 for 3, it would seem.

So, this exam is lame and apparently getting lamer with no hope in sight. And let’s not get started on the shameless exercise in redundant futility that is the Certifying Exam. So where did everything go wrong? Right from the start.

That’s the end of the rant. But let’s end with some thoughts for the future.

What the Core Exam SHOULD Be

To the ABR, feel free to use this obvious solution. It will be relatively expensive to produce, but luckily, you have the funds.

Diagnostic radiology is a specialty of image interpretation. While some content would be reasonable to continue in a single-best-answer multiple-choice format, the bulk of the test should be composed of simulated day-to-day practice. Unlike most medical fields, where it would be impossible to objectively see a resident perform in a standardized assortment of medical situations, the same portability of radiology that makes AIs so easy to train and cases so easy to share would be equally easy to use for resident testing.

Oral boards aren’t coming back. The testing software should be a PACS.

Questions would be cases, and the answers would be impressions. Instead of having a selection of radio buttons to click on, there would be free-text boxes that narrow down to a list of diagnoses as you type (like when you try to order a lab or enter a diagnosis in the EMR); this component would be important to make grading automatable.

The exam could be anchored in everyday practice. One should present cases centered on the common and/or high-stakes pathology that we expect every radiologist to safely and consistently diagnose. We could even have differential questions by having the examinee enter two or three diagnoses for the cases where such things are important considerations (e.g., some cases of diverticulitis vs colon cancer). These real-life PACS-based cases could be tied into second-order questions about management, communication, image quality, and even radiation dose. But it should all center around how radiologists actually view real studies. It could all be a true real-world simulation that is a direct assessment of relevant practice ability and not a proxy for other potentially related measurables. Let’s just have the examinees practice radiology and see how they do.
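To make the type-ahead idea concrete, here is a toy sketch of the kind of free-text narrowing described above (the diagnosis list is an obviously made-up placeholder, not real exam content):

```python
# Toy sketch of a type-ahead answer box: free text narrows to a list of
# candidate diagnoses, which would make automated grading feasible.
DIAGNOSES = [
    "diverticulitis",
    "colon cancer",
    "appendicitis",
    "epidural hematoma",
    "subdural hematoma",
]

def narrow(query: str, choices=DIAGNOSES, limit=5):
    """Return diagnoses matching the typed fragment, prefix matches first."""
    q = query.lower().strip()
    prefix = [d for d in choices if d.startswith(q)]
    substring = [d for d in choices if q in d and not d.startswith(q)]
    return (prefix + substring)[:limit]

print(narrow("div"))       # ['diverticulitis']
print(narrow("hematoma"))  # ['epidural hematoma', 'subdural hematoma']
```

A real implementation would obviously use a curated ontology and fuzzy matching, but the grading logic stays the same: the examinee’s final selection is a structured answer, not free prose.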

The ABR has argued in the past that the Core exam cannot be ported to a commercial center, which is largely the fault of the ABR for producing a terrible test. But at least that argument would finally hold water if the ABR actually deployed a truly unique evaluative experience that could actually demonstrate a trainee’s ability. The current paradigm is silly and outdated, and radiology is uniquely positioned within all of medicine to do better. The exam of the future should not be rooted in the largely failed techniques of the past.

 

Once in IDR/IBR/PAYE/REPAYE, Always in IDR/IBR/PAYE/REPAYE

“I got married” or “My income went up” and “they MADE me change repayment plan because I didn’t qualify anymore.”

No no no. They cannot make you do this. You are never forced to leave a federal repayment plan once you have been accepted for it, ever (unless you are not making your payments or don’t submit your annual income certification).

When in an Income-Driven Repayment (IDR) plan like IBR, PAYE, or REPAYE, payments may change annually—but the plan does not. People are more aggressive in negotiating their cable bill than they are in dealing with student loan servicers! Switching the acronym of your payment plan not only capitalizes your accrued interest but can easily cost thousands or even tens of thousands of dollars.

If you lose your partial financial hardship while enrolled in IBR or PAYE, your interest capitalizes, but you’re not kicked out of the plan, and you are not forced to choose a new plan. Because you “no longer qualify” for the plan, your payments are capped at the 10-year standard repayment amount. “No longer qualify” is deliberately confusing phrasing. Yes, at this point, if you were to freshly apply, you would not qualify and would not be accepted into the plan. But guess what? You’re not applying, you’re just recertifying your income to determine your monthly payment amount. It doesn’t matter if you get married or if you win the lottery. Your plan is your plan until you choose otherwise. You don’t need to “qualify” anymore: once in IBR, always in IBR. Once in PAYE, always in PAYE.
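For the doubters, a simplified sketch of how the PAYE cap works. The 10% discretionary-income rate and the 150%-of-poverty-line definition are the published PAYE parameters, but the incomes, interest rate, and poverty line below are illustrative placeholders (real calculations depend on family size, filing status, and current guidelines); REPAYE, notably, has no such cap:

```python
# Simplified sketch of the PAYE payment cap. All dollar figures are
# illustrative placeholders, not anyone's actual loan terms.
def monthly_10yr_standard(balance, annual_rate=0.06):
    """Standard 10-year amortized monthly payment (the IBR/PAYE cap)."""
    r, n = annual_rate / 12, 120
    return balance * r / (1 - (1 + r) ** -n)

def paye_payment(agi, poverty_line, balance):
    discretionary = max(0, agi - 1.5 * poverty_line)
    uncapped = 0.10 * discretionary / 12
    # Losing the partial financial hardship doesn't eject you from the
    # plan -- it just caps the payment at the 10-year standard amount.
    return min(uncapped, monthly_10yr_standard(balance))

# Resident income: payment tracks 10% of discretionary income.
print(round(paye_payment(agi=60_000, poverty_line=13_000, balance=200_000)))
# Attending income: payment hits the 10-year-standard cap instead.
print(round(paye_payment(agi=400_000, poverty_line=13_000, balance=200_000)))
```

That cap is exactly what a borrower gives up by being talked into switching to REPAYE.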

People are being told during their annual income recertification that they need to switch from IBR and PAYE to REPAYE once they lose a PFH, and that is incorrect. All switching does is unnecessarily subject borrowers to uncapped higher monthly payments. The problem is, once you’ve switched to REPAYE on this bad advice, you can’t switch back (because you don’t qualify, see what they did there?).

You can never tell if this is ignorance or malevolence, but given that this is generally coming from FedLoan in the context of borrowers planning for PSLF, a “mistake” like this that results in borrowers spending more per month and getting less forgiven does look pretty suspicious.

Bottom line: This is just wrong. If you file your forms on time and make your monthly payments, your plan will never change.

Don’t let anyone tell you otherwise.