Ben White


Shallow versus Deep. Prolific versus Profound.

06.03.21 // Medicine, Reading

From Rest: Why You Get More Done When You Work Less:

We see work and rest as binaries. Even more problematic, we think of rest as simply the absence of work, not as something that stands on its own or has its own qualities. Rest is merely a negative space in a life defined by toil and ambition and accomplishment. When we define ourselves by our work, by our dedication and effectiveness and willingness to go the extra mile, then it’s easy to see rest as the negation of all those things. If your work is your self, when you cease to work, you cease to exist.

What fraction of doctors (and miscellaneous business workaholics) do you think still believe rest is for the weak and that the ability to slog and hustle is not just good but truly enviable?

Second, most scientists assumed that long hours were necessary to produce great work and that “an avalanche of lectures, articles, and books” would loosen some profound insight. This was one reason they willingly accepted a world of faster science: they believed it would make their own science better. But this was a style of work, Ramón y Cajal argued, that led to asking only shallow, easily answered questions rather than hard, fundamental ones. It created the appearance of profundity and feelings of productivity but did not lead to substantial discoveries. Choosing to be prolific, he contended, meant closing off the possibility of doing great work.

Just like many jobs are bullshit jobs, much of our research is bullshit research. If we reward volume, we disincentivize depth.

As Vinay Prasad was quoted in The Atlantic, “Many papers serve no purpose, advance no agenda, may not be correct, make no sense, and are poorly read. But they are required for promotion.”

When we treat workaholics as heroes, we express a belief that labor rather than contemplation is the wellspring of great ideas and that the success of individuals and companies is a measure of their long hours.

And this is one of the tough parts about almost everything written about deep work, rest, the power of no, when to say yes, and everything else in the modern business/productivity/self-improvement genre. The approaches just don’t apply very well out-of-the-box to service workers.

Doctors are primarily service workers. If we work more hours, we see more patients. While there is almost certainly a diminishing return in terms of quality care, there is no diminishing return for billing. A doctor generates more RVUs when they have more clinical hours, and that means more profits for their handlers (until someone burns out and quits).

William Osler advised students that “four or five hours daily it is not much to ask” to devote to their studies, “but one day must tell another, one week certify another, one month bear witness to another of the same story.” A few hours haphazardly spent and giant bursts of effort were both equally fruitless; it was necessary to combine focus and routine. (He lived what he preached: one fellow student recalled that in his habits Osler was “more regular and systematic than words can say.”)

Cramming is bad. Overwork is bad. A reasonable concerted effort over a long period of time is good.

Studying 4-5 hours a day was apparently a reasonable amount to Osler’s sensibility. Osler, if you recall, founded the first residency training program at Johns Hopkins.

Do you remember when the heads of the NBME and FSMB suggested in 2019 that a pass/fail USMLE Step 1 would be bad because students might take the decreased pressure as an opportunity to watch Netflix? Because I do.

Diagnostic FOMO

05.24.21 // Medicine, Radiology

From Suneel (brother of Sanjay) Gupta’s Backable: The Surprising Truth Behind What Makes People Take a Chance on You:

Apply the following quotation to why doctors don’t want to make the call:

If the fear of betting on the wrong idea is twice as powerful as the pleasure of betting on the right idea, then we can’t neutralize the fear of losing with the pleasure of winning. We can only neutralize the fear of losing with…the fear of losing. Enter FOMO, the fear of missing out. For backers, the only thing equally powerful to missing is…missing out.

Gupta goes on to discuss how potential backers initially too scared to be the first investor eventually pile on to avoid missing out on rare unicorns.

The fear of betting on the wrong idea in medicine manifests through overtesting and hedging. More than our desire to be right, we really don’t want to be wrong. But we can’t use the usual FOMO to our advantage, because medicine isn’t about making pitches or raising money but about directly helping individual people.

We don’t want to miss anything and so are forced to entertain everything, even if that means everyone in the ED gets a CT scan or a radiologist gives an impression a mile long with the words “cannot be excluded” featured prominently next to something extremely scary.

The true solution is this: we need to disentangle the outcome from the process. You can have good outcomes from bad decisions (dumb luck) or you can have bad outcomes after good decisions (bad luck). Luck and uncertainty are part of life, and they’re a big part of medicine. We should expect some bad outcomes even when doing the right thing, and we shouldn’t forget that overtesting and overdiagnosis have their own costs, risks, and harms. Passing the buck to the future doesn’t mean it won’t be paid.

By not making the call, we are making a decision: a decision to abdicate the diagnostic yield of an encounter or examination.

There are absolutely times when uncertainty is prudent. There are true “differential” cases. But the FOMO of diagnostic medicine should be passing up an opportunity to clearly define the next steps in a patient’s care.

Price Transparency and the True Cost of Quality Healthcare

05.19.21 // Medicine

When you read healthcare reviews online, so many of the 1-star reviews relate to prices: patients frustrated by high costs or surprised by high bills. It’s easy to think that price transparency rules will help. One key problem is that healthcare consumers are intermittently if not completely insulated from the true costs of their care due to the filter of commercial insurance. It’s hard to blame people for feeling that their doctor’s time is “worth” a $35 copay instead of the hundreds of dollars they really pay indirectly.

When my family moved from typical employer-provided health insurance to a high-deductible plan, I finally started seeing firsthand how much things really “cost,” and how ludicrous billing gamesmanship practices have become.

I’m a physician, and even I find it striking.

I recently received a bill for hundreds of dollars for an annual well-person visit that should have been covered at 100%. If you manage to complain about anything during the intake, you see, you also get billed for a problem visit at the same time.

Is that nuts? Well, yes, of course it is. But this is the world we live in and how institutions pay the bills.

Dr. Peter Ubel had an interesting article in The Atlantic back in 2013 called “How Price Transparency Could End Up Increasing Health-Care Costs” that holds up pretty well. His main thought experiment centers on imaging, which is an easy but sort of plus/minus example.

The same kind of consumer pressure rarely exerts a similar influence on the cost and quality of health-care goods. For starters, most patients have little inclination, or motivation, to shop for health-care bargains. Insurance companies pick up most of the tab for patients’ health-care. A patient who pays a $150 co-pay for an MRI (like I do with my insurance) won’t care whether the clinic she goes to charges the insurance company $400 or $800 for that MRI. The MRI is still going to cost the patient $150. Even patients responsible for 20 percent of the tab (a phenomenon called co-insurance) face a maximum bill of only $160 in this circumstance. That is not an inconsequential amount of money, but it is still not enough money to prompt most patients to shop around for less expensive alternatives, especially when most consumers don’t realize that the price of such services often varies significantly, with little discernible difference in quality.

To make matters worse, patients often don’t shop for health care in the kind of rationally defensible way that economic theory expects them to. According to neoclassical economics, when making purchasing decisions consumers independently weigh the costs of services from the quality of those same services. If toaster A is more expensive than toaster B, the consumer won’t buy A unless it is better than B in some way—unless it is more durable or has better features—and unless these improved features are worth the extra money.

While some patients shop around for imaging services, many stay within a larger system for all their care or go where their doctor tells them. A more meaningful scenario in a large metro would be to compare broad costs for multiple specialties and types of care across multiple health systems. Say, in Dallas, would you generally pay less at UT Southwestern, Health Texas, or Texas Health? Does that hold true for primary care and specialty care? Are there certain categories of chronic diseases that one network does better or worse with? What about labs and imaging?

Due to network effects, a consumer may not meaningfully be able to choose where to do every little thing, but rapidly comparing systems is perhaps not beyond reach. It would be nice to know, for example, which places are playing games to maximize insurance payouts at patients’ expense and which (if any) aren’t.

Sometimes, however, cost and quality are not perceived by consumers as being independent attributes. Instead, people assume the cost of a good or service tells them something about its quality. For instance, blind taste tests have shown that consumers rate the flavor of a $100 bottle of wine as being superior to that of a $10 bottle of wine, even when researchers have given people the exact same wines to drink. Other studies show that expensive pain pills reduce pain better than the same pills listed at a lower price. Price, then, leads to a placebo effect.

Such a placebo effect is no major concern in the context of wine tasting and pain pills (even if it suggests that consumers could save themselves some money if they didn’t hold this strange belief that higher cost means higher quality). But suppose your doctor asks you to get a spinal MRI to evaluate the cause of your back pain, and you decide to shop around for prices before getting the test. Would greater price transparency cause you to choose an MRI provider more rationally? Or would you instead mistakenly assume that higher price means higher quality? There is reason to worry that price transparency won’t lead consumers to make savvy decisions. It is too difficult for people to know which health-care provider offers the highest quality care.

If patients are not going to make savvy use of price information to choose higher quality, lower cost health-care, some health-care providers, like doctors and hospitals, will probably respond to price transparency by raising their prices.

And there’s the rub: is it a race to the bottom or a slow creep to the top? And if it’s both, how do we predict and influence the outcome? If the growth of debt-fluid corporate and private equity has taught us anything, it’s that competition is fickle, and it doesn’t take much for a dominant position to be abused.

Imagine you direct an MRI center in Massachusetts, and the state government requires you and your competitors to post prices for your services. You consequently find out that the MRI center around the corner from you charges $300 more than you do for their spinal MRIs, and that this increased price hasn’t hurt their business. Imagine, also, that you are convinced that your competitors don’t offer higher quality MRI scans than you do—your MRI machines are just as new and shiny as theirs; your radiologists and technicians are just as well trained. In that case, if patients are not going to be price-sensitive, you are going to raise your prices to match your competitor’s. Otherwise you are just leaving money on the table.

Quality in healthcare is a theoretically important metric but it is so, so poorly measured and understood. Customer satisfaction? Not so good. Outcomes? Highly influenced by patient selection. Healthcare is heterogeneous and complex.

Ultimately, the problem is complex and nuanced, but we should keep this in mind. Efforts to increase price transparency through state and federal law need to be carefully crafted and closely followed. Such laws should include research funding that would enable experts to evaluate how the law influences patient and provider behavior.

Also, whenever possible, price transparency should be accompanied by quality transparency. We need to provide consumers with information not only about the cost of their services but also about the quality of those services, so that they can trade off between the two when necessary. I recognize that this is a huge challenge. Measuring health care quality is no simple task. But if we are going to push for greater price transparency, we should also increase our efforts to determine the quality of health care offered by competing providers. Without such efforts, consumers will not know when, or whether, higher prices are justified.

It’s no surprise that optimizing for cost seems like a reasonable plan given how easy it is to compare versus how hard meaningful quality indicators are to measure.

But price selection in the absence of quality selection creates a perverse incentive for the cheapest lowest-quality-but-just-barely-permissible product.


Residency and the Craftsman Mentality

05.12.21 // Medicine, Reading

From Cal Newport’s excellent Deep Work: Rules for Focused Success in a Distracted World:

Whether you’re a writer, marketer, consultant, or lawyer: Your work is craft, and if you hone your ability and apply it with respect and care, then like the skilled wheelwright you can generate meaning in the daily efforts of your professional life.

You don’t need a rarified job; you need instead a rarified approach to your work.

Let’s add “physician” to Newport’s list.

One of the more disheartening aspects of medical school is the siloing of medical specialties such that different breeds of doctors appear to compete in the hospital and medical students come away with the idea that one specialty should spark passion in their hearts (and that they will be professionally unhappy if they then don’t match into that one specialty).

It doesn’t have to be this way.

The satisfaction of professional growth and a job well done can transcend specialty choice. If the results of the match weren’t what you wanted, apply yourself to developing a craftsman’s mentality. Get good at what you do, take pride in it, and passion can follow.


Explanations for the 2021 Official Step 1 Practice Questions

04.17.21 // Medicine

This year’s set was updated in February 2021 (PDF here).

The asterisks (*) signify a new question, of which there are only 2 (#24 and #53). The 2020 set explanations and PDF are available here; the comments on that post may be helpful if you have questions.

The less similar 2019 set is still available here for those looking for more free questions, and even older sets are all listed here. The 2019 and 2020 sets, for example, differed by 36 questions (in case you were curious).


Scheduling Slack

04.15.21 // Medicine, Reading

From Alan Weiss’s classic Getting Started in Consulting:

Medical consultants advise doctors never to schedule wall-to-wall appointments during the day, because inevitably there are emergencies, late patients, complications on routine exams, and so forth. These create a domino effect by day’s end, and some very unhappy scheduled patients. Instead, they advise some built-in slack time that can absorb the contingencies. If not needed, slack time provides valuable respite.

Ha.

I read this book years ago when I was a resident and came across this passage when reviewing my Kindle highlights the other day.

Perhaps there are real-life consultants advising as Weiss suggests, but this common-sense approach to sustainable medical practice is not one that many large health systems employ.

In my wife’s old outpatient academic practice, lunchtime wasn’t respite. It was an overbook slot, and her schedule was so jam-packed that there were always patients clamoring to squeeze in.

In order to make that all work, the average doctor spends 1-2 hours charting at home per day.

Contrast that with her current solo practice where she has complete autonomy: her patients aren’t scheduled wall to wall, and she has time for the inevitable emergencies, hospitalizations, collateral phone calls, prior auths, and the other vagaries of modern medical practice.

I’m proud of the practice she’s built—during a pandemic no less!—but it’s crazy that even academic medicine has become so corporatized that crafting her own business was the easier path to practicing on anything approaching the terms that would best serve her patients and herself.


Attending

04.08.21 // Medicine, Reading

A few separate passages I’ve combined from Dr. Ronald Epstein’s Attending: Medicine, Mindfulness, and Humanity:

Altogether, I saw too much harshness, mindlessness, and inhumanity. Medical school was dominated by facts, pathways, and mechanisms; residency was about learning to diagnose, treat, and do procedures, framed by a pit-of-the-stomach dread that you might kill someone by missing something or not knowing enough.

Good doctors need to be self-aware to practice at their best; self-awareness needs to be in the moment, not just Monday-morning quarterbacking; and no one had a road map.

The great physician-teacher William Osler once said, “We miss more by not seeing than by not knowing.”

The fast pace of clinical practice—accelerated by electronic records—requires juggling multiple tasks seemingly simultaneously. Although commonly thought of as multitasking, multitasking is a misnomer—we actually alternate among tasks. Each time we switch tasks we need time to recover and, during the recovery period, we are less effective. Psychologists call this interruption recovery failure, which sounds a bit like those computer error messages we all dread. We increasingly feel as if we are victims of distractions rather than in control of them.

Outside of the OR (and not always even then), it’s rare to find an environment that promotes the space for deep focus and self-awareness. Mindfulness, as a daily approach to medical practice, goes against the grain of one’s surroundings.

Good doctors need to be self-aware to practice at their best; self-awareness needs to be in the moment, not just Monday-morning quarterbacking.

I like that. Medicine is generally ripe for Monday-morning quarterbacking (and radiology in particular due to the permanent, accessible, and objective nature of the imaging record).

But doctors don’t work in vacuums. We are humans.

Consider for a moment the discipline of human factors engineering:

Human factors engineering is the discipline that attempts to identify and address these issues. It is the discipline that takes into account human strengths and limitations in the design of interactive systems that involve people, tools and technology, and work environments to ensure safety, effectiveness, and ease of use. A human factors engineer examines a particular activity in terms of its component tasks, and then assesses the physical demands, skill demands, mental workload, team dynamics, aspects of the work environment (e.g., adequate lighting, limited noise, or other distractions), and device design required to complete the task optimally. In essence, human factors engineering focuses on how systems work in actual practice, with real—and fallible—human beings at the controls, and attempts to design systems that optimize safety and minimize the risk of error in complex environments.

(I first found that passage plagiarized on page 8 of the American Board of Radiology’s Non-interpretive Skills Guide.)

Despite the rise of checklists and evidence-based medicine, humans have been almost designed out of healthcare entirely. Rarely is anything in the system—from the overburdened schedules, administrative tasks, constant messaging, system-wide emails, the cluttered EMR, or the byzantine billing/coding game—designed to help humans take the time and mental space to sit in front of a patient (or an imaging study, for that matter) and fully be, in that moment, a doctor.

Program directors and the pass/fail USMLE

03.31.21 // Medicine

Just over a year ago, the NBME announced that Step 1 would soon become pass/fail in 2022. A lot of program directors complained, saying the changes would make it harder to compare applicants. In this study of radiology PDs, most weren’t fans of the news:

A majority of PDs (69.6%) disagreed that the change is a good idea, and a minority (21.6%) believe the change will improve medical student well-being. Further, 90.7% of PDs believe a pass/fail format will make it more difficult to objectively compare applicants and most will place more emphasis on USMLE Step 2 scores and medical school reputation (89.3% and 72.7%, respectively).

Some students also complained, believing that a high Step score was their one chance to break into a competitive specialty.

There are two main reasons some program directors want to maintain a three-digit score for the USMLE exams.

The Bad Reason Step Scores Matter

One reason Step scores matter is that they’re a convenience metric that allows program staff to rapidly summarize a candidate’s merit across schools and other not-directly-comparable contexts. This is a garbage use case—in all the ways you might imagine—for several reasons:

  • The test wasn’t designed for this. It’s a licensing exam, and it’s a single data point.
  • The standard error of measurement is 6. According to the NBME scoring interpretation guide, “plus and minus one SEM represents an interval that will encompass about two thirds of the observed scores for an examinee’s given true score.” As in, given your score on test day, you should expect a score in that 12-point range only 2/3 of the time. That’s quite the range for an objective summary of a student’s worth.
  • The standard error of difference is 8, which is supposed to help us figure out if two candidates are statistically different. According to the NBME, “if the scores received by two examinees differ by two or more SEDs, it is likely that the examinees are different in their proficiency.” Another way of stating this is that within 16 points, we should consider applicants as being statistically inseparable. A 235 and 250 may seem like a big difference, but our treatment of candidates as such isn’t statistically valid. Not to mention, a statistical difference doesn’t mean a real-life clinical difference (a concept tested on Step 1, naturally).
  • The standard deviation is ~20 (19 in 2019), a broad range. With a mean of 232 in 2019 and our standard errors as above, the majority of applicants are going to fall into that +/- 1SD range with lots of overlap in the error ranges. All that hard work of these students is mostly just to see the average score creep up year to year (it was 229 in 2017 and 230 in 2018). If our goal was just to find the “smartest” 10% of medical students suitable for dermatology, then we could just use a nice IQ test and forget the whole USMLE thing.
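To make the arithmetic above concrete, here is a minimal sketch using the NBME’s published figures quoted above (SEM = 6, SED = 8). The function names are illustrative, not any official NBME tool:

```python
# Illustrative arithmetic for the NBME's published error figures.
# SEM = 6: about 2/3 of observed scores fall within +/- 1 SEM of the true score.
# SED = 8: two examinees likely differ in proficiency only when their scores
# differ by at least 2 * SED = 16 points.

SEM = 6
SED = 8

def sem_interval(observed_score):
    """The +/- 1 SEM band expected to capture ~2/3 of observed scores."""
    return (observed_score - SEM, observed_score + SEM)

def statistically_different(score_a, score_b):
    """Apply the two-SED rule: do these scores likely reflect different proficiency?"""
    return abs(score_a - score_b) >= 2 * SED

print(sem_interval(245))                  # (239, 251)
print(statistically_different(235, 250))  # False: 15 points is within the noise
print(statistically_different(230, 250))  # True: 20 points clears the 16-point bar
```

Under this rule, the 235-vs-250 comparison in the bullet above is statistically inseparable, which is the point: screening thresholds routinely treat differences smaller than the measurement error as meaningful.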

It’s easier to believe in a world where candidates with higher scores are both smarter and just plain better than it is to acknowledge that the score is a poor proxy for picking smart, hard-working, dedicated, honest, and caring doctors. You know, the things that would actually help predict future performance. Is there a difference in raw intelligence between someone with a 200 vs a 280? Almost certainly. That’s 4 standard deviations apart. But what about a 230 and a 245? How much are we really weighing the luxury of having the time and money to dedicate to Step prep?

In my field of radiology, I care a lot about your attention to detail (and maybe your tolerance for eyestrain). I care about your ability to not cut corners and lose your focus when you’re busy or at the end of a long shift. I care that you’re patient with others and care about the real humans on the other side of those images.

There’s no test for that.

If there were, it wouldn’t be given by the NBME.

The Less Bad Reason Step Scores Matter

But there is one use case that unfortunately has some merit: multiple-choice exams are pretty good at predicting performance on other multiple-choice exams. That wouldn’t matter here if licensure were the end of the test-taking game, but Step performance tends to predict future board exam performance.

Some board exams are quite challenging, and programs pride themselves on high pass rates and hate dealing with residents who can’t pass their boards. So Step 1 helps programs screen applicants by test-taking ability.

Once upon a time, I considered a career as a neurosurgeon instead of a neuroradiologist. No denying it certainly sounded cooler. I remember attending a meeting with the chair of neurosurgery at my medical school. This is only noteworthy because of his somewhat uncommon frankness. At the meeting, he said his absolute minimum interview/rank threshold was 230 (this was back around 2010). And I remember him saying the only reason he cared was because of the boards. They’d recently had a resident that everyone loved and thought was an excellent surgeon but just couldn’t seem to pass his boards after multiple attempts. It was a blight on the program.

Now, leave aside for a moment the possible issue with test validity if a dutiful clinician and excellent operator is being screened out over some multiple-choice questions. At the end of the day, programs need their residents to pass their boards. And it’s ideal if they pass their boards without special accommodations or other back-bending (like extra study time off-service) to help enable success. So while Step 1 cutoffs may be a way to quickly filter a large number of ERAS applications to a smaller more manageable number, they’re also a way to help programs in specialties with more challenging board exams ensure that candidates will eventually move on successfully to independent practice.

There is only one real reason a “good” Step score matters, and that is because specialty board certification exams are also broken.

One of the easiest ways a program can demonstrate high quality and maintain high board pass rates, regardless of the underlying training quality, is to select residents who can bring strong test-taking abilities to bear on another round of bullshitty multiple-choice exams.

A widely known secret is that board exams don’t exactly reflect real-life practice or real-life practical skills. Much of this type of board knowledge is learned by the trainees on their own, often through commercial prep products. A residency program in a field with a challenging board exam, like radiology, may be incentivized to pick students with high scores simply as a way to best ensure that their board pass rates will remain high. If Step 1 mania has taught us anything, it’s shown us that if you want high scores on a high-stakes exam, you pick people with high academic performance and then get out of their way.

What Are We Measuring?

When I see the work of other radiologists, I am rarely of the opinion that its quality depends on innate intelligence of the kind measured on a standardized exam. Ironically, most radiology exam questions concern obvious findings. Almost none rely on actually making the finding or on combating satisfaction of search (missing secondary or incidental findings when another finding is more obvious). And literally none test whether a radiologist can communicate findings in writing or verbally. When radiologists miss findings and get sued, the vast majority of cases involve “perceptual errors” and not “interpretive” ones. As in, when I miss things, it’s relatively rare that I misinterpreted the findings I made; more often I just didn’t see something (often something even I would normally catch [because I’m human]).

Obviously, it’s never a bad thing to be super smart or even hard-working. But the medical testing industrial complex has already selected sufficiently for intelligence. What it hasn’t selected for is being competent at practicing medicine.

While everyone would like to have a smarter doctor and train “smarter” residents, the key here is that board passage rates are another reflection of knowledge cached predominantly in general test-taking ability and not clinical prowess. All tests are an indirect measure, for obvious reasons, but most include a wide variety of dubiously useful material largely designed simply to make exams challenging without necessarily distinguishing capable from dangerous candidates.

So when program directors complain about a pass/fail Step 1, they should also be talking with their medical boards. I don’t think we should worry about seeing less qualified doctors, but we should be proactive about ensuring trainee success in the face of exams of arbitrary difficulty.


Private Equity & the Comeback of the For-Profit Medical School

03.29.21 // Medicine

You may be used to hearing about private equity takeovers of medical practices, but you may be less familiar with the recent growth of for-profit (primarily osteopathic) medical schools, two of which are owned by Medforth Global Healthcare Education. Medforth, as you might have guessed, is a private equity firm based in New York, NY.

Given the current osteopathic tilt of these for-profit schools, can this do anything but worsen the unfair stigma already facing DO students and physicians?

Well, here is an excerpt on how a recently proposed for-profit, private-equity-backed medical school in Billings, Montana got derailed:

Billings Clinic has had concerns about many aspects of the Medforth project. These concerns, combined with three events that occurred recently, have caused Billings Clinic to cease discussions with Medforth. On two separate occasions an executive representative of the medical school cast aspersions on a proposed medical school in Great Falls, Montana, on the basis of that medical school’s Jewish affiliation. Those statements intimated that a school with a stated Jewish heritage may not belong in Montana and would not be able to assimilate in the state. In a third instance, a different executive representative of the medical school referred to a female Billings Clinic leader as a “token.” These comments are inconsistent with Billings Clinic’s core values, including a dedication to diversity, inclusion, equity and belonging.

Ew. Now, are these clowns really a bunch of abhorrent scummy sexist racist antisemites? Absolutely a possibility, though flaunting that bias would be incredibly stupid.

Is it possible that much of this bigotry display instead reflects some poorly conceived cynical attempt to appeal to others believed to hold bigoted views? Do these private equity jokers just think that Montanans are a bunch of abhorrent scummy sexist racist antisemites?

Maybe it’s a bit of both. Maybe Medforth is just looking for kindred spirits.

When it comes to people running a medical school, neither possibility should be acceptable.

(h/t @jbcarmody)

Old Guard Medical Wisdom? Rest

03.26.21 // Medicine, Reading

From Rest: Why You Get More Done When You Work Less:

Neurosurgeon Wilder Penfield, for example, warned medical students that unless they cultivated other interests, “your specializing will expose you to an insidious disease that can shut you away from all but your occupational associates” and “imprison you in lonely solitude.” Penfield’s mentor, William Osler, warned that without care, “good men are ruined by success in practice,” and that “ever-increasing demands” can leave even the most curious person “worn out, yet not able to rest.” It was essential to develop “some intellectual pastime which may serve to keep you in touch with the world of art, of science, or of letters.”

These statements came from an era when residents literally lived in the hospital and Osler’s famous surgical colleague William Halsted’s work ethic was fueled by cocaine.

And even they thought it was important for doctors to be well-rounded, have hobbies, and get a life.

Honestly, I’m more interested in what you do for you than what boxes you’re just checking to impress me.
