A struggling post-bankruptcy Envision was so desperate to get out of their radiology business—presumably due to the impossibility of recruiting/retaining, meeting clinical obligations, navigating the general tumult, etc.—that they have agreed to “transition” their remaining practices/contracts to RadPartners. Sounds like they already shed the salable groups/assets/non-competes, so this is most likely a final liability dump/hand-washing endeavor. It’s unclear how much money changed hands for the remaining 400 rads and some desperate/unhappy hospitals.
I don’t think I was ever more uncertain about my chosen field than during the first couple of months of my R1 year. Coming off my intern year, I had gained in skill and responsibility, and I wouldn’t have been unhappy taking on a role as an internist during my PGY2 year.
I didn’t read all that much radiology material during my intern year and had no radiology electives because there was no radiology residency where I did my transitional year (an ACGME requirement). So when I began radiology at a new institution—with new people and a new hospital—it was a complete reset.
The first lecture I attended as a radiology resident was GU #3, the third part in a series of case conferences on genitourinary imaging, covering topics like intravenous pyelograms. I had absolutely no idea what was going on. That feeling—of being completely lost—defined much of my early experience in radiology. I lacked the foundation to get anything meaningful out of the lectures.
In the reading room, I spent a lot of time transcribing and editing reports—often repeating words I didn’t understand about anatomy I barely knew. We had a weekly first-year radiology resident interactive conference (a 2-hour pimp-session) based on chapters from Brant and Helms, but this meant I had to do additional reading on my own time, which didn’t always align with what I needed to learn for my rotation. The questions were always challenging and got harder until you failed. There was no escape.
Of course, in the end, it all worked out. At the time, I benefited from some slower rotations at the VA, which gave me some extra time to shore up my reading. And I kept plugging away, day after day on service, doing my best to understand what I was looking at and awkwardly dictate that into comprehensible English (hopefully quietly enough that no one could hear me).
It’s not weird to find radiology disorienting when you first start—it should be expected. The medical school process trains you for clinical medicine. Especially between third year, fourth year, and the intern year, you develop along a continuum that doesn’t naturally lead toward a career in diagnostic radiology.
Becoming a radiology resident is a step backward in personal efficacy. For someone who has done well in school, met expectations across multiple clerkships, and excelled on tests, it’s frustrating to suddenly feel useless.
Some people struggle with feeling like they’re not a “real doctor” in radiology because they are removed from direct clinical care for a large portion of their time. But that sense of detachment is even more profound when you can’t even do your job yet. You can only watch an attending highlight your entire report, delete it en bloc, and start from scratch so many times before your ego takes a hit.
Some attendings even dictate reports to you word for word as though you’re very slow, inaccurate, fleshy dictation software, and then judge your performance by how well you parrot everything back. This process can feel infantilizing.
But, as I’ve previously discussed in the craftsmanship mentality of residency training, I believe we can find satisfaction in our work by taking pride in doing it well.
Reading books is important. Doing practice cases and questions is important. Watching videos can be helpful. You absolutely must do the extra work to become proficient in radiology. You can’t just rely on the list gods to expose you to the full spectrum of pathology needed to adequately learn radiology and provide high-quality diagnostic care.
When everything feels overwhelming—the sheer volume of material, the anatomical complexity, the endless variations in pathology—the answer is to take it one scan at a time.
From the titular reference of Anne Lamott’s beloved Bird by Bird: Some Instructions on Writing and Life:
Thirty years ago my older brother, who was ten years old at the time, was trying to get a report on birds written that he’d had three months to write, which was due the next day. We were out at our family cabin in Bolinas, and he was at the kitchen table, close to tears, surrounded by binder paper and pencils and unopened books on birds, immobilized by the hugeness of the task ahead. Then my father sat down beside him, put his arm around my brother’s shoulder, and said, ‘Bird by bird, buddy. Just take it bird by bird.’
You learn by doing. Every day is a learning experience. Every scan is a chance to learn a new anatomical structure or detail. Every pathology is an opportunity to expand your internal library of normal versus abnormal. Every case is a lesson—not just in recognizing the pathology present but also in differentiating it from other possible diagnoses. Yes, the work has to get done, but it can’t just be about getting through the work.
The key to being good at radiology—beyond hard work, attention to detail, and sustained focus—is realizing that taking it scan by scan isn’t just a temporary survival strategy for residency:
It’s the way we learn—when we’re right, actively reinforcing our knowledge, and when we’re wrong, absorbing the painful but essential lessons that only come from making mistakes over and over and over again.
I made the mistake of procrastinating on something more meaningful by reading what a variety of random commenters had to say about issues related to radiology. One type of flawed thinking stuck out: the all-or-nothing fallacy.
For example, as it pertains to artificial intelligence, the argument often goes, “AI will never replace a human in doing what I can do, and therefore I can ignore it.” Or, “I put a radiology screenshot into a general-purpose LLM and it was wrong,” or “our current commercially available pixel-based AI is wrong a lot,” and therefore, “I can ignore the entire industry indefinitely based on the current commercially available products.”
Leave aside the potentially short-sighted disregard for this growing sector because of its obvious and glaring current shortcomings. Even the current state of the art can have an impact without actually replacing a human being in a high-level, high-training, high-stakes cognitive task.
For instance, let’s say the current radiologist market is short a few thousand radiologists—roughly 10% of the workforce. Basic math says we could:
- Hire 10% more human beings to fill the gap (difficult in the short term)
- Reduce the overall workload by 10% (highly unlikely)
- Increase efficiency by 10%
The reality is, it doesn’t take that much magic to make radiologists 10–20% more efficient, even with just streamlining non-interpretive, non-pixel-based tasks. If only enterprise software sucked less…
We don’t need to reach the point of pre-dictated draft reports for that to happen. There’s plenty of low-hanging fruit. Rapid efficiency gains can come from relatively small improvements, such as:
- Better dictation and information transfer. When dictation software is able to transcribe your verbal shorthand easily (like a good resident), radiology is a whole different world.
- Real summaries of patient histories.
- Automated contrast dose reporting in reports.
- Summaries of prior reports and follow-up issues (e.g., “no change” reports where previous findings are restated with customizable style and depth).
- Automated transfer of measurements from PACS into reports with series/image numbers.
- Automated pre-filling of certain report styles (e.g., ultrasound or DEXA) based on OCR of handwritten or otherwise untransferable PDFs scanned into DICOM.
These tasks, as currently performed by expensive radiologists, do not require high-level training but instead demand tedious effort. Addressing them would reduce inefficiency and alleviate a substantial contribution to the tedium and frustration of the job.
Anyone who thinks these growing capabilities—while not all here yet, nor evenly distributed as they arrive—can’t in aggregate have an impact on the job market is mistaken. And if AI isn’t implemented quickly enough to prevent the continued expansion of imaging interpretation by non-physician providers, the radiology job market will be forced to contend with a combination of both factors, potentially leading to even more drastic consequences.
When you extrapolate a line or curve based on just two data points, you have no real idea where you started, where you’re headed, or where you’re going to end up. Just because you can draw a slope doesn’t make the line of best fit meaningfully reflect reality or extrapolate to a correct conclusion.
Don’t fall prey to simple black-and-white thinking.
What is quality care, and how do you define it?
I suspect that for most people, quality is like pornography in the classic Supreme Court sense—you know it when you see it. But quality is almost assuredly not viewed that way when zoomed out to look at care delivery in a broad, collective sense. Instead, it’s often reduced to meeting a handful of predetermined outcome or compliance metrics, like pneumonia readmission rates or the markers of a job well done as defined in the MIPS program.
The reality is that authoritative, top-down central planning in something as variable and complicated as healthcare is extremely challenging, even if well-intentioned. As Goodhart’s law says, “When a measure becomes a target, it ceases to be a good measure.”
In the real world, I would argue a quality radiology report is one that is accurate in its interpretation and clear in its communication. But without universal peer review, double reading, or an AI overlord monitoring everyone’s output, there is no way to actually assess real quality at scale. You can’t even tell from the report if someone is correct in what they’re saying without looking at the pictures. Even “objectively” assessing whether they are clear or helpful in just their written communication requires either a human reviewer or AI to grade language and organization by some sort of mutually agreed-on rubric. It’s simply not feasible without a significant change to how healthcare is practiced.
And so, we resort to proxy metrics—like whether the appropriate follow-up recommendation for a handful of incidental findings was made. The irony, of course, is that many of these quality metrics are a combination of consensus guidelines and healthcare gamesmanship developed by non-impartial participants with no proof they reflect or are even associated with meaningful quality at all.
We should all want the quality of radiology reporting to improve, both in accuracy and in clarity. Many of these problems have been intractable because potential solutions are not scalable with current tools and current manpower. That’s why you’ll soon be hearing about AI for everything: AI solves the scaling problem, and over the coming years even imperfect tools will rapidly eclipse our current methods like cursory peer review.
Everyone would rather have automated incidental finding tracking than what most of us are still using for MIPS compliance. Right now, it’s still easy to get dinged and lose real money because you or your colleagues too often omitted some BS footer bloat about the source of your follow-up recommendations for pulmonary nodules. Increased quality without increased effort is hard to complain about.
But even just imagine you have a cheap LLM-derived tool that catches sidedness errors (e.g., a right-sided abnormality in the findings and left in the impression) or missing clarity words like forgetting the word “No” in the impression (or, hey, even just the phrase “correlate clinically”). This already exists: it’s trivial, requires zero pixel-based AI (and—I know—is rapidly becoming table stakes for updated dictation software), but widespread adoption would likely have a more meaningful impact on real report quality than most of the box checking we do currently. A company could easily create a wrapper for one of several current commercial products and sell it tomorrow for those of us stuck on legacy systems. It might even be purchased and run by third parties (hospitals, payors, Covera Health, whatever) to decide which groups have “better” radiologists.
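To make that concrete, here is a toy, rule-based sketch in AutoHotkey of just the sidedness half of that idea (no LLM, no pixels, just pattern matching). Everything about it is hypothetical: it assumes the report text has been copied to the clipboard, that your template uses “FINDINGS” and “IMPRESSION” headers, and that Ctrl+Alt+L is a free hotkey. A real tool would obviously live inside the dictation software rather than a script.
; Hypothetical sketch: flag a left/right mismatch between findings and impression.
; Assumes the report is on the clipboard and uses FINDINGS/IMPRESSION headers.
^!l::
report := Clipboard
RegExMatch(report, "is)FINDINGS:(.*?)IMPRESSION:", fnd) ; fnd1 = findings text
RegExMatch(report, "is)IMPRESSION:(.*)", imp) ; imp1 = impression text
rightInFindings := RegExMatch(fnd1, "i)\bright\b")
leftInFindings := RegExMatch(fnd1, "i)\bleft\b")
rightInImpression := RegExMatch(imp1, "i)\bright\b")
leftInImpression := RegExMatch(imp1, "i)\bleft\b")
if ((rightInImpression and leftInFindings and not rightInFindings) or (leftInImpression and rightInFindings and not leftInFindings))
MsgBox Possible sidedness mismatch between the findings and the impression.
else
MsgBox No left/right mismatch flagged (which is not the same as being correct).
return
Crude doesn’t mean useless: even this level of string matching would catch a subset of real, embarrassing errors, and a commercial version would presumably swap the regex for an LLM and check the report before it’s signed.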
But now, take it that one step further. We’ve all gotten phone calls from clinicians asking us to translate a colleague’s confusing report. Would a bad “clarity score” get some radiologists to start dictating comprehensible reports?
It’s not hard to leap from an obviously good idea (catching dictation slips) to more dramatic oversight (official grammar police).
Changes to information processing and development costs mean the gap between notion and execution is narrowing. As scalable solutions proliferate, the question then becomes: who will be the radiology quality police, and who is going to pay for it?
As we discussed in The Necessity of Internal Moonlighting, groups regularly need some extra manpower to maintain turnaround times or mitigate misery without needing a full additional FTE shift on the schedule (or, alternatively, they do need some real shiftwork but don’t want to press people into service without additional reward).
Take this recent article about “Surge” staffing in radiology as described in Radiology Business:
On-service radiologists utilize Microsoft Teams to contact available nonscheduled rads during periods of heavy demand. Team members who are available then can log on remotely and restore the worklist to a “more manageable length,” logging their surge times in the scheduling system in five-minute increments. Compensation is based on the duration of the surge and time of day when it occurs.
Just-in-time overflow help is an important use case of internal moonlighting, and doing this with less friction is exactly what LnQ is trying to facilitate and streamline.
Firstly, it should almost go without saying, but: you can do this.
I’d also like to acknowledge that nothing below is particularly noteworthy or novel advice. The Core Exam is like the other high-stakes multiple choice exams you’ve taken except for the fact that it has more pictures.
And, of course, the question of how to pass the Core Exam after a failure is mostly the same as asking how to pass it in the first place. Before we get further, I published a series of articles grouped here as a “Guide to the Core Exam” that lays out a lot of helpful information. There are some out-of-date passages (like the ability to fail based on physics alone and details about in-person testing), but the core information is unchanged.
Acknowledge Luck
As you no doubt noticed during your first attempt(s), the questions on the ABR exams are somewhat arbitrary in what they include, so part of your performance is luck of the draw. While the difficulty is curated, the specific content breakdown is not a perfect cross section of possible topics. You can have the same diagnosis multiple times but then zero questions on broad swaths of important material. How your knowledge gaps line up can make a big difference.
Your performance on a given day is a product of your variable skill (energy, focus, attention, etc.) and the exact exam you get. All things being equal, that means a borderline failure is also truly a near pass.
Dissect Your Performance
Look at the two breakdowns: organ (breast, cardiovascular, GI, GU, MSK, neuro, peds, and thoracic) and modality (CT, IR, MR, Nucs, radiography, US, NIS, and Physics). See if you have outliers, and plan to shore up those shortfalls with some extra targeted review.
At the same time, do not neglect your strengths entirely. Backsliding is counterproductive.
The nature of spaced repetition is that you need more reps more frequently for new knowledge and problem areas and fewer reps spaced further apart for your strengths—but you still need reps across the board.
Further Reading: Learning & The Transfer Problem
Review Your Study Methods
What exactly was your study method and plan for your initial attempt(s)?
There are a couple of maladaptive tendencies common amongst medical students that can persist into residency:
- The tendency to graze across too many resources. Focus on fewer things and learn them deeply.
- The tendency to spend too much time passively reading (and especially re-reading) books like Crack the Core at the expense of doing lots of high-quality questions. We are radiologists, and the majority of the exam is image identification: you need to look at lots and lots of pictures.
When it comes to doing practice questions, you also need to zoom out and look for trends:
More than just stronger/weaker subspecialty performance, are there any themes to why you get questions wrong? Is there a time component? Is it that you often don’t see the finding in the picture? That you simply don’t know the diagnosis? Or that you’re being fooled by differential considerations and need to focus on key features that distinguish between plausible alternatives? Is it a confidence issue and you’re overthinking it, getting spooked by questions that seem too easy? If you change answers, are you more likely to change from wrong to right or right to wrong? (I think most people overthink it and change for the worse).
If there’s a pattern, it can be the key to unlocking performance.
Further Reading: Dealing with Test Anxiety and Demoralization
Questions/Cases > Reading >> Re-reading
First: Horses > Zebras.
In general, the biggest bang for your buck is still going to be common diagnoses (including weird looks of common things) and normal variants over esoterica. Rare things mostly show up when they are absolute Aunt Minnies that you can recognize at a glance (hence the need for lots of image-based questions).
On a related note, if you never saw it, the ancient free official ABR practice test from 2014 is available on the Wayback Machine here.
Also worth mentioning: NIS is a bunch of random material. Many people can read the manual a couple of times and get the job done here, but the reality is that these need to mostly be easy points. If you don’t retain pseudoscientific business jargon naturally, then don’t shirk the review here. The NIS App, for example, is well-reviewed, and there is an Anki deck as well.
Spaced Repetition
You can use the ACR DXIT/in-service Anki deck for a large number of good free questions. You could also use one of the massive Core Radiology decks. But for the second round of studying after a failure, the highest-yield move may be making quick cards (with your phone’s camera or with screenshots) of every question you guess on or get wrong from whatever source you’re using and incorporating those into repeated longitudinal review.
In Windows, for example, Windows+Shift+S opens up a quick adjustable screenshot reticle that will copy that portion of your screen to the clipboard.
On Mac, the adjustable screenshot shortcut is Shift+Command+4, which automatically saves to the desktop. To copy to your clipboard instead, add Control: Control+Shift+Command+4.
The Reality
Passing is almost certainly a matter of doing more high-quality reps while not over-focusing on weaknesses such that you backslide on your relative strengths. The Core Exam is arbitrary enough that some of your high- and low-performance areas may not be as meaningful as you think, so you need to continue broad reps in addition to the extra targeted review.
Once you can emotionally recover from the setback and just get back to it, it’s going to work out.
Further Reading: Focused Nonchalance
One thing our recent discussions of the nationwide shortage of radiologists didn’t include (in addition to a solution) is how the shortage has impacted the culture of radiology:
Pulse and a License
One of the concerning features of the current staffing shortage is the desperation with which many universities/hospitals/groups are recruiting new talent. When the job market is tight and groups are well-staffed, they get to be picky about cultural fit. Groups prioritizing compensation or efficiency can recruit fast radiologists or those with a broad skillset and flexible attitude. Groups that prioritize quality, academic productivity, or relationship-building can at least attempt to select for proxy features. Groups can grow in ways that align with their culture and mission.
But when there is far too much work to do and not enough people to do it, priorities shift to things like having a pulse and an active medical license, hopefully without board complaints or a felony record.
In Jim Collins’ Good to Great, there is a whole chapter dedicated to hiring. He argues that the key to creating a great business is not the mission or values, a charismatic leader, technology, or a clever strategic plan. The key foundation on which all other components rest is having the right people.
Sabotaging culture to generate revenues or get the work done may feel like a necessary choice in the short term (it also may be unavoidable when the alternative is operational insolvency), but it also has long-term consequences. Culture isn’t something you can just talk into existence; it is something that arises organically from good people doing things they believe in. As Collins says, “The best people don’t need to be managed.”
Fast Readers vs Practice Builders
One of the ways that cultural breakdowns manifest in radiology practices is the conflict between fast readers and practice builders (or, in academics: gold-star earners vs worker bees). Obviously, there are efficient radiologists who can do everything well, churning out an incredible volume of high-quality reports, and there are lazy people who do a bad job producing a relative molehill. Real life is a continuum, but the fake dichotomy is helpful for discussion.
I’m not even going to pretend that one is good and the other bad. The reality is that each of us is on a continuous spectrum as opposed to a caricature at the ends of the curve. The problem is that both extremes are real approaches and that good people in each camp become frustrated when the culture of the group does not align with their personal preferences. I see this discussed online all the time. Fast readers bemoan the freeloaders who are being paid the same or a similar amount for “less work.” But practices also rely on good citizens to get the important but non-remunerative work done. As an organization scales—and many groups have grown significantly over the past decade—reconciling these competing visions for an ideal radiology practice can seem impossible.
From a practice compensation standpoint, it’s easy for a group to fall into a no man’s land where the compensation plan doesn’t reward speed enough for the high-productivity readers to be happy or rewards speed too much for the less “productive” members who get bogged down in the most complex cases, want more time to produce helpfully detailed reports, speak to clinicians, answer technologist questions, or teach residents. This can be compounded to disastrous effect by the cherry-picking that ravages some practices utilizing a productivity model, especially those that do not enforce list hygiene through automatic case assignment or customized internal work units to balance case desirability. If you log into a list and it’s full of plain films, thyroid ultrasounds, and chest CTs, you are logging into an environment where this hasn’t been figured out yet.
We touched on this previously in quality, speed, and “productivity”—and I didn’t have a magic bullet in that discussion either. These are real problems, and if there was a universal easy solution, everyone would be doing it. My feeling, as concluded in that brief article, is that the table stakes in the future are to automate case assignment ± customized internal RVUs to better account for effort ± account for “work” RVUs for nonbillable tasks (but bean counting every single thing you do has its own very real negative consequences). The quality/speed tradeoff is inherent to radiology work, but a productivity model that doesn’t take some variation of this into account is too ripe for cheating and gamesmanship.
Culture isn’t Optional; It’s Organic
My argument in the first two sections of this post is that compromising on hiring and being passive about internal work divisions and the productivity question are a one-two punch. I increasingly believe that places that don’t figure this out become revolving doors. At that point, recruitment is purely mercenary, based on measurables. There are people who are willing to work that way, but long term, I don’t believe that leads to satisfaction or stability.
Culture happens whether we want it to or not, and creating a job that people enjoy and are loyal to doesn’t happen by accident. We are in a period of increased volatility in the radiology workforce regardless of what a practice does, but any job can become more stable if it feels like a meaningful career.
In this previous post about breast imaging, we briefly touched on the soon-to-be-growing-and-maybe-even-critical problem of automation bias in radiology caused by the growing use of AI.
This study evaluating AI-assisted detection of cerebral aneurysms had similar findings:
Results
False-positive AI results led to significantly higher suspicion of aneurysm findings (p = 0.01). Inexperienced readers further recommended significantly more intense follow-up examinations when presented with false-positive AI findings (p = 0.005). Reading times were significantly shorter with AI assistance in inexperienced (164.1 vs 228.2 s; p < 0.001), moderately experienced (126.2 vs 156.5 s; p < 0.009), and very experienced (117.9 vs 153.5 s; p < 0.001) readers alike.
Conclusion
Our results demonstrate the susceptibility of radiology readers to automation bias in detecting cerebral aneurysms in TOF-MRA studies when encountering false-positive AI findings. While AI systems for cerebral aneurysm detection can provide benefits, challenges in human–AI interaction need to be mitigated to ensure safe and effective adoption.
Everyone got faster, but inexperienced readers were fooled by false positives.
This is going to be such a problem.
The reality is that using AI to make us faster is so incredibly ripe for these outcomes. Sure, we could use AI to catch mistakes after an initial independent rad interpretation, and we could even set up such a system to use a third party to adjudicate persistent disagreements in a blinded fashion (i.e., a neutral third-party radiologist or maybe a different AI agent picks the winner without knowing who they’re siding with)—but the raw economics absolutely point to us using AI as a resident first draft as soon as feasible. It’s going to get messy.
There is an argument that you will have to increasingly be an expert in order to outperform an increasingly competent algorithm. While many current machine mistakes are obvious to experienced radiologists, failures won’t always be comically clear in the future. Assuming we need humans for the long term, training and training quality are critical, and doing so in a way that shields humans from tainting and overreliance on computers will be key.
Yes, pilots use autopilot, but some of those big life-saving stories make the news precisely because pilots also sometimes need to take control.
Some really good follows on the Imaging Wire’s 2025 list of Top 40 Radiology Resources. I’ll happily accept the description of “excellent insights into the vagaries of being a working radiologist.”
In my article on using AutoHotkey for radiology, I describe a click-lock script I use to simulate holding down the left mouse button. This allows me to power-scroll by using a single keystroke (in my case, backslash) to toggle scrolling on/off instead of needing to hold the mouse in a death grip for hours a day (which is a great way to destroy your wrist):
;toggle holding down the left mouse button
\::
alt := not alt
if (alt)
{
Click Down
}
else
{
Click Up
}
Return
If you also happened to read my post on radiology equipment or the follow-up deeper dive on how I use the Contour Shuttle for radiology, you may also know that I really enjoy autoscrolling with the Shuttle’s outer dial: When I twist the dial, each “hour” on the clockface fires the mouse scroll wheel multiple times a second, letting me scroll at varying speeds without needing to move at all. It takes some getting used to, but it’s awesome.
Not everyone has the Shuttle or the ability to install software on hospital computers, so I was thinking about how to recreate that without the wheel.
The following script was made—with a surprising amount of back and forth—using ChatGPT (I just kept telling it what errors it was getting, and it eventually figured it out). I include it here as a potentially helpful tool but mostly to inspire you to play with making your own things to solve your own needs. The LLMs available for free online now make this sort of thing comically easy compared to even just a couple of years ago.
The way this example works is by combining Alt + any number key (1–9) to scroll up and Ctrl + 1–9 to scroll down. The higher the number you press, the faster you scroll. As in, Alt+1 scrolls slowly and Alt+9 scrolls quickly. The reality is that anyone using some variant of this would almost certainly want to change the hotkeys used on an actual keyboard (perhaps using ZXC and ASD for slow, medium, and fast scrolling respectively instead of the numbers; one such variant is sketched after the script), but it would probably be best used with a small keypad where you could pick a handful of your favorite speeds and assign them to some obscure key combinations that you would then map to the keypad buttons.
Regardless, the point is that with a small amount of work, we can set up an off-hand alternative to jerking the mouse wheel back and forth incessantly. The more joints we spread these repetitive motions to, the better.
Enjoy:
#Persistent
#SingleInstance Force
SetBatchLines, -1
; Define scroll speeds (in milliseconds per scroll; smaller = faster, e.g., 25 ms is ~40 scrolls per second)
scrollSpeeds := [1000, 500, 200, 100, 67, 50, 40, 33, 25]
; Variables to track active scrolling
scrollUpActive := false
scrollDownActive := false
; Function to start scrolling up
StartScrollUp(speed) {
global scrollUpActive
scrollUpActive := true
while (scrollUpActive) {
Send {WheelUp}
Sleep speed
}
}
; Function to start scrolling down
StartScrollDown(speed) {
global scrollDownActive
scrollDownActive := true
while (scrollDownActive) {
Send {WheelDown}
Sleep speed
}
}
; Function to stop scrolling
StopScrolling() {
global scrollUpActive, scrollDownActive
scrollUpActive := false
scrollDownActive := false
}
; Manually Define Hotkeys for Alt + 1-9 (Scroll Up)
~Alt & 1::StartScrollUp(scrollSpeeds[1])
~Alt & 2::StartScrollUp(scrollSpeeds[2])
~Alt & 3::StartScrollUp(scrollSpeeds[3])
~Alt & 4::StartScrollUp(scrollSpeeds[4])
~Alt & 5::StartScrollUp(scrollSpeeds[5])
~Alt & 6::StartScrollUp(scrollSpeeds[6])
~Alt & 7::StartScrollUp(scrollSpeeds[7])
~Alt & 8::StartScrollUp(scrollSpeeds[8])
~Alt & 9::StartScrollUp(scrollSpeeds[9])
; Manually Define Hotkeys for Ctrl + 1-9 (Scroll Down)
~Ctrl & 1::StartScrollDown(scrollSpeeds[1])
~Ctrl & 2::StartScrollDown(scrollSpeeds[2])
~Ctrl & 3::StartScrollDown(scrollSpeeds[3])
~Ctrl & 4::StartScrollDown(scrollSpeeds[4])
~Ctrl & 5::StartScrollDown(scrollSpeeds[5])
~Ctrl & 6::StartScrollDown(scrollSpeeds[6])
~Ctrl & 7::StartScrollDown(scrollSpeeds[7])
~Ctrl & 8::StartScrollDown(scrollSpeeds[8])
~Ctrl & 9::StartScrollDown(scrollSpeeds[9])
; Ensure scrolling stops when releasing Alt or Ctrl
~Alt Up::
~Ctrl Up::
StopScrolling()
return
Note that this script as copy/pasted doesn’t play nicely with my scripts in the other post because I personally use the Ctrl key in my macros to control PowerScribe, but changing things up is as easy as just changing a letter or two.
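To illustrate just how small a change that is, here is one hypothetical way to implement the ZXC/ASD idea mentioned above: three named speeds per direction on letter keys, with Alt doing double duty so the existing Alt Up hotkey still stops the scrolling (and Ctrl stays free for PowerScribe). It assumes the functions and the scrollSpeeds array from the script above are already loaded; the specific keys and speed choices are arbitrary.
; Hypothetical variant: letter keys instead of numbers (slow/medium/fast per direction).
; Reuses StartScrollUp/StartScrollDown, scrollSpeeds, and the ~Alt Up stop hotkey above.
~Alt & z::StartScrollUp(scrollSpeeds[2]) ; slow scroll up
~Alt & x::StartScrollUp(scrollSpeeds[5]) ; medium scroll up
~Alt & c::StartScrollUp(scrollSpeeds[8]) ; fast scroll up
~Alt & a::StartScrollDown(scrollSpeeds[2]) ; slow scroll down
~Alt & s::StartScrollDown(scrollSpeeds[5]) ; medium scroll down
~Alt & d::StartScrollDown(scrollSpeeds[8]) ; fast scroll down
Releasing Alt stops the scroll just like before. If Alt+letter combinations collide with something in your PACS or dictation software, pick different keys; the structure is the point.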
I am not an expert here, and I guarantee there are better ways to achieve this functionality, but stuff like this is a great example of what’s possible for a novice with a little vibe coding enabled by current LLMs.