I got an under-desk elliptical a couple of weeks ago, and I think it’s awesome; I wish I had gotten one a long time ago. I’m honestly a little surprised I can pedal while thinking.

I tried a few, and this is the one I landed on: very stable, pretty cheap, reasonably quiet.

// 03.11.24

C.S. Lewis (of Narnia fame) on peer learning:

It often happens that two schoolboys can solve difficulties in their work for one another better than the master can. The fellow-pupil can help more than the master because he knows less. The difficulty we want him to explain is one he has recently met. The expert met it so long ago he has forgotten.

I’ve always been a big proponent of peer teaching and peer mentoring in medicine. I also often wonder if I’m getting worse at teaching the basics as I get older.

// 01.15.24

From “How to Do Great Work” by Paul Graham:

Schools also give you a misleading impression of what work is like. In school they tell you what the problems are, and they’re almost always soluble using no more than you’ve been taught so far. In real life you have to figure out what the problems are, and you often don’t know if they’re soluble at all.

Schools sometimes also give students the misleading impression that learning is not fun for its own sake and that writing should be boring.

// 01.04.24

From “The Bitter Lesson” by Rich Sutton:

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researcher’s time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.

[…]

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

// 01.03.24