I recently tried out Tesla’s Full Self-Driving capability for the first time. It was a decent, if bewildering and somewhat spooky, experience. It’s unusual to see a computer do a task in the real world that you have been doing—and have had to do yourself—for decades.
For some reason, feeling a computer exercise its judgment and control a vehicle in three-dimensional space is a little different from seeing its capabilities in a chat box. In some ways, it is less impressive; in others, much more.
Certainly, the current iteration—which requires you to keep your attention on the road—does not fulfill the promise of true self-driving: you are still, at least in part, the driver of the car and ultimately responsible for it, rather than a front-seat passenger free to do other things. And that’s for good reason.
The reality is that, as with human drivers, the issues are always in the edge cases. The average person does not get in a crash after every left turn, and neither do self-driving cars. I was struck by how easy it is to trust the system and how willing I was to let the car give it a go. Still, the Tesla struggled repeatedly in several locations during my experience. It may drive more miles without an accident than the average human driver, but the mistakes it currently makes are also ones that most people would not make:
It gets caught in turn-only lanes or in the wrong turning lane pretty consistently and doesn’t learn from those mistakes. It doesn’t, for example, move over nearly fast enough for a right-hand turn after exiting the highway when there is a long line of cars waiting. More dramatically, in downtown Dallas’s web of one-way streets, it tried to go the wrong way at a fork, into oncoming traffic. It also ran a red light after trailing an 18-wheeler so closely that it couldn’t see the traffic light.
That being said, what struck me most is how quickly humans trust—how rapidly automation bias takes hold simply by virtue of the product being available. You develop a serious sense of confidence in its capabilities. You see it handle a couple of turns, stop appropriately, go appropriately, and check the blind spot before changing lanes, and suddenly you believe it can do the real deal. You’re ready to trust it on the road in ways that should frankly be pretty surprising. You begin to resent its incessant demand that you pay attention.
In some post-AGI arguments, people say that humans like other humans and really want other humans in the loop—but I’m actually not sure that’s the case. Especially not if the lack of humans in the loop means faster, cheaper, or more customizable service. Really, every situation could be different, and how you feel about your doctor or your pastor might be different from how you feel about yourself as a driver or about cabbies.
But these sorts of shortcomings are probably a matter of time, and the downstream consequences of even just this narrow use are profound and unknowable. I don’t want to opine on American car culture or how cities might deal with fleets of self-driving cars waiting for passengers.
It does make you wonder how high the bar for AI adoption really is for any particular task. If we can so easily trust a car with a matter of personal life and death, how short will the shelf life of human expertise be in other contexts?
In healthcare, for example, I have seen nothing autonomous that is remotely reliable yet. That is why adoption beyond personal use and low-hanging general LLM fruit—scribing, summaries and impressions, some clinical decision support and brainstorming, and the like—remains relatively low, apart from narrow models flagging findings like pulmonary emboli or diagnosing a single etiology like diabetic retinopathy. If, even with broader tool adoption, we still need a human with hands on the wheel to prevent catastrophe in rare occurrences, then we will need to design the system carefully to keep that human actually paying attention, not merely performing attentiveness to placate a monitoring algorithm while their focus drifts from the road.
The question, as always, becomes: is the purpose to be more efficient, or to provide a better product? The two are in constant tension.
If you search for reviews or commentary on Tesla self-driving, the most common thing you’ll find is frustration that the car won’t let you stop paying attention. Autopilot is frustrating. Driving is fine. Being a passenger is fine. Being in between (other than in stop-and-go traffic) is kinda annoying.
This is likely a temporary problem, and while driving differs from many people’s jobs, I think it speaks to an underlying human desire: people do not like feeling superfluous. I do not think that, as a species, we will enjoy being forced to rubber-stamp or clock in just to keep up the pretense of a human in the loop (i.e., drawing a paycheck as a liability operator/sin-eater).
Real work is meaningful. Box-checking is soul-sucking and demoralizing, and it breeds resentment.