Okay, I’ve had about enough of Tesla’s zombie legion of brainwashed fans reflexively and ignorantly defending them on autopilot grounds, so it’s time for a good old-fashioned rant. I have two targets.
First: autopilot itself. Tesla’s autopilot is a nifty technological achievement. In its current state, though, it’s dangerous, and it disregards seventy years of research into how humans interact with machines. This book, on the specific topic of human reliability in transit systems, cites just over two hundred sources. In the world of trains, locomotive cabs usually feature a device called an alerter. If the driver doesn’t flip a switch or press a button every so often, the locomotive automatically stops.
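The alerter’s logic is simple enough to sketch. Here’s a minimal, hypothetical version in Python—the timeout values are illustrative, not taken from any real railroad’s spec:

```python
import time

class Alerter:
    """Minimal sketch of a locomotive alerter (dead-man) timer.

    If the driver doesn't acknowledge within `warn_after` seconds, an
    alarm sounds; if `brake_after` seconds pass with no acknowledgment,
    the penalty brake applies and the locomotive stops.
    """

    def __init__(self, warn_after=25.0, brake_after=35.0):
        self.warn_after = warn_after
        self.brake_after = brake_after
        self.last_ack = time.monotonic()

    def acknowledge(self):
        """Driver flipped the switch or pressed the button."""
        self.last_ack = time.monotonic()

    def state(self):
        """Return 'ok', 'alarm', or 'penalty_brake' based on idle time."""
        idle = time.monotonic() - self.last_ack
        if idle >= self.brake_after:
            return "penalty_brake"
        if idle >= self.warn_after:
            return "alarm"
        return "ok"
```

The important property isn’t the exact numbers; it’s that the system demands a conscious action on a fixed schedule, so the driver can never fully disengage.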
The locomotive, actually, is a good analogue for the specific sort of cognitive load imposed by driving with an assisted cruise control system. If you read my Train Simulator review, you have some idea what I mean. For the benefit of those who didn’t, let me sum up.
Driving a car manually is a task with relatively evenly distributed (and low) difficulty. It takes constant attention to keep from hitting something or driving off the road. It may take more attention at times, but there’s a certain minimum cognitive load below which you can no longer drive a car at all. Sure, it’s no helicopter, but you do have to be paying at least a little bit of attention. This is materially different from driving a train or a semi-automatic car.
Piloting those two forms of transit requires input so close to zero as to be indistinguishable from none at all. In both cases, the vehicle handles the moment-to-moment control required to keep itself from crashing into things[1]. The driver has no ongoing task to keep his mind focused. A quick sampling of Wikipedia articles on train crashes shows, broadly speaking, two sorts of accident which capture almost every incident: equipment failures causing derailment, and driver inattentiveness causing a train to run into another train[2]. In fact, the trend with trains is increasing computerization and automation, because—shocker—it turns out that humans are very bad at watching nothing happen with boring predictability for dozens or hundreds of hours, then leaping into action the moment something begins to go wrong. This article, by a self-proclaimed UI expert[3], goes into great detail on the problem, using Google’s experience with testing self-driving cars as an example. The train industry knows it’s a problem, too, hence the alerter system I mentioned earlier.
“Well then, you ought to love what Tesla is doing!” I hear you say. Don’t get me wrong, I think they’re making intriguing products[4], and the technology which goes into even the limited autopilot available to Tesla drivers is amazing stuff. That said, there’s a twofold problem.
First, no self-driving system—not even Google’s more advanced fleet of research vehicles—is perfect. Nor will they ever be. Computerizing a train is trivial in comparison. There’s very little control to be done, and even less at the train itself. (Mostly, it happens at the switching and signaling level, and nowadays that’s done from a centralized control room.) There are very few instances driving a train where you can see an obstacle soon enough to stop before hitting it, and fewer still where stopping to avoid the collision is worth it. Again, though, hitting a deer with a train is materially different from hitting a deer with a luxury sedan. More generally, there’s a lot more to hit with a car, a lot more of it is dangerous, and it’s a lot harder to tell into which category—dangerous or not—a given piece of stuff falls.
Second, there’s a problem with both driver alertness systems and marketing. To the first point, requiring that you have your hands on the wheel is not enough. There’s a reason a locomotive alerter system requires a conscious action every minute or so. Without that constant requirement for cognition, the system turns into another thing you just forget about. To the second, calling something which clearly does not drive the car automatically an ‘autopilot’ is the height of stupidity[5]. Which brings me to the second rant I mentioned at the start of the article.
You see, whenever anyone says, “Maybe Tesla shouldn’t call their assisted driving system Autopilot, because that means something which pilots automatically,” an enormous gaggle of geeks push their glasses up their noses and say, “Actually…”[6]
I’m going to stop you right there, strawman[7] in a Tesla polo. If your argument starts with “Actually” and hinges on quibbling over the definition of words, it’s a bad argument. Tesla Autopilot is not an autopilot. “What about airplane autopilots?” you may ask. “Those are pilot assistance devices. They don’t fly the airplane from start to finish.” Precisely. The pilot still has lots to do[8], even to the point of changing speeds and headings by hand at times. More to the point, it’s almost impossible to hit another plane with a plane unless you’re actively trying[9]. Not so with cars. Cars exist in an environment where the obstacles are thick and ever-present. A dozing pilot is usually a recipe for egg on his face and a stiff reprimand. A dozing driver is a recipe for someone dying.
I also sometimes hear Tesla fans (and owners) saying, in effect, “Just pay attention like I do.” The hubris there is incredible. No, you are not unlike the rest of the human race. You suffer the same attention deficit as everyone else when monitoring a process which mostly works but sometimes fails catastrophically. It is overwhelmingly more likely that you overestimate your own capability than that you’re some specially talented attention-payer.
To quote Lenin, “What is to be done?” Fortunately, we have seventy years of research on this sort of thing to dip into. If your system is going to require occasional human intervention by design, it has to require conscious action on the same time scale on which intervention will be required. Trains can get away with a button to push every minute because things happen so slowly. Planes have very little to hit and lots to do even when the plane is flying itself. Cars have neither luxury. To safely drive an Autopilot-equipped car, you have to be paying attention all the time. Therefore, you have to be doing something all the time.
I say that thing ought to be steering. I’m fine with adaptive speed, and I’m also fine with all kinds of driver aids. Lane-keeping assist? Shake the wheel and display a warning if I’m doing something wrong. Automatic emergency braking? By all means. These are things computers are good at and humans are not: recognizing a specific set of circumstances and reacting faster than any human could. Until the day when a car can drive me from my house to my office with no input from me—a day further away than most people think—the only safe way for me, or anyone, to drive is to be forced to pay attention.
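Automatic emergency braking is a good example of the “specific circumstances, faster reaction” case: at its core it’s just a time-to-collision check. Here’s a hypothetical sketch; the threshold values are illustrative, not any manufacturer’s real tuning:

```python
def time_to_collision(gap_m, own_speed_mps, lead_speed_mps):
    """Seconds until impact with the vehicle ahead, or None if not closing."""
    closing = own_speed_mps - lead_speed_mps
    if closing <= 0:
        return None  # pulling away or matching speed; no collision course
    return gap_m / closing

def aeb_decision(gap_m, own_speed_mps, lead_speed_mps,
                 warn_ttc=2.5, brake_ttc=1.2):
    """Return 'none', 'warn', or 'brake' based on time to collision."""
    ttc = time_to_collision(gap_m, own_speed_mps, lead_speed_mps)
    if ttc is None:
        return "none"
    if ttc <= brake_ttc:
        return "brake"
    if ttc <= warn_ttc:
        return "warn"
    return "none"
```

A computer evaluates this continuously, every few milliseconds, with no lapse in attention—exactly the job humans are worst at.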
I’m not usually one to revisit already-posted articles, but this is just too much. In this Ars Technica comment, a Tesla owner describes “multiple uncommanded braking events” since the last software update. In the very same post, he calls his Tesla “the best car I’ve ever owned”.
If you needed further proof of the Tesla fan’s mindset, there it is.
1. Whether by advanced computer systems and machine vision, or by the way flanged steel wheels on top of steel rails stay coupled in ordinary circumstances. ↩
2. Sometimes, driver inattentiveness causes derailments, too, as when a driver fails to slow to the appropriate speed for a certain stretch of track. ↩
3. I like his use of a topical top-level domain. We over here at .press salute you, sir! ↩
4. Electric cars weren’t cool five years ago. Now they’re kind of cool[10]. ↩
5. In a stroke of genius, Cadillac called a similar system ‘Super Cruise’. I’ll be frank with you: when a salesman is going down the list of options for your new Caddy, and he says, “Do you want to add Super Cruise?” your answer is definitely going to be, “Heck yes. What’s Super Cruise?” It just sounds that cool. Also, it has a better, though not quite ideal, solution to the driver attentiveness problem. There’s a little IR camera on the steering column which tracks your gaze and requires you to look at the road. ↩
6. Yes, I realize that also describes me and this article. I also just fixed my glasses. ↩
7. Never let it be said that our qualities do not include self-awareness and self-deprecation! ↩
8. The occasional embarrassed dozing-pilot story notwithstanding. ↩
9. That’s why it’s always news on the exceedingly rare occasions when it happens, and frequently news when it doesn’t, but merely almost happens. ↩
10. If poorly built, but Tesla say they’re working on that. ↩