Tesla Motors: Ignoring Facts of Human-Machine Interaction Since 2014

Okay, I’ve had about enough of Tesla’s zombie legion of brainwashed fans reflexively and ignorantly defending them on autopilot grounds, so it’s time for a good old-fashioned rant. I have two targets.

First: autopilot itself. Tesla’s autopilot is a nifty technological achievement. In its current state, though, it’s dangerous, and it disregards seventy years of research into how humans interact with machines. This book, on the specific topic of human reliability in transit systems, cites just over two hundred sources. In the world of trains, locomotive cabs usually feature a device called an alerter. If the driver doesn’t flip a switch or press a button every so often, the locomotive automatically stops.

The locomotive, actually, is a good analogue for the specific sort of cognitive load imposed by driving with an assisted cruise control system. If you read my Train Simulator review, you have some idea what I mean. For the benefit of those who didn’t read it, let me sum up.

Driving a car manually is a task with relatively evenly-distributed (low) difficulty. It takes constant attention to keep from hitting something or driving off the road. It may take more attention at times, but there’s a certain minimum cognitive load below which you can no longer drive a car. Sure, it’s no helicopter, but you do have to be paying at least a little bit of attention. This is materially different from driving a train or a semi-automatic car.

Piloting those two forms of transit requires so nearly zero input from the driver as to be indistinguishable therefrom. In both cases, the vehicle handles the moment-to-moment input required to keep itself from crashing into things[1]. The driver has no ongoing task to keep his mind focused. A quick sampling of Wikipedia articles on train crashes shows, broadly speaking, two sorts of accident which capture almost every incident: equipment failures causing derailment, and driver inattentiveness causing a train to run into another train[2]. In fact, the trend with trains is increasing computerization and automation, because—shocker—it turns out that humans are very bad at watching nothing happen with boring predictability for dozens or hundreds of hours, then leaping into action the moment something begins to go wrong. This article, by a self-proclaimed UI expert[3], goes into great detail on the problem, using Google’s experience with testing self-driving cars as an example. The train industry knows it’s a problem, too, hence the use of the alerter system I mentioned earlier.

“Well then, you ought to love what Tesla is doing!” I hear you say. Don’t get me wrong, I think they’re making intriguing products[4], and the technology which goes into even the limited autopilot available to Tesla drivers is amazing stuff. That said, there’s a twofold problem.

First, no self-driving system—not even Google’s more advanced fleet of research vehicles—is perfect. Nor will they ever be. Computerizing a train is trivial in comparison. There’s very little control to be done, and even less at the train itself. (Mostly, it happens at the switching and signaling level, and nowadays that’s done from a centralized control room.) There are very few instances driving a train where you can see an obstacle soon enough to stop before hitting it, and very few instances where it’s worth stopping to avoid hitting the thing you might hit. Again, though, hitting a deer with a train is materially different from hitting a deer with a luxury sedan. More generally, there’s a lot more to hit with a car, a lot more of it is dangerous, and it’s a lot more difficult to tell into which category—dangerous or no—a certain piece of stuff falls.

Second, there’s a problem with both driver alertness systems and marketing. To the first point, requiring that you have your hands on the wheel is not enough. There’s a reason a locomotive alerter system requires a conscious action every minute or so. Without that constant requirement for cognition, the system turns into another thing you just forget about. To the second, calling something which clearly does not drive the car automatically an ‘autopilot’ is the height of stupidity[5]. Which brings me to the second rant I mentioned at the start of the article.

Tesla fans.

You see, whenever anyone says, “Maybe Tesla shouldn’t call their assisted driving system Autopilot, because that means something which pilots automatically,” an enormous gaggle of geeks push their glasses up their noses and say, “Actually…”[6]

I’m going to stop you right there, strawman[7] in a Tesla polo. If your argument starts with “Actually” and hinges on quibbling over the definition of words, it’s a bad argument. Tesla Autopilot is not an autopilot. “What about airplane autopilots?” you may ask. “Those are pilot assistance devices. They don’t fly the airplane from start to finish.” Precisely. The pilot still has lots to do[8], even to the point of changing speeds and headings by hand at times. More to the point, it’s almost impossible to hit another plane with a plane unless you’re actively trying[9]. Not so with cars. Cars exist in an environment where the obstacles are thick and ever-present. A dozing pilot is usually a recipe for egg on his face and a stiff reprimand. A dozing driver is a recipe for someone dying.

I also sometimes hear Tesla fans (and owners) saying, in effect, “Just pay attention like I do.” The hubris there is incredible. No, you are not unlike the rest of the human race. When monitoring a process which mostly works but sometimes fails catastrophically, you suffer from the same attention deficit as everyone else. It is overwhelmingly more likely that you overestimate your own capability than that you’re some specially talented attention-payer.

To quote Lenin, “What is to be done?” Fortunately, we have seventy years of research on this sort of thing to dip into. If your system is going to require occasional human intervention by design, it has to require conscious action on the same time scale on which intervention will be required. Trains can get away with a button to push every minute because things happen so slowly. Planes have very little to hit and lots to do even when the plane is flying itself. Cars have neither luxury. To safely drive an Autopilot-equipped car, you have to be paying attention all the time. Therefore, you have to be doing something all the time.

I say that thing ought to be steering. I’m fine with adaptive speed, and I’m also fine with all kinds of driver aids. Lane-keeping assist? Shake the wheel and display a warning if I’m doing something wrong. Automatic emergency braking? By all means. These are things computers are good at and humans are not: recognizing a specific set of circumstances and reacting faster than a person ever could. Until the day when a car can drive me from my house to my office with no input from me—a day further away than most people think—the only safe way for me, or anyone, to drive is to be forced to pay attention.

Update 04/21/17
I’m not usually one to revisit already-posted articles, but this is just too much. In this Ars Technica comment, a Tesla owner describes “multiple uncommanded braking events” since the last software update. In the very same post, he calls his Tesla “the best car I’ve ever owned”.

If you needed further proof of the Tesla fan’s mindset, there it is.

  1. Whether by advanced computer systems and machine vision, or by the way flanged steel wheels on top of steel rails stay coupled in ordinary circumstances. 
  2. Sometimes, driver inattentiveness causes derailments, too, as when a driver fails to slow to the appropriate speed for a certain stretch of track. 
  3. I like his use of a topical top-level domain. We over here at .press salute you, sir! 
  4. Electric cars weren’t cool five years ago. Now they’re kind of cool.[10]
  5. In a stroke of genius, Cadillac called a similar system ‘Super Cruise’. I’ll be frank with you: when a salesman is going down the list of options for your new Caddy, and he says, “Do you want to add Super Cruise?” your answer is definitely going to be, “Heck yes. What’s Super Cruise?” It just sounds that cool. Also, it has a better, though not quite ideal, solution to the driver attentiveness problem. There’s a little IR camera on the steering column which tracks your gaze and requires you to look at the road. 
  6. Yes, I realize that also describes me and this article. I also just fixed my glasses. 
  7. Never let it be said that our qualities do not include self-awareness and self-deprecation! 
  8. The occasional embarrassed dozing pilot story notwithstanding. 
  9. That’s why it’s always news on the exceedingly rare occasions when it happens, and frequently news when it doesn’t, but merely almost happens. 
  10. If poorly built, but Tesla say they’re working on that. 

How-To: Two USB Mics, One Computer, JACK, and Audacity

The Crossbox Podcast is going upmarket: I now have two USB microphones, and for the March episode, parvusimperator and I will each have one directly in front of us. This is a wonderful advance for audio quality, but it does pose some problems:

  1. Audacity, our usual recording tool of choice (and probably yours, if you ended up here), only supports recording from one source at a time.
  2. Though other tools support recording from multiple sources, the minor variations in internal clocks between two USB microphones mean that each microphone has a sample rate which varies in a slightly different fashion, and that longer recordings will therefore be out of sync.

Modern Linux, fortunately, can help us out here. We have need of several components. First, obviously, we need two microphones. I have a Blue Snowball and a CAD Audio U37, with which I’ve tested this procedure[1]. Second, we need a computer with at least two USB ports. Third, we need the snd-aloop kernel module. (If your Linux has ALSA, you probably already have this.) Fourth, we need JACK, the Linux low-latency audio server. Fifth, we need the QJackCtl program.

Before I describe what we’re going to do, I ought to provide a quick refresher in Linux audio infrastructure. If you use Ubuntu or Mint, or most other common distributions, there are two layers to your system’s audio. Closest to the hardware is ALSA, the kernel-level Advanced Linux Sound Architecture. It handles interacting with your sound card, and provides an API to user-level applications. The most common user-level application is the PulseAudio server, which provides many of the capabilities you think of as part of your sound system, such as volume per application and the ‘sound’ control panel in your Linux flavor of choice. (Unless you don’t use Pulse.)

JACK is a low-latency audio server; that is, a user-level application in the same vein as Pulse. It has fewer easily accessible features, but allows us to do some fancy footwork in how we connect inputs to outputs.

Now that you have the background, here’s what we’re going to do to connect two mono USB microphones to one computer, then send them to one two-channel ALSA device, then record in Audacity. These instructions should work for any modern Linux flavor. Depending on the particulars of your system, you may even be able to set up real-time monitoring.

  1. Create an ALSA loopback device using the snd-aloop kernel module.
  2. Install JACK.
  3. Build QJackCtl, a little application used to control JACK. (This step is optional, but makes things much easier; I won’t be providing the how-to for using the command line.)
  4. Use JACK’s alsa_in and alsa_out clients to give JACK access to the microphones and the loopback device.
  5. Use QJackCtl to connect the devices so that we can record both microphones at once.

We’ll also look at some extended and improved uses, including some potential fixes for real-time monitoring.

Create an ALSA loopback device
The ALSA loopback device is a feature of the kernel module snd-aloop. All you need to do is # modprobe snd-aloop and you’re good to go. Verify that the loopback device is present by checking for it in the output of aplay -l.
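Note that modprobe only loads the module until the next reboot. If you want the loopback card to come back on its own, most systemd-based distributions will read a one-line file like this at boot (the path follows the systemd modules-load.d convention; adjust for your distribution):

```shell
# /etc/modules-load.d/snd-aloop.conf
# Load the ALSA loopback card at every boot.
snd-aloop
```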

The loopback device is very straightforward: any input to a certain loopback device will be available as output on a different loopback device. ALSA devices are named by a type string (such as ‘hw’), followed by a colon, then a name or number identifying the audio card, a comma, and the device number inside the card. Optionally, there may be another comma and a subdevice number. Let’s take a look at some examples.

  • hw:1,0: a hardware device, card ID 1, device ID 0.
  • hw:Loopback,1,3: a hardware device, card name Loopback, device ID 1, sub-device ID 3.

For the loopback device, anything input to device ID 1 and a given sub-device ID n (that is, hw:Loopback,1,n) will be available as output on hw:Loopback,0,n, and vice versa. This will be important later.

Install JACK
You should be able to find JACK in your package manager[2], along with Jack Rack. In Ubuntu and derivatives, the package names are ‘jackd’ and ‘jack-rack’.

Build QJackCtl
QJackCtl is a Qt5 application. To build it, you’ll need qt5 and some assorted libraries and header packages. I run Linux Mint; this is the set I had to install.

  • qt5-qmake
  • qt5-default
  • qtbase5-dev
  • libjack-jack2-dev
  • libqt5x11extras5-dev
  • qttools5-dev-tools

Once you’ve installed those, unpack the QJackCtl archive in its own directory, and run ./configure and make in that directory. The output to configure will tell you if you can’t continue, and should offer some guidance on what you’re missing. Once you’ve successfully built the application, run make install as root.
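Condensed, the build sequence looks like this. It’s wrapped in a function purely for readability, and the archive name is a placeholder for whatever version you downloaded:

```shell
# Sketch of the QJackCtl source build; the archive name varies by version.
build_qjackctl() {
    tar xf qjackctl-*.tar.gz   # unpack into its own directory
    cd qjackctl-*/
    ./configure                # stops here, with guidance, if a dev package is missing
    make                       # build the application
    sudo make install          # install system-wide, as root
}
```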

Run QJackCtl
Run qjackctl from a terminal. We should take note of one feature in particular in the status window. With JACK stopped, you’ll notice a green zero, followed by another zero in parentheses, beneath the ‘Stopped’ label. This is the XRUN counter, which counts up whenever JACK doesn’t have time to finish a task inside its latency settings.

Speaking of, open the settings window. Front and center, you’ll see three settings: sample rate, frames per period, and periods per buffer. Taken together, these settings control latency. You’ll probably want to set the sample rate to 48000, 48 kHz; that’s the standard for USB microphones, and saves some CPU time. For the moment, set frames per period to 4096 and periods per buffer to 2. These are safe settings, in my experience. We’ll start there and (maybe) reduce latency later.
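Although QJackCtl displays it for you, it’s worth knowing where the latency number comes from: frames per period, times periods per buffer, divided by the sample rate. A quick back-of-the-envelope check, using the safe settings above and a much lower pair for comparison:

```shell
# Buffer latency (ms) = frames_per_period * periods_per_buffer / sample_rate * 1000
awk 'BEGIN { printf "%.1f ms\n", 4096 * 2 / 48000 * 1000 }'   # safe 4096/2 settings
awk 'BEGIN { printf "%.1f ms\n",  512 * 2 / 48000 * 1000 }'   # a much snappier 512/2
```

The first works out to about 170 ms, the second to about 21 ms, which is why we’ll want to reduce these numbers later if monitoring matters to you.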

Close the settings window and press the ‘Start’ button in QJackCtl. After a moment or two, JACK will start. Verify that it’s running without generating any XRUN notifications. If it is generating XRUNs, skip down to here and try some of the latency-reduction tips, then come back when you’re done.

Use JACK’s alsa_in and alsa_out clients to let JACK access devices
Now we begin to put everything together. As you’ll recall, our goal is to take our two (mono) microphones and link them together into one ALSA device. We’ll first use the alsa_in client to create JACK devices for our two microphones. The alsa_in client solves problem #2 for us: its whole raison d’être is to allow us to use several ALSA devices at once which may differ in sample rate or clock drift.

Now, it’s time to plug in your microphones. Do so, and run arecord -l. You’ll see output something like this.

$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC295 Analog [ALC295 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Audio [CAD Audio], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 2: Snowball [Blue Snowball], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

This lists all the currently available capture hardware devices plugged into your system. Besides the first entry, the integrated microphone on my laptop, I have hw:1 or hw:Audio, the CAD Audio U37, and hw:2 or hw:Snowball, the Blue Snowball.

Next, set up alsa_in clients so JACK can access the microphones.

$ alsa_in -j snow -d hw:Snowball -c 1 -p 4096 -n 2 &
$ alsa_in -j cad -d hw:Audio -c 1 -p 4096 -n 2 &

Let’s go through the options. -j defines the label JACK will use for the microphone; make it something descriptive. -d declares which ALSA device JACK will open. -c declares the number of channels JACK will attempt to open.

On to the last two options: like the JACK settings above, -p defines the number of frames per period, and -n defines the number of periods per buffer. The documentation for alsa_in suggests that the total frames per buffer (frames per period multiplied by period) should be greater than or equal to JACK’s total frames per buffer.

Next, set up an alsa_out client for the ALSA loopback device.

$ alsa_out -j loop -d hw:Loopback,1,0 -p 4096 -n 2 &

The arguments here are the same as the arguments above.
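If you do this often, the module load and the three clients above can be collected into a sketch of a startup function; the device names and -j labels here are from my setup, so substitute the names arecord -l reports on yours:

```shell
# Sketch: bring up the loopback card and all three JACK clients in one go.
# Assumes JACK itself is already running (e.g. via QJackCtl).
# hw:Snowball and hw:Audio are my microphones; yours will differ.
start_mics() {
    sudo modprobe snd-aloop                                  # make sure the loopback card exists
    alsa_in  -j snow -d hw:Snowball     -c 1 -p 4096 -n 2 &  # mic 1 into JACK
    alsa_in  -j cad  -d hw:Audio        -c 1 -p 4096 -n 2 &  # mic 2 into JACK
    alsa_out -j loop -d hw:Loopback,1,0 -p 4096 -n 2 &       # JACK out to the loopback device
}
```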

Use QJackCtl to hook everything up
Now, we’re almost done. Go back to QJackCtl and open the Connect window. You should see a list of inputs on the left and a list of outputs on the right. Your inputs should include your two microphones, with the names you provided in your -j arguments. Your outputs should include system, which is linked to your system’s audio output, and an output named ‘loop’, the ALSA loopback device.

Assuming you have mono microphones, what you want to do is this: expand a microphone and highlight its input channel. Then, highlight the system output and hit ‘connect’ at the bottom of the window. This will connect the input channel to the left and right channels of your system audio output. At this point, you should be able to hear the microphone input through your system audio output. (I would recommend headphones.) The latency will be very high, but we’ll see about correcting that later.

If the audio output contains unexpected buzzing or clicking, your computer can’t keep up with the latency settings you have selected[3]. Skip ahead to the latency reduction settings. That said, your system should be able to keep up with the 4096/2 settings; they’re something of a worst-case scenario.

If the audio output is good, disconnect the microphones from the system output. Then, connect one microphone’s input to loop’s left channel, and one microphone input to loop’s right channel. Open Audacity, set the recording input to Loopback,0[4], and start recording. You should see audio from your microphones coming in on the left and right channel. Once you’re finished recording, you can split the stereo track into two mono tracks for individual editing, and there you have it: two USB microphones plugged directly into your computer, recording as one.

Recording more than two channels
Using Jack Rack, you can record as many channels as your hardware allows. Open Jack Rack, and using the ‘Channels’ menu item under the ‘Rack’ menu, set the number of channels you would like to record. In QJackCtl’s connections window, there will be a jackrack device with the appropriate number of I/O channels.

In Audacity, you can change the recording mode from ALSA to JACK, then select the jackrack device, setting the channel count to the correct number. When you record, you will record that many channels.

Jack Rack is, as the name suggests, an effects rack. You can download LADSPA plugins to apply various effects to your inputs and outputs. An amplifier, for instance, would give you volume control per input, which is useful in multi-microphone situations.

Reduce frames per period
If you’re satisfied with recording-only, or if you have some other means of monitoring, you can stop reading here. If, like me, you want to monitor through your new Linux digital audio workstation, read on.

The first step is to start reducing the frames per period setting in JACK, and correspondingly in the alsa_in and alsa_out devices. If you can get down to 512 frames/2 periods without JACK xruns, you can probably call it a day. Note that Linux is a little casual with IRQ assignments and other latency-impacting decisions; what works one day may not work the next.

You can also try using lower frames per period settings, and higher periods per buffer settings, like 256/3 or 512/3. This may work for you, but didn’t work for me.

If you come to an acceptable monitoring latency, congratulations! You’re finished. If not, read on.

Fixing latency problems
Below, I provide three potential latency-reducing tactics, in increasing order of difficulty. At the bottom of the article, just above the footnotes, is an all-in-one solution which sacrifices a bit of convenience for a great deal of ease of use. My recommendation, if you’ve made it this far, is that you skip the three potential tactics and go to the one which definitely will work.

Further latency reduction: run JACK in realtime mode
If JACK is installed, run sudo dpkg-reconfigure -p high jackd (or dpkg-reconfigure as root).

Verify that this created or updated the /etc/security/limits.d/audio.conf file. It should have lines granting the audio group (@audio) permission to run programs at real-time priorities up to 95, and lock an unlimited amount of memory. Reboot, set JACK to use realtime mode in QJackCtl’s setup panel, and start JACK. Try reducing your latency settings again, and see what happens.
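For reference, the lines dpkg-reconfigure writes to that file look something like this on Debian-family systems (exact values may differ slightly on yours):

```shell
# /etc/security/limits.d/audio.conf
# Members of the audio group may run at realtime priorities up to 95
# and lock an unlimited amount of memory.
@audio   -   rtprio    95
@audio   -   memlock   unlimited
```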

Further latency reduction: enable threaded IRQs
Threaded IRQs are a Linux kernel feature which help deliver interrupt requests[5] more quickly. This may help reduce latency. Open /etc/default/grub. Inside the quotation marks at the end of the line which starts with GRUB_CMDLINE_LINUX_DEFAULT, add threadirqs, run sudo update-grub to regenerate the boot configuration, and reboot.
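The edited line ends up looking something like this (quiet splash is a typical Ubuntu default; your existing flags may differ):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs"

# then regenerate the boot configuration before rebooting:
#   sudo update-grub
```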

Further latency reduction: run a low-latency or real-time kernel
If none of these help, you might try running a low-latency or real-time kernel; the Ubuntu Studio project provides them, and there are packages available for Debian. If you’ve come this far, though, I’d recommend the…

All-of-the-above solution: run an audio-focused Linux distribution
AV Linux is a Linux distribution focused on audio and video production. As such, it already employs the three tactics given above. It also includes a large amount of preinstalled, free, open-source AV software. It isn’t a daily driver distribution; rather, its foremost purpose is to be an audio or video workstation. It worked perfectly for me out of the box, and met my real-time monitoring and audio playback requirements for The Crossbox Podcast[6]. I recommend it wholeheartedly.

Given that my laptop is not primarily a podcast production device, I decided to carve a little 32 GB partition out of the space at the end of my Windows partition, and installed AV Linux there. It records to my main Linux data partition instead of to the partition to which it is installed, and seems perfectly happy with this arrangement.

So am I. Anyway, thanks for reading. I hope it helped!

  1. Two identical microphones actually makes it slightly (though not insurmountably) harder, since they end up with the same ALSA name. 
  2. If you don’t have a package manager, you ought to be smart enough to know where to look. 
  3. This is most likely not because of your CPU, but rather because your Linux kernel does not have sufficient low-latency features to manage moving audio at the speeds we need it to move. 
  4. Remember, since we’re outputting to Loopback,1, that audio will be available for recording on Loopback,0. 
  5. Interrupt requests, or IRQs, are mechanisms by which hardware can interrupt a running program to run a special program known as an interrupt handler. Hardware sends interrupt requests to indicate that something has happened. Running them on independent threads improves the throughput, since more than one can happen at once, and, since they can be run on CPU cores not currently occupied, they interrupt other programs (like JACK) less frequently. 
  6. Expect us to hit our news countdown time cues a little more exactly, going forward. 

Rampant speculation: why did the Falcon 9 blow up?

I am not a rocket scientist, but I do like to think about engineering problems.

Here are the facts as we know them:

  • A Falcon 9 rocket blew up on the pad on September 1, 2016.
  • The rocket was undergoing a pre-launch static test, when it exploded.
  • According to SpaceX, the explosion originated in the second-stage liquid oxygen tank.
  • SpaceX uses a fancy super-cooled LOX mix, which allows more fuel in a given tank volume, which allows better performance.
  • Last summer, SpaceX had another rocket fail. The CRS-7 mission disintegrated in flight after the upper stage LOX tank burst. The internal helium tank (to maintain tank pressure) failed because of a faulty strut.

Now, for a rocket to fail during fueling, before engine firing—as the most recent Falcon failed—is very unusual. To my engineer’s mind, it suggests a materials problem in the LOX or liquid helium tanks, something failing in an unexpected way when deep-chilled. Crucially, the Falcon 9’s LOX tank experiences the coldest temperature (for a LOX tank) in the history of rocketry. Take that in combination with the failure on the CRS-7 mission: after their investigation, SpaceX switched to a new strut, which means new failure modes.

Mark my words (and feed them to me along with a heaping helping of crow, when I turn out to be wrong): this is another strut issue, be it faulty or just unsuited for the deep-cryo fuel in some other way.