Author Archives: Fishbreath

Fishbreath Shoots: Beretta M9 Review

Is it necessary to review the Beretta M9 nowadays? Given the 92-pattern’s long history of military and police issue, I don’t doubt it’s near the top of the list as far as ‘how many people have shot this?’ goes. Am I going to review it anyway? Yes, yes I am. Will it be a traditional review, where I tell you things you already know? No, it won’t.

The gun in question is a bone-stock, commercial production Beretta M9. It isn’t a 92, so it has the flat backstrap and straight dust cover. It’s neither an M9A1 nor an M9A3, so it doesn’t have a rail, retains the original snowman-style sights, and isn’t finished in Modern Operator Tacticool Desert Tan1. It’s an M9, no more, no less, recently produced but in the very same pattern as the M9 as accepted into US military service in 1985.

Like I said, it’s one of the most widely-shot firearms in history. Like many popular things, the stereotypical view of it is wrong.

If you hang around firearms forums, or if you were recently deployed to somewhere sandy and far away, you may be down on the M9. “It’s unreliable! The fiddly bits inside get clogged with sand. Mine is inaccurate. It rattled when I walked. The slide came off of a buddy’s gun and hit him in the nose.”

None of these complaints are strictly inaccurate, but they don’t capture the full picture. Let’s get nerdy and talk about firearms actions for a bit. The 92-pattern pistol, unlike most modern handguns, does not use a Browning-inspired tilting-barrel design2. Rather, it uses a locking block, which locks the barrel to the slide by a pair of lugs until an internal plunger, running up against the frame, cams the locking block down to release the barrel.

Though it’s less common nowadays, the locking block action has some advantages. For one, given the traditional open slide, ejection on the M9 is absurdly reliable: if the case comes out of the barrel at all, it’s leaving the gun. For another, the skeleton slide makes for a much smaller recoiling mass. The main wear item—the locking block—is easier to fix than worn-out locking cuts inside a slide; you replace the block, not the slide. Because the barrel never tilts, the round at the top of the magazine feeds into the chamber at a constant angle. This makes for simpler magazines and requires no faffing about with feed ramps; as a side effect, it means the M9 will happily feed any ammunition which is not overlong, and that feeding failures3 rarely happen outside of torture tests.

Now, the 92-pattern pistol does have some failings as an issue weapon, which we’ve touched on in previous articles. Mainly, a double-action pistol is not especially well-suited to the role the M9 found itself in, that of a soldier’s sidearm. Shooting a double-action pistol well requires mastering both the double-action trigger pull and the single-action trigger pull. It’s a heavy pistol, only incrementally lighter than the 1911 it replaced, and it’s also quite large. I have average-sized hands, and I suspect my thumbs are somewhat shorter than the norm; the magazine release is out of my reach.

I’m purposefully not mentioning the trigger reach or the safety. The former is obviously a problem, and the latter is obviously not. Do you subscribe to the school of thought which claims the 92’s safety is easy to engage by accident while working the slide? If so, next question: have you ever actually done it? Parvusimperator and I once spent a good ten minutes trying to engage the safety by accident, and wouldn’t you believe it, neither of us managed to. This particular complaint is overblown.

I did say that the 92 has some failings as an issue weapon, and if pressed, I might even admit that a striker-fired gun is probably a better choice (for a secondary weapon, because of easier training). Obviously, it isn’t a great choice if your aim is concealment. That leaves two categories: the duty gun and the competition gun. I think it stacks up well in both of those. Let’s look at why.

Weight
For a competition gun in particular, weight is not a bad thing. Even a heavy gun can be comfortably carried in a good holster, and if concealment isn’t a requirement, it’s not terribly hard to make a good holster. Weight means less felt recoil. Less felt recoil means faster, more accurate follow-up shots.

Trigger
It’s a double-action/single-action trigger, which means that after the first shot, you get a single-action pull better than almost every striker-fired trigger in existence. The first, double-action pull is rough, but how often do you have to make it? Once4.

Dependability
Much like parvusimperator’s favored Glocks, the M9 is, on the whole, a legendarily reliable piece of equipment. Unlike said Glocks, the M9 requires some attention to hold up its end of the bargain. Any firearm with steel riding on aluminum, like the 92-pattern guns, requires lubrication. It also requires occasional replacement—the design life of an M9 is about 40,000 rounds, and combat conditions undoubtedly shrink that figure. Many of the M9’s alleged flaws can be chalked up to pistols nearing their end of life, and to bad maintenance habits5. Carrying an M9 by itself, or shooting it in competition, leaves a gun owner room to solve both problems.

It’s no secret I like Beretta’s products. My carry gun, after all, is a Px4 Compact, and I’ve taken to shooting the M9 as my standard competition pistol. It’s plenty competitive, both in 3-Gun Nation Practical division and in USPSA Production division. I have some plans, even, to build a Beretta 96 for Limited competition next season. Watch this space.


  1. A color I actually really like. 
  2. For patent reasons, maybe? Ask parvusimperator. 
  3. Why yes, I did choose a picture of a Glock to illustrate this point. No, no particular reason why. cough 
  4. The match-running mavens at Performance Shooting Sports in Ohio are fond of ‘pistol loaded, chamber empty’ start conditions. Those are my favorite. No double-action pull at all! 
  5. I’m not judging, mind. A soldier overseas has many, many tasks which come ahead of ‘pistol maintenance’, not least of which is ‘rifle maintenance’. 

Fishbreath Plays: Starsector 0.8 Kind-Of-Review

I’m a scavenger and salvager by trade. My fleet is half reclaimed ships: free to obtain, cheap to run, and maybe a little less capable, but I have a solid set of four frigates and a combat shuttle. One of those frigates is an armed merchantman. Not only does it have some teeth, it’s hard to kill, and it has a big fuel tank. Refueling the other ships, it can get the fleet out of the core worlds and into the outer sector. I’ve been taking a few scouting and exploration jobs on the side to pay the bills.

The one that landed on my desk last night was a run to Gindraum, a blue supergiant ten light-years from the nearest inhabited world, to scan a derelict ship in the system’s outer reaches. That’s on the edge of my fleet’s range. That shuttle in particular takes just as much fuel as a frigate, but carries less than a third as much as my armed freighter. I left it at Jangala, the world I started out at, along with my shortest-legged frigate.

We set out, stopping briefly to beat up on a pirate fleet which decided to run us down. A week and a half later, detouring around some tremendous hyperspace storms, we came up on Gindraum. By the inner system jump point was a flashing beacon. I brought us closer. It was a message from the Hegemony Navy:

“Warning. This system is known to contain automated weapons systems. Extreme caution is advised.”

Well, that’s a little worrying. I turned away from the inner system jump point and headed for the fringe jump point. Hopefully that would drop me a little further from any potential nasties.

We came out of hyperspace in between a ring system and an asteroid belt at roughly a Saturn orbit distance. Turns out that the fringe of this system is really big. I went counterclockwise, finding an awful lot of nothing much for a month and a half. Around a gas giant inside the rings, our sensors found an old frigate carrying a single sleeper pod. Inside was a very confused lieutenant who agreed to join up with me and warned me about some dormant combat drones around the backside of a nearby moon. I kept my distance.

Finally, we found the derelict we were after, in a nebula out beyond the rings by the old hyperspace communication relay. An active drone flew a patrol pattern around it. I’m not averse to a scrap (or the scrap following a scrap), but nobody likes fighting combat AI. At my order, the fleet went dark, creeping through the nebula at low signature while the drone was on the far side of its orbit. We came up on the derelict, ran the scanner package the client gave us, and turned to run, just as the drone came back around. Between the nebula and the silent running, I made my escape. Fuel and supplies running low and money in hand, I set a course for the jump point. Time to head home.

I’ve written about Starsector before. Two and a half years ago, it turns out. That, in part, is the story of Starsector: a game which has been available in pre-release form for at least six years. I first played it in college, in fact; one day, I put it up on a projector and hooked it into the suite’s sound system, and sold several copies on the spot.

Since then, it’s seen small updates: ship fitting, an economy here, a random battle generator there. Yesterday, though, the developer released version 0.8, and finally, we have a closer approximation of what we can expect on that glorious day when release arrives.

It’s spectacular. I’m not given to hyperbole, or indeed to statements without hedging of some kind or another, so don’t miss the import of this statement: Starsector is going to go down in history as a timeless classic. I fully expect it will end up being the best space sandbox game of the decade. It’s that good.

The story above gives you some idea of what’s been added. Surveying and salvage flesh out the list of things to do, and a procedurally-generated outer sector adds an enormous playground in which to do it. (To say nothing of the massively expanded sector core—there’s plenty to do before you start to dip your toes in the outer sector.)

Honestly, I don’t want to say too much, because I don’t want to rob you of the sense of wonder you’ll feel when you play through it yourself. Rest assured, though, Starsector was good two and a half years ago, but it’s an incredible experience now, and it isn’t even done. You owe it to yourself to pick it up. You can do so here.

Tesla Motors: Ignoring Facts of Human-Machine Interaction Since 2014

Okay, I’ve had about enough of Tesla’s zombie legion of brainwashed fans reflexively and ignorantly defending them on autopilot grounds, so it’s time for a good old-fashioned rant. I have two targets.

First: autopilot itself. Tesla’s autopilot is a nifty technological achievement. In its current state, though, it’s dangerous, and it disregards seventy years of research into how humans interact with machines. This book, on the specific topic of human reliability in transit systems, cites just over two hundred sources. In the world of trains, locomotive cabs usually feature a device called an alerter. If the driver doesn’t flip a switch or press a button every so often, the locomotive automatically stops.

The locomotive, actually, is a good analogue for the specific sort of cognitive load imposed by driving with an assisted cruise control system. If you read my Train Simulator review, you have some idea of what I mean. For the benefit of those who did not read it, let me sum up.

Driving a car manually is a task with relatively evenly-distributed (low) difficulty. It takes constant attention to keep from hitting something or driving off the road. It may take more attention at times, but there’s a certain minimum cognitive load below which you can no longer drive a car. Sure, it’s no helicopter, but you do have to be paying at least a little bit of attention. This is materially different from driving a train or a semi-automatic car.

Piloting those two forms of transit requires input from the driver so near to zero as to be indistinguishable from it. In both cases, the vehicle handles the moment-to-moment input required to keep itself from crashing into things1. The driver has no ongoing task to keep his mind focused. A quick sampling of Wikipedia articles on train crashes shows, broadly speaking, two sorts of accident which capture almost every incident: equipment failures causing derailment, and driver inattentiveness causing a train to run into another train2. In fact, the trend with trains is increasing computerization and automation, because—shocker—it turns out that humans are very bad at watching nothing happen with boring predictability for dozens or hundreds of hours, then leaping into action the moment something begins to go wrong. This article, by a self-proclaimed UI expert3, goes into great detail on the problem, using Google’s experience with testing self-driving cars as an example. The train industry knows it’s a problem, too, hence the use of the alerter system I mentioned earlier.

“Well then, you ought to love what Tesla is doing!” I hear you say. Don’t get me wrong, I think they’re making intriguing products4, and the technology which goes into even the limited autopilot available to Tesla drivers is amazing stuff. That said, there’s a twofold problem.

First, no self-driving system—not even Google’s more advanced fleet of research vehicles—is perfect. Nor will any of them ever be. Computerizing a train is trivial in comparison. There’s very little control to be done, and even less at the train itself. (Mostly, it happens at the switching and signaling level, and nowadays that’s done from a centralized control room.) There are very few instances driving a train where you can see an obstacle soon enough to stop before hitting it, and very few instances where it’s worth stopping to avoid hitting the thing you might hit. Again, though, hitting a deer with a train is materially different from hitting a deer with a luxury sedan. More generally, there’s a lot more to hit with a car, a lot more of it is dangerous, and it’s a lot more difficult to tell into which category—dangerous or not—a certain piece of stuff falls.

Second, there’s a problem with both driver alertness systems and marketing. To the first point, requiring that you have your hands on the wheel is not enough. There’s a reason a locomotive alerter system requires a conscious action every minute or so. Without that constant requirement for cognition, the system turns into another thing you just forget about. To the second, calling something which clearly does not drive the car automatically an ‘autopilot’ is the height of stupidity5. Which brings me to the second rant I mentioned at the start of the article.

Tesla fans.

You see, whenever anyone says, “Maybe Tesla shouldn’t call their assisted driving system Autopilot, because that means something which pilots automatically,” an enormous gaggle of geeks push their glasses up their noses and say, “Actually…”6

I’m going to stop you right there, strawman7 in a Tesla polo. If your argument starts with “Actually” and hinges on quibbling over the definition of words, it’s a bad argument. Tesla Autopilot is not an autopilot. “What about airplane autopilots?” you may ask. “Those are pilot assistance devices. They don’t fly the airplane from start to finish.” Precisely. The pilot still has lots to do8, even to the point of changing speeds and headings by hand at times. More to the point, it’s almost impossible to hit another plane with a plane unless you’re actively trying9. Not so with cars. Cars exist in an environment where the obstacles are thick and ever-present. A dozing pilot is usually a recipe for egg on his face and a stiff reprimand. A dozing driver is a recipe for someone dying.

I also sometimes hear Tesla fans (and owners) saying, in effect, “Just pay attention like I do.” The hubris there is incredible. No, you are not unlike the rest of the human race. You suffer from the same attention deficit as the rest of humanity when monitoring a process which mostly works but sometimes fails catastrophically. It is overwhelmingly more likely that you overestimate your own capability than that you’re some specially talented attention-payer.

To quote Lenin, “What is to be done?” Fortunately, we have seventy years of research on this sort of thing to dip into. If your system is going to require occasional human intervention by design, it has to require conscious action on the same time scale on which intervention will be required. Trains can get away with a button to push every minute because things happen so slowly. Planes have very little to hit and lots to do even when the plane is flying itself. Cars have neither luxury. To safely drive an Autopilot-equipped car, you have to be paying attention all the time. Therefore, you have to be doing something all the time.

I say that thing ought to be steering. I’m fine with adaptive speed, and I’m also fine with all kinds of driver aids. Lane-keeping assist? Shake the wheel and display a warning if I’m doing something wrong. Automatic emergency braking? By all means. These are things computers are good at and humans aren’t: recognizing a specific set of circumstances and reacting faster than any human could. Until the day when a car can drive me from my house to my office with no input from me—a day further away than most people think—the only safe way for me, or anyone, to drive is to be forced to pay attention.

Update 04/21/17
I’m not usually one to revisit already-posted articles, but this is just too much. In this Ars Technica comment, a Tesla owner describes “multiple uncommanded braking events” since the last software update. In the very same post, he calls his Tesla “the best car I’ve ever owned”.

If you needed further proof of the Tesla fan’s mindset, there it is.


  1. Whether by advanced computer systems and machine vision, or by the way flanged steel wheels on top of steel rails stay coupled in ordinary circumstances. 
  2. Sometimes, driver inattentiveness causes derailments, too, as when a driver fails to slow to the appropriate speed for a certain stretch of track. 
  3. I like his use of a topical top-level domain. We over here at .press salute you, sir! 
  4. Electric cars weren’t cool five years ago. Now they’re kind of cool10. 
  5. In a stroke of genius, Cadillac called a similar system ‘Super Cruise’. I’ll be frank with you: when a salesman is going down the list of options for your new Caddy, and he says, “Do you want to add Super Cruise?” your answer is definitely going to be, “Heck yes. What’s Super Cruise?” It just sounds that cool. Also, it has a better, though not quite ideal, solution to the driver attentiveness problem. There’s a little IR camera on the steering column which tracks your gaze and requires you to look at the road. 
  6. Yes, I realize that also describes me and this article. I also just fixed my glasses. 
  7. Never let it be said that our qualities do not include self-awareness and self-deprecation! 
  8. The occasional embarrassed dozing pilot story notwithstanding. 
  9. That’s why it’s always news on the exceedingly rare occasions when it happens, and frequently news when it doesn’t, but merely almost happens. 
  10. If poorly built, but Tesla say they’re working on that. 

OpenTafl v0.4.6.2b released

OpenTafl has seen some major development work for the first time in a while, culminating in the recent release of v0.4.6.2b.

This version adds some major features which have been on my to-do list for some time. First, the in-game and replay board views now support keyboard input, a major usability enhancement. No longer will mistaken command entries torpedo your games!

That feature, however, was merely a happy piece of good fortune, falling out of the true headline feature for the v0.4.6.x releases: a variant editor. That’s right. OpenTafl is now the best tool available for designing tafl rules, bar none. Not only can you individually tweak every rule described in the OpenTafl notation specification, you can also edit the board layout to look any way you like.

If you’re interested in playing a few games with the new UI or experimenting with rules variants all your own, you can, as always, get the latest version from the OpenTafl website. I haven’t promoted v0.4.6.x to the stable release yet, but I expect to do so soon.

With these features done, I turn my attention next to a few network-centric things for v0.5.x. OpenTafl’s network play server has not, to date, seen much use; now that PlayTaflOnline.com is getting close to its new architecture, I hope to write a PlayTaflOnline front end for OpenTafl, so you can play games at PlayTaflOnline with all of OpenTafl’s rich support for replays, commentary, and analysis. OpenTafl’s network server mode and self-contained network play will continue to be a supported mechanism for remote games, but won’t see new features. v0.5.x will also contain an automatic updater, to simplify the end-user updating process.

Looking further into the future, I’m running out of OpenTafl features I want to do. With luck, 2017 will see a v1.0 release.

The Crossbox Podcast: Episode 18 – Monsters Among Us

In this October-themed episode for April, we talk monster hunting, monster-size reference books, and monstrous failures, with a side order of cheap beer and cheap Glocks.

Further reading
Spoilers on Resident Evil 7 from about 32:00 to 35:30.
Predictably, John was correct and Jay was not on a point of firearms trivia: the Remington R51 is a modernized Remington Model 51, an early-20th-century pocket pistol.


Fishbreath Plays: MHRD Review

If you like puzzle games, it’s a good time to be alive. You’ve got your programming puzzle games, like Shenzhen I/O, SpaceChem, and really, the entire Zachtronics catalog; you’ve got your process optimization puzzle games, like Big Pharma and Production Line; you’ve got puzzle games of every shape, size, color, and description.

You even have, it turns out, logic puzzlers. That’s where MHRD comes in. You’re a hardware engineer for the waggishly-named Microhard, a company building the Next Big Thing in CPU design in the 1980s. You start with a single logic element: a NAND gate (for the uninitiated, that means not-and). You end up with a simple but entirely functional 16-bit CPU1, designing all the logic circuits you need along the way. Start with NAND, build the rest of the logic gates, figure out your multiplexers, demultiplexers, adders, and memory elements, put it all together into your higher-level CPU components.

It’s packaged in a fun, old-timey DOS-style terminal editor, and unlike a lot of retro UIs, it doesn’t wear out its welcome. All your circuit design happens in a hardware description language, in an in-game editor. The editor has some foibles: it doesn’t scroll, and it only does line wrapping when adding text. On the other hand, it has a decent auto-completion engine. The hardware description language makes sense: you refer to pins and buses by name, connecting them to other pins and buses with an arrow operator. For instance, opCode->alu.opCode would connect the circuit’s opCode input to the ALU’s opCode input. Generally, the syntax is straightforward and easy to remember. Sound effects are basic; you get a background fan whir befitting an old PC, and an IBM keyboard sound effect which grows tiresome after a while.

That’s all there is to it, which brings me to my next point. Is it good as a game? That’s a harder question to answer. It sits in a difficult middle ground. It can’t be too freeform—given an instruction set and a CPU specification, very few people who don’t already know how could build all the necessary subcomponents. At the same time, it shouldn’t be too static, or else it feels a little too much like rote construction of the truth table for the component at issue. MHRD errs a bit too far in the latter direction. There is no real sandbox. All you’re doing is building the gates and circuits the game tells you to, in exactly that order. There’s no discovery to be had, and not a lot of freedom to design solutions in different ways. Unlike in, say, Shenzhen I/O, the problems are small enough that it’s never all that unclear how to solve them.

That isn’t to say that there’s no fun to be had. If you aren’t a hardware engineer, or a software engineer with a deep interest in hardware2, you will find it fascinating how few steps it takes to get from a NAND gate to a functioning processor3. There are leaderboards, too, based on NAND counts for each element. Given that logic design is a fairly well-understood field, the NAND counts are uniformly the smallest possible number of gates required for each task, which gives you a nice target to aim for. The developer is active on his Steam forum, and seems to have more planned for the game. Given that it’s an atmospheric logic puzzle that I, an experienced software engineer, found enjoyable and educational, I think it’s worth a buy. (You may want to wait for a sale.)

At the same time, there’s a better way. (If you’ve been reading the footnotes, you should have seen this coming.) There’s a free course out there, a Computer Science 101 sort of thing, called Nand2Tetris. As the name suggests, it’s similar to MHRD in that you’re building a CPU from a NAND gate alone. Nand2Tetris differs in two ways. First, it isn’t a game. As such, there isn’t a plot (MHRD’s is skeletal, but present), or any pretension that it’s about anything besides learning. Second, it goes a lot further. MHRD stops at the aforementioned functional CPU. The last puzzle combines the instruction decoder, the ALU, and some registers, and that’s it. It verifies your solution by running a few instructions through the CPU, and you’re done.

Nand2Tetris, as the name suggests, keeps going. After you finish the CPU, you write a compiler to generate machine code for it. After you write your compiler, you write an operating system. After that, you can run Tetris. Furthermore, although you have assignments, you also have a proper sandbox. You get a hardware design language and a hardware simulator, and you can build anything you like. That, I feel, is the promise of a logic design puzzle game, and MHRD doesn’t quite deliver.

In the final reckoning, though, I say MHRD is worth the price of entry. I don’t really have the inclination to write my own compiler, and I have plenty of other software projects besides. If you’re only interested in the logic design portion, you ought to get MHRD too. If, on the other hand, you want to really understand how computers work—how the processor you built becomes the computer you use every day—try Nand2Tetris instead.


  1. It’s very similar in architecture, I understand, to the CPU designed in the Nand2Tetris course. We’ll come back to that. 
  2. Or a very good memory for that hardware class you took back in college. 
  3. Not counting the memory elements, the CPU task takes fewer than 800 NAND gates in the minimal solution. My current best is 3500. 

Nathaniel Cannon and the Lost City of Pitu Released!

Nathaniel Cannon and the Lost City of Pitu

The year is 1929. In the aftermath of the Great War, the world rebuilds, and the mighty zeppelin is its instrument. Carrying trade between every nation, airship merchantmen attract an old menace for a new age: the sky pirate. One man stands out above the rest. Ace pilot, intrepid explorer, and gentleman buccaneer Nathaniel Cannon and his gang, the Long Nines, prowl the skies in hot pursuit of wealth and adventure.

Cannon receives word from a sometime friend in Paris about a job in the Dutch East Indies. The contact tells a tale of a mysterious lost city, bursting with treasure, not seen by human eyes for a thousand years. Will his tip pay off? Or will it lead the Long Nines straight to a fight for their lives, lost in the unfriendly depths of the Indonesian jungle?

Nathaniel Cannon and the Lost City of Pitu, the first of the Nathaniel Cannon adventures by Soapbox contributor Jay Slater, is now available at Amazon, Barnes and Noble, Apple iBooks, Kobo, and Smashwords for $1.99. E-books include two never-before-seen short stories featuring the Long Nines. Get your copy today.

The Crossbox Podcast: Episode 17 – Grab Bags

In this episode, we can’t decide on one item for each topic, so instead we bring you a grab bag of grab bags. Jay talks about backwards aircraft carriers and the origin of the minimap, John tells you about news which was fresh when we recorded and old when we publish, and a new audio setup reduces Jay’s obnoxious breathing noises by up to 80%.

Further reading
John breaks the Wilson Combat EDC X9 story several hours before any major source
John decides the EDC X9 is stupid

How-To: Two USB Mics, One Computer, JACK, and Audacity

The Crossbox Podcast is going upmarket: I now have two USB microphones, and for the March episode, parvusimperator and I will each have one directly in front of us. This is a wonderful advance for audio quality, but it does pose some problems:

  1. Audacity, our usual recording tool of choice (and probably yours, if you ended up here), only supports recording from one source at once.
  2. Though other tools support recording from multiple sources, the minor variations in internal clocks between two USB microphones mean that each microphone’s sample rate drifts in a slightly different fashion, and that longer recordings will therefore drift out of sync.

Modern Linux, fortunately, can help us out here. We have need of several components. First, obviously, we need two microphones. I have a Blue Snowball and a CAD Audio U37, with which I’ve tested this procedure1. Second, we need a computer with at least two USB ports. Third, we need the snd-aloop kernel module. (If your Linux has ALSA, you probably already have this.) Fourth, we need JACK, the Linux low-latency audio server. Fifth, we need the QJackCtl program.

Before I describe what we’re going to do, I ought to provide a quick refresher in Linux audio infrastructure. If you use Ubuntu or Mint, or most other common distributions, there are two layers to your system’s audio. Closest to the hardware is ALSA, the kernel-level Advanced Linux Sound Architecture. It handles interacting with your sound card, and provides an API to user-level applications. The most common user-level application is the PulseAudio server, which provides many of the capabilities you think of as part of your sound system, such as volume per application and the ‘sound’ control panel in your Linux flavor of choice. (Unless you don’t use Pulse.)

JACK is a low-latency audio server; that is, a user-level application in the same vein as Pulse. It has fewer easily accessible features, but allows us to do some fancy footwork in how we connect inputs to outputs.

Now that you have the background, here’s what we’re going to do to connect two mono USB microphones to one computer, then send them to one two-channel ALSA device, then record in Audacity. These instructions should work for any modern Linux flavor. Depending on the particulars of your system, you may even be able to set up real-time monitoring.

  1. Create an ALSA loopback device using the snd-aloop kernel module.
  2. Install JACK.
  3. Build QJackCtl, a little application used to control JACK. (This step is optional, but makes things much easier; I won’t be providing the how-to for using the command line.)
  4. Use JACK’s alsa_in and alsa_out clients to give JACK access to the microphones and the loopback device.
  5. Use QJackCtl to connect the devices so that we can record both microphones at once.

We’ll also look at some extended and improved uses, including some potential fixes for real-time monitoring.

Create an ALSA loopback device
The ALSA loopback device is a feature of the kernel module snd-aloop. All you need to do is # modprobe snd-aloop and you’re good to go. Verify that the loopback device is present by checking for it in the output of aplay -l.
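
For reference, here’s roughly the whole sequence, assuming a Debian-family system; the last line is optional, and just brings the module back automatically after a reboot.

# modprobe snd-aloop
$ aplay -l | grep -i loopback      # the Loopback card should show up here
# echo snd-aloop >> /etc/modules   # optional: load the module at every boot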

The loopback device is very straightforward: any input to a certain loopback device will be available as output on a different loopback device. ALSA devices are named by a type string (such as ‘hw’), followed by a colon, then a name or number identifying the audio card, a comma, and the device number inside the card. Optionally, there may be another comma and a subdevice number. Let’s take a look at some examples.

  • hw:1,0: a hardware device, card ID 1, device ID 0.
  • hw:Loopback,1,3: a hardware device, card name Loopback, device ID 1, sub-device ID 3.

For the loopback device, anything input to device ID 1 and a given sub-device ID n (that is, hw:Loopback,1,n) will be available as output on hw:Loopback,0,n, and vice versa. This will be important later.
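
If you want to convince yourself of that pairing before JACK enters the picture, a quick test with stock ALSA tools should do it; adjust the card name if your loopback shows up under a different one.

$ speaker-test -D hw:Loopback,1,0 -c 2 -t sine &                          # feed a test tone into one side
$ arecord -D hw:Loopback,0,0 -c 2 -f S16_LE -r 48000 -d 5 looptest.wav    # record it from the other side
$ kill %1                                                                 # stop the test tone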

Install JACK
You should be able to find JACK in your package manager2, along with Jack Rack. In Ubuntu and derivatives, the package names are ‘jackd’ and ‘jack-rack’.
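
On Ubuntu, Mint, and other Debian-family distributions, that amounts to a one-liner; package names may vary a little elsewhere.

$ sudo apt-get install jackd jack-rack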

Build QJackCtl
QJackCtl is a Qt5 application. To build it, you’ll need qt5 and some assorted libraries and header packages. I run Linux Mint; this is the set I had to install.

  • qt5-qmake
  • qt5-default
  • qtbase5-dev
  • libjack-jackd2-dev
  • libqt5x11extras5-dev
  • qttools5-dev-tools

Once you’ve installed those, unpack the QJackCtl archive in its own directory, and run ./configure and make in that directory. The output of ./configure will tell you if you can’t continue, and should offer some guidance on what you’re missing. Once you’ve successfully built the application, run make install as root.
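
Put together, and assuming the dependency list above plus a source archive whose name I’ve left as a placeholder, the whole build looks something like this:

$ sudo apt-get install qt5-qmake qt5-default qtbase5-dev libjack-jackd2-dev libqt5x11extras5-dev qttools5-dev-tools
$ tar xzf qjackctl-x.y.z.tar.gz    # placeholder archive name
$ cd qjackctl-x.y.z
$ ./configure
$ make
$ sudo make install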

Run QJackCtl
Run qjackctl from a terminal. We should take note of one feature in particular in the status window. With JACK stopped, you’ll notice a green zero, followed by another zero in parentheses, beneath the ‘Stopped’ label. This is the XRUN counter, which counts up whenever JACK doesn’t have time to finish a task inside its latency settings.

Speaking of, open the settings window. Front and center, you’ll see three settings: sample rate, frames per period, and periods per buffer. Taken together, these settings control latency. You’ll probably want to set the sample rate to 48000, 48 kHz; that’s the standard for USB microphones, and saves some CPU time. For the moment, set frames per period to 4096 and periods per buffer to 2. These are safe settings, in my experience. We’ll start there and (maybe) reduce latency later.
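
(For the curious: under the hood, those settings amount to a jackd command line something like the one below, where hw:PCH stands in for whatever ALSA calls your main sound card. QJackCtl runs this for you, so treat it as reference rather than gospel.)

$ jackd -d alsa -d hw:PCH -r 48000 -p 4096 -n 2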

Close the settings window and press the ‘Start’ button in QJackCtl. After a moment or two, JACK will start. Verify that it’s running without generating any XRUN notifications. If it is generating XRUNs, skip down to the latency-reduction section below and try some of the tips there, then come back when you’re done.

Use JACK’s alsa_in and alsa_out clients to let JACK access devices
Now we begin to put everything together. As you’ll recall, our goal is to take our two (mono) microphones and link them together into one ALSA device. We’ll first use the alsa_in client to create JACK devices for our two microphones. The alsa_in client solves problem #2 for us: its whole raison d’être is to allow us to use several ALSA devices at once which may differ in sample rate or clock drift.

Now, it’s time to plug in your microphones. Do so, and run arecord -l. You’ll see output something like this.

$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC295 Analog [ALC295 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: Audio [CAD Audio], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 2: Snowball [Blue Snowball], device 0: USB Audio [USB Audio]
  Subdevices: 0/1
  Subdevice #0: subdevice #0

This lists all the currently available capture hardware devices plugged into your system. Besides the first entry, the integrated microphone on my laptop, I have hw:1 or hw:Audio, the CAD Audio U37, and hw:2 or hw:Snowball, the Blue Snowball.

Next, set up alsa_in clients so JACK can access the microphones.

$ alsa_in -j snow -d hw:Snowball -c 1 -p 4096 -n 2 &
$ alsa_in -j cad -d hw:Audio -c 1 -p 4096 -n 2 &

Let’s go through the options. -j defines the label JACK will use for the microphone; make it something descriptive. -d declares which ALSA device JACK will open. -c declares the number of channels JACK will attempt to open.

On to the last two options: like the JACK settings above, -p defines the number of frames per period, and -n defines the number of periods per buffer. The documentation for alsa_in suggests that the total frames per buffer (frames per period multiplied by periods per buffer) should be greater than or equal to JACK’s total frames per buffer.

Next, set up an alsa_out client for the ALSA loopback device.

$ alsa_out -j loop -d hw:Loopback,1,0 -p 4096 -n 2 &

The arguments here are the same as the arguments above.

Use QJackCtl to hook everything up
Now, we’re almost done. Go back to QJackCtl and open the Connect window. You should see a list of inputs on the left and a list of outputs on the right. Your inputs should include your two microphones, with the names you provided in your -j arguments. Your outputs should include system, which is linked to your system’s audio output, and an output named ‘loop’, the ALSA loopback device.

Assuming you have mono microphones, what you want to do is this: expand a microphone and highlight its input channel. Then, highlight the system output and hit ‘connect’ at the bottom of the window. This will connect the input channel to the left and right channels of your system audio output. At this point, you should be able to hear the microphone input through your system audio output. (I would recommend headphones.) The latency will be very high, but we’ll see about correcting that later.
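
If you’d rather do this from a terminal, JACK ships a pair of small utilities for it. The port names below follow alsa_in’s usual capture_N convention; if yours differ, jack_lsp will show you what’s actually there.

$ jack_lsp                                         # list every JACK port currently registered
$ jack_connect snow:capture_1 system:playback_1
$ jack_connect snow:capture_1 system:playback_2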

If the audio output contains unexpected buzzing or clicking, your computer can’t keep up with the latency settings you have selected3. Skip ahead to the latency reduction settings. That said, your system should be able to keep up with the 4096/2 settings; they’re something of a worst-case scenario.

If the audio output is good, disconnect the microphones from the system output. Then, connect one microphone’s input to loop’s left channel, and one microphone input to loop’s right channel. Open Audacity, set the recording input to Loopback,04, and start recording. You should see audio from your microphones coming in on the left and right channel. Once you’re finished recording, you can split the stereo track into two mono tracks for individual editing, and there you have it: two USB microphones plugged directly into your computer, recording as one.
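
The command-line equivalent, under the same assumptions about port names as before:

$ jack_connect snow:capture_1 loop:playback_1    # one microphone to the left channel
$ jack_connect cad:capture_1 loop:playback_2     # the other to the right channel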

Recording more than two channels
Using Jack Rack, you can record as many channels as your hardware allows. Open Jack Rack, and using the ‘Channels’ menu item under the ‘Rack’ menu, set the number of channels you would like to record. In QJackCtl’s connections window, there will be a jackrack device with the appropriate number of I/O channels.

In Audacity, you can change the recording mode from ALSA to JACK, then select the jackrack device, setting the channel count to the correct number. When you record, you will record that many channels.

Jack Rack is, as the name suggests, an effects rack. You can download LADSPA plugins to apply various effects to your inputs and outputs. An amplifier, for instance, would give you volume control per input, which is useful in multi-microphone situations.

Reduce frames per period
If you’re satisfied with recording-only, or if you have some other means of monitoring, you can stop reading here. If, like me, you want to monitor through your new Linux digital audio workstation, read on.

The first step is to start reducing the frames per period setting in JACK, and correspondingly in the alsa_in and alsa_out devices. If you can get down to 512 frames/2 periods without JACK xruns, you can probably call it a day. Note that Linux is a little casual with IRQ assignments and other latency-impacting decisions; what works one day may not work the next.
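
Remember that the alsa_in and alsa_out clients need to be restarted with matching (or larger) buffer settings whenever you change JACK’s. At 512/2, for example, that would look like this; as noted above, what works here is very much system-dependent.

$ alsa_in -j snow -d hw:Snowball -c 1 -p 512 -n 2 &
$ alsa_in -j cad -d hw:Audio -c 1 -p 512 -n 2 &
$ alsa_out -j loop -d hw:Loopback,1,0 -p 512 -n 2 &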

You can also try using lower frames per period settings, and higher periods per buffer settings, like 256/3 or 512/3. This may work for you, but didn’t work for me.

If you come to an acceptable monitoring latency, congratulations! You’re finished. If not, read on.


Fixing latency problems
Below, I provide three potential latency-reducing tactics, in increasing order of difficulty. At the bottom of the article, just above the footnotes, is an all-in-one solution which sacrifices a bit of convenience for a great deal of ease of use. My recommendation, if you’ve made it this far, is that you skip the three potential tactics and go to the one which definitely will work.

Further latency reduction: run JACK in realtime mode
If JACK is installed, run sudo dpkg-reconfigure -p high jackd (or dpkg-reconfigure as root).

Verify that this created or updated the /etc/security/limits.d/audio.conf file. It should have lines granting the audio group (@audio) permission to run programs at real-time priorities up to 95, and lock an unlimited amount of memory. Reboot, set JACK to use realtime mode in QJackCtl’s setup panel, and start JACK. Try reducing your latency settings again, and see what happens.
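
The relevant lines should look roughly like the following; the exact spacing doesn’t matter.

@audio   -   rtprio    95
@audio   -   memlock   unlimited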

Further latency reduction: enable threaded IRQs
Threaded IRQs are a Linux kernel feature which helps deliver interrupt requests5 more quickly. This may help reduce latency. Open /etc/default/grub. Inside the quotation marks at the end of the line which starts with GRUB_CMDLINE_LINUX_DEFAULT, add threadirqs, then run update-grub and reboot.
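
On a Debian-family system, the finished line will look something like the example below (quiet splash being stock defaults here; keep whatever options you already have), and update-grub writes the change into the actual boot configuration.

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash threadirqs"
$ sudo update-grub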

Further latency reduction: run a low-latency or real-time kernel
If none of these help, you might try running a low-latency kernel. You can attempt to compile and use low-latency or real-time kernels; the Ubuntu Studio project provides them, and there are packages available for Debian. If you’ve come this far, though, I’d recommend the…

All-of-the-above solution: run an audio-focused Linux distribution
AV Linux is a Linux distribution focused on audio and video production. As such, it already employs the three tactics given above. It also includes a large amount of preinstalled, free, open-source AV software. It isn’t a daily driver distribution; rather, its foremost purpose is to be an audio or video workstation. It worked perfectly for me out of the box, and met my real-time monitoring and audio playback requirements for The Crossbox Podcast6. I recommend it wholeheartedly.

Given that my laptop is not primarily a podcast production device, I decided to carve a little 32 GB partition out of the space at the end of my Windows partition, and installed AV Linux there. It records to my main Linux data partition instead of to the partition to which it is installed, and seems perfectly happy with this arrangement.

So am I. Anyway, thanks for reading. I hope it helped!


  1. Two identical microphones actually make it slightly (though not insurmountably) harder, since they end up with the same ALSA name. 
  2. If you don’t have a package manager, you ought to be smart enough to know where to look. 
  3. This is most likely not because of your CPU, but rather because your Linux kernel does not have sufficient low-latency features to manage moving audio at the speeds we need it to move. 
  4. Remember, since we’re outputting to Loopback,1, that audio will be available for recording on Loopback,0. 
  5. Interrupt requests, or IRQs, are mechanisms by which hardware can interrupt a running program to run a special program known as an interrupt handler. Hardware sends interrupt requests to indicate that something has happened. Running them on independent threads improves the throughput, since more than one can happen at once, and, since they can be run on CPU cores not currently occupied, they interrupt other programs (like JACK) less frequently. 
  6. Expect us to hit our news countdown time cues a little more exactly, going forward. 

The Crossbox Podcast: Episode 16 – Winter, Planely Negative

In this episode, Jay picks a horrifyingly punny title, we agree that shotguns are for door locks, beanbags, and pigeons both clay and live, John picks older items in a gaming topic than Jay for once, and we discuss the only pitiful species of hornet in the world.

Further reading
Clashes and The 11 Days of Christmas, Marshall L. Michel III
A Gripen-C (payload 11,400lb) can indeed carry an empty A-4F (weight 10,450lb). We couldn’t find payload figures for the Gripen-A in our admittedly abbreviated after-show research.


