
USPSA Elo Ratings: how do they work?

In USPSA circles these days, I’m known in roughly equal measure for being ‘that East Coast revolver guy’ and ‘that Instagram Elo ratings guy’. This post is about the second topic. In particular, it’s a simple, mostly non-mathematical primer about Elo ratings in general, and Elo ratings as I have applied them to USPSA with the tool I call USPSA Analyst1.

This article is up to date as of September 2023. It may not be up to date if you’re reading it later on, and it may not be fully complete, either—this is not meant to be full documentation of Analyst’s guts (that’s the source code), but rather a high-level overview.

Elo basics

The Elo2 rating system was first developed to measure the skill of chess players. Chess is what we call a two-player, zero-sum game: in tournaments, each chess game is worth one point. If there’s a draw, the two players split it. Otherwise, the winner gets it. Elo operates by predicting how many points a player ought to win per game, and adjusting ratings based on how far the actual results deviate from its predictions.

Let’s consider two players, the first with a rating of 1000, and the second with a rating of 1200. The second player is favored by dint of the higher rating: Elo expects him to win about 0.75 points per game—say, by winning one and drawing the next, if there are two to be played. If player 2 wins a first game, his actual result (1 point) deviates from his expected result (0.75), so he takes a small number of rating points from player 1. Player 2’s rating rises, and player 1’s rating falls. Because the deviation is relatively small, the change in ratings is relatively small: we already expected player 2 to win, so the fact that he did is not strong evidence that his rating is too low. Both players’ ratings change by 10: player 2’s rating goes up to 1210, and player 1’s rating goes down to 990.

On the other hand, if player 1 wins, his actual result (1 point) deviates from his expected result (0.25) by quite a lot. Player 1 therefore gets to take a lot of points from player 2. We didn’t expect player 1 to win, so his win is evidence that the ratings need to move substantially to reflect reality. Both players’ ratings change by 30, in this case: player 2 drops to 1170, and player 1 rises to 1030.

The amount of rating change depends on how far the actual result deviates from the expected one, which in turn depends on the rating gap between the players. In Elo terms, we call the maximum possible rating change K, or the development coefficient3. In the examples above, we used K = 40 multiplied by the difference between expected result and actual result4: 40 × 0.25 makes 10, for the rating change when player 2 wins, and 40 × 0.75 makes 30, for when player 1 wins.

Expected score can be treated as an expected result, but in the chess case, it also expresses a win probability (ignoring draws): if comparing two ratings yields an expected result of 0.75, Elo thinks the better player has a 75% chance of winning.
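To make the arithmetic concrete, here's a minimal sketch of the standard two-player update in Python, using the numbers from the examples above. (The 400-point divisor is the standard chess scale.)

def expected_score(rating_a, rating_b):
    # Expected points for player A against player B, between 0 and 1.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, actual_a, k=40):
    # actual_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    change = k * (actual_a - expected_score(rating_a, rating_b))
    return rating_a + change, rating_b - change

# The favorite (1200) beats the underdog (1000): ratings move by about 10.
print(update(1200, 1000, 1))  # ~(1210, 990)

# The underdog pulls off the upset instead: ratings move by about 30.
print(update(1000, 1200, 1))  # ~(1030, 1170)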

There are a few things to notice here:

  • Standard Elo, like chess, is zero-sum: when one player gains rating points, another player must lose them.
  • Standard Elo is two-player: ratings change based on comparisons between exactly two players.
  • Elo adjusts by predicting the expected result a player will attain, and multiplying the difference between his actual result and expected result by a number K.
  • When comparing two ratings, Elo outputs an expected result in the form of a win probability.

Elo for USPSA

Practical shooting differs from chess in almost every way5. Beyond the facially obvious ones, there are two that relate to scoring, and that thereby bear on Elo. First, USPSA is not a two-player game. On a stage or at a match, you compete against everyone at once. Second, USPSA is not zero-sum. There is not a fixed number of match points available on a given stage: if you win a 100-point stage and another shooter comes in at 50%, there are 150 points on the stage between the two of you. If you win and the other shooter comes in at 90%, there are 190 points.

The first issue is simple to solve, conceptually: simply compare each shooter’s rating to every one of his competitors’ ratings to determine his expected score6. The non-zero-sum problem is thornier, but boils down to score distribution: how should a shooter’s actual score be calculated? The article I followed to develop the initial version of the multiplayer Elo rating engine in Analyst has a few suggestions; the method I settled on combines several components.
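As a sketch of that pairwise comparison (not Analyst's exact code): sum each shooter's expected scores against everyone else, then scale so the whole field's expectations sum to 1, per footnote 6.

def expected_score(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def multiplayer_expected(ratings):
    # Sum of pairwise expectations per shooter, scaled to sum to 1 across the field.
    raw = [sum(expected_score(r, other) for j, other in enumerate(ratings) if j != i)
           for i, r in enumerate(ratings)]
    total = sum(raw)
    return [x / total for x in raw]

# Three shooters: the highest-rated carries the largest expected share.
print(multiplayer_expected([1300, 1100, 1000]))  # ~[0.54, 0.29, 0.17]

With expectations in hand, the remaining question is the actual score, component by component.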

First is match blending. In stage-by-stage mode (Analyst’s default), the algorithm blends in a proportion of your match performance, to even out some stage-by-stage variation7. If you finished first at a match and third on a stage, and match blend is set to 0.3, your calculated place on that stage is 1 × 0.3 + 3 × 0.7 = 2.4, rewarding you on the stage for the match win.

Second is percentages. For a given rating event8, the Elo algorithm in Analyst calculates a portion of actual score based on your percentage finish. This is easy to justify: coming in 2nd place at 99.5% is essentially a tie, as far as how much it should change your rating, compared to coming in 2nd place at 80%. The percentage component of an actual score is determined by dividing the percent finish on the stage by the sum of all percent finishes on the stage. For instance, in the case of a stage with three shooters finishing 100%, 95%, and 60%, the 95% finisher’s percentage contribution to actual score is 95 / (100 + 95 + 60) = 0.372.

The winning shooter gets some extra credit based on the gap to second: for his actual score, we treat his percentage as P1 ÷ P2, where P1 and P2 are the first- and second-place finish percentages. Continuing the example above, the winner’s 100% becomes 100/95, or about 105.3%, for an actual score of 105.3 / (105.3 + 95 + 60) ≈ 0.404, against 0.392 done the other way around9.

Analyst also calculates a place score according to a method in the article I linked above: the number of shooters minus the actual place finish, scaled to be in the range 0 to 1. Place and percent are both important. The math sometimes expects people who didn’t win to finish above 100%, which isn’t possible in our scoring, and that constraint is difficult to encode in Elo’s expected score function. (Remember, the expected score is a simple probability, or at least the descendant of a simple probability.) Granting points for place finish allows shooters who aren’t necessarily contesting stage and match wins to gain Elo even in the cases where percentage finish breaks down. On the other hand, percentage finish serves as a brake on shooters who win most events they enter10. If you always win, eventually you need to start winning by more and more (assuming your competition isn’t getting better too) to keep pushing your rating upward.

Percent and place are blended similarly to the match/stage blending above, each part scaled by a weight parameter.
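Here's an illustrative sketch of that actual-score calculation: a percentage component with the winner's gap-to-second bonus, a place component, and a weighted blend. The weights and normalization details are placeholders of mine, not Analyst's real values.

def actual_scores(percents, percent_weight=0.5):
    # percents: stage finish percentages in finish order, e.g. [100, 95, 60].
    n = len(percents)
    if n == 1:
        return [1.0]

    # Percentage component, with the winner's share boosted to P1/P2.
    adjusted = list(percents)
    adjusted[0] = 100 * percents[0] / percents[1]
    pct = [p / sum(adjusted) for p in adjusted]

    # Place component: shooters minus place, normalized here to sum to 1.
    raw_place = [n - place for place in range(1, n + 1)]
    place = [p / sum(raw_place) for p in raw_place]

    # Blend the two parts by weight.
    return [percent_weight * a + (1 - percent_weight) * b for a, b in zip(pct, place)]

print(actual_scores([100, 95, 60]))  # ~[0.54, 0.35, 0.12]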

That’s the system in a nutshell (admittedly a large nutshell). There are some additional tweaks I’ve added onto it, but first…

The Elo assumptions I break

Although the system I’ve developed is Elo-based, it no longer follows the strict Elo pattern. Among a bevy of minor assumptions about standard Elo that no longer hold, there are two really big ones.

Number one: winning is no longer a guaranteed Elo gain, and losing is no longer a guaranteed Elo loss. If you’re a nationally competitive GM and you only put 5% on the A-class heat at a small local, you’re losing Elo because the comparison between your ratings says you should win by more than that. On the flip side, to manufacture a less likely situation, if you’re 10th of 10 shooters but 92% of someone who usually smokes you, you’ll probably still gain Elo by beating your percentage prediction.

Number two: it’s no longer strictly zero-sum. This arises from a few factors in score calculation, but the end result is that there isn’t a fixed pool of Elo in the system based on the number of competitors. (This is true to a small degree with the system described above, but true to a much larger degree once we get to the next section.)

Other factors I consider

The system I describe above works pretty well. To get it to where it is now, I consider a few other factors. Everything listed below operates by adjusting K, the development coefficient, for given stages (and sometimes even individual shooters), increasing or decreasing the rate of rating change when factors that don’t otherwise affect the rating system suggest it’s too slow or too fast. (A sketch of how these adjustments might compose follows the list.)

  • Initial placement. For your first 10 stages, you get a much higher K than usual, decreasing down to the normal number. This is particularly useful when new shooters enter mature datasets, allowing them to quickly approach their true rating. Related to initial placement is initial rating: shooters enter the dataset at a rating based on their classification, between 800 for D and 1300 for GM.
  • Match strength. For matches with a lot of highly-classified shooters (to some degree As, mostly Ms and GMs), K goes up. For matches heavier on the other side of the scale, K goes down.
  • Match level. Level II matches have a higher K than Level I, and Level III matches have a higher K than Level II.
  • DQs and DNFs. When a shooter DNFs a stage (a DNF, in Analyst, is defined as zero time and zero hits), that performance is ignored for both changing his own rating, and contributing to rating changes for other shooters. If match blend is on in stage-by-stage mode, it is ignored for DQed shooters. In by-match mode, DQed shooters are ignored altogether.
  • Pubstomps. If the winning shooter at a match is an M or GM, two or more classes above the second place finisher, and the winner by 25% or more, his K is dramatically reduced, on the theory that wins against significantly weaker competition don’t provide as much rating-relevant information as tighter finishes. This mostly comes into play in lightly-populated divisions.
  • Zero scores. Elo depends on comparing the relative performances of shooters. One zero score is no different from any other, no matter the skill difference between the shooters, so the algorithm can’t make any assumptions about ratings based on zero scores. If more than 10% of shooters record a zero on a stage, K is reduced.
  • Network density. Elo works best when operating on densely-connected networks of competitors. The network density modifier (called ‘connectivity’ or ‘connectedness’ most places in the app) increases K when lots of shooters at a given match have recently shot against a lot of other shooters, and decreases K when they haven’t. In a sense, connectivity is another measure of rating reliability: someone with low connectivity might be an artifact of an isolated rating island, shooting against only a few people without exposure to shooters in the broader rating set.
  • Error. Rating error is a measure of how closely a shooter’s actual scores and expected scores have matched recently. If the algorithm’s predictions have been good, error gets smaller, and K also gets smaller. If the algorithm’s predictions have been bad, error gets bigger, and so does K, to help the rating move toward its correct value more quickly. This is an optional setting, but is currently enabled by default.
  • Direction. Direction is a measure of how positive or negative a shooter’s recent rating history is: positive direction means that recent events have generally led to increases in rating, with 100 meaning that every recent change is a positive change. Direction awareness increases K when a shooter has highly positive or negative direction and is moving in that direction, and decreases K when a shooter moves opposite a strong direction trend11. Direction awareness is an option, currently disabled by default.
  • Bomb protection. Bomb protection reduces the impact of single bad stages for highly-rated shooters. Because people with above-average ratings lose much more on the back of a single bad performance than they gain because of a single good performance, ratings can unfairly nosedive in the case of, say, a malfunction or a squib. Bomb protection attempts to detect these isolated occurrences, and reduces K significantly for them. Repeated bad performances lose protection, and allow the rating to begin moving freely again. Bomb protection is an option, currently disabled by default.
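As promised, a minimal sketch of how K modifiers like the above might compose; the multiplier values are invented for illustration, and Analyst's actual numbers and logic differ.

def effective_k(base_k, modifiers):
    # modifiers: one multiplier per factor; 1.0 means no adjustment.
    k = base_k
    for factor in modifiers.values():
        k *= factor
    return k

# A hypothetical new shooter at a strong Level II match:
print(effective_k(40, {
    "initial_placement": 2.5,  # first ~10 stages get a much higher K
    "match_strength": 1.2,     # lots of Ms and GMs entered
    "match_level": 1.15,       # Level II over Level I
    "zero_scores": 1.0,        # under 10% zeroes on the stage: no reduction
}))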

In 2024, bomb protection and direction awareness will be enabled for the main ratings I do. Notably, they make the ratings slightly less accurate at predicting match results. The effect is small enough that it may not be statistically significant, and enabling them substantially improves the leaderboards according to the experts who have seen the output. At the same time, the predictions will continue to use the 2023 settings, since (again) those are slightly but consistently more accurate.

In any case, that’s the quick (well, quick-ish) tour of Elo ratings for USPSA. Drop me a comment here, or a DM on Instagram (@jay_of_mars) if you have questions or comments.


  1. The name may change at some point, if USPSA asks, to avoid stepping on their trademark. At the time of writing, I’m not affiliated with USPSA, and they have not publicly sanctioned my work (in either the positive or negative sense of ‘sanction’). 
  2. Not ELO—the system is named after its inventor, Hungarian-American physics professor Arpad Elo. It’s pronounced as a word (“ee-low”), not as an initialism (“ee-el-oh”). 
  3. You’ll find ‘development coefficient’ in other sources, mostly. I always call it ‘K’. 
  4. I will probably refer to these as ‘expected score’ and ‘actual score’ elsewhere in the article. They’re the same thing. 
  5. “You don’t say!” 
  6. Expected scores and actual scores are scaled so that they sum to 1 across all competitors. This is mainly for convenience and ease of reasoning. 
  7. Some, but not all. Stage-by-stage mode works better in my experience: the more comparisons an individual shooter has, the better the output, even if winning stages isn’t quite what we’re measuring in USPSA. 
  8. A rating event is a stage, in stage-by-stage mode, or a match in match-by-match mode. 
  9. Writing this article, I realize I should probably be making the adjustment for every shooter, but that’ll have to wait until the 2024 preseason12. 
  10. I’m a good example of the latter: Analyst isn’t willing to bump my rating by very much at most revolver matches, because it’s expecting me to put large percentages on the field. 
  11. The reasoning here is that a shooter whose rating matches his current level of skill should have a direction near 0, which is to say a 50-50 mix of positive and negative changes. 
  12. There’s probably a more correct way to generate percentage-based scores in general, but I haven’t set upon it yet, even if I have a few ideas of where I’m not quite on track. 

Bureaucracy: a game mechanic idea

I’m creeping toward the end of a game of Stellaris, and an incongruity hit me: why am I hiring more bureaucrats to make things in my large empire cheaper, against all reason and every historical example?

Let me explain. In Stellaris, there is a soft cap on the size of your empire: sprawl. Population, production buildings, and territory all generate sprawl. Bureaucrats increase your sprawl cap. If you go over your sprawl cap, things cost more. Ergo, bureaucrats make things cost less, which is facially absurd.

But how do you solve it? I think the answer is that maybe you don’t, or at least not entirely. You just need to measure two things: first, sprawl, which bureaucrats counteract, and second, administrative efficiency. Sprawl works like it does in Stellaris: the more empire there is, the more expensive things get, unless you have administrators to counteract the effect.

Administrative efficiency models the loss of efficiency from thicker red tape. The more administrators you have, the lower your efficiency gets, and the more things cost. Administration is less expensive than a vast kleptocracy, but still expensive compared to a smaller, leaner state.

Of course, that’s just a surface-level implementation. You might tune things so that size of empire and size of administrative state play off of each other, which would let you (imaginary game-designing reader) set a soft cap on effective empire size. Or, if you’re really into the concept that bureaucracy is at best a mixed bag, implement a kind of cost disease.

After a while, the goal of any large organization of humans becomes ‘justify this organization’s continued existence’. Bureaucracies almost never shrink over the long run, absent some outside cataclysm. They’re much more likely, instead, to grow. So, a given unit of bureaucracy is created to administer a given amount of stuff. The amount of stuff per unit bureaucracy never goes up, but the administrative cost of each unit of bureaucracy does.

Which brings us to the final form of the idea. When your empire expands, it needs bureaucrats. When you hire bureaucrats, you introduce a slowly-growing ossification into the structure of your empire. Eventually, your bureaucrats cut into your ability to do productive things, taking up more and more of your output until your empire is paralyzed by the cost of running itself, and eventually torn down by forces without.

This neatly mirrors some real-world trajectories. Depending on how you tune things (capping the penalty one bureaucrat imposes, tweaking the rate at which the penalty grows) it’s probably possible to set up an empire that can survive in a sort of semi-stasis, handicapped but not quite self-destructing. It opens the door to a wide variety of empire traits and events (moral, dutiful bureaucrats who aren’t as bad in the long run, bursts of patriotic fervor in e.g. a war temporarily reversing the downward trend, and so forth) to boot.
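If you wanted to prototype that tuning, a toy model might look something like this; every number below is invented purely for illustration.

def empire_net_output(size, bureaucrat_ages, stuff_per_bureaucrat=10.0,
                      base_upkeep=1.0, ossification_rate=0.02):
    # Each bureaucrat administers a fixed amount of empire, but costs more
    # the older the bureaucracy gets: the ossification mechanic above.
    if len(bureaucrat_ages) < size / stuff_per_bureaucrat:
        size *= 0.75  # under-administered: the sprawl penalty bites
    upkeep = sum(base_upkeep * (1 + ossification_rate * age) for age in bureaucrat_ages)
    return size - upkeep

# The same empire, young and old: same size, same headcount, shrinking output.
print(empire_net_output(100, [0] * 10))    # young bureaucracy: 90.0
print(empire_net_output(100, [200] * 10))  # ossified bureaucracy: 50.0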

Anyway, I’m not working on anything in the 4X/empire-builder genre, so if you like it, take it. I’d love to play a game which leans into this idea.

Some brief thoughts on game design: make the player earn it

Among the many things parvusimperator and I chat about on our coffee breaks at work are video games, and in particular those we’re playing at any given moment. For me, for now, that’s BattleTech, the recent turn-based entry by BattleTech (the miniatures wargame) creator Jordan Weisman. For parvusimperator, it’s been Resident Evil 2 2, PS4 boogaloo. That is, the recent Resident Evil 2 remake1. The two are very different games, but in the end, they do make the player earn it.

BattleTech: mercenary life, paycheck-to-paycheck edition

In BattleTech-the-setting, mercenary companies are indisputably the coolest way to play. The meta-story around the battles writes itself—dragging damaged mechs back to the dropship, patching them up as best you can, sending them out again to pay the bills.

A lot of BattleTech-the-setting PC games have only partially delivered on this promise in the past. The majority of them have been mech-piloting games rather than mech-management games, which makes it more difficult to come up with an AI that properly challenges the players. Too, it takes a more serious masochist to pilot a degraded mech in first-person than it does to manage some other poor shmuck doing the driving.

BattleTech, on the other hand, leans hard into the mercenary-life-is-painful trope. Not quite as much as Battle Brothers, but not too far behind it, either. In particular, early in the game, you’ll find yourself barely getting by, scrabbling for easy money wherever you can come across it, and cursing the moments when your intel misses some key piece of information about the strength of the opposition.

Eventually, things get better. You hire a few more mech pilots, so that losing one to injury doesn’t put you so far behind the curve. You salvage a few more mechs2, so you can field more weapons or sub in a B lance if your A lance is in for repairs. I’m in the early midgame now, and have a few months of salary cushion and close to a second lance. Things are still tight, though, and unlikely to get very much less tight until I can bulldoze missions with maximum firepower. One or two bad drops, and I’ll be right back where I was, only getting along by the skin of my teeth.

What you get over time is resilience—the game itself doesn’t get any easier, but setbacks get smaller proportional to what you’ve attained.

Resident Evil 2: the cool toys are for closers

My thoughts on this one are less my own and more parvusimperator’s transcribed, but he’s working on defense commentary articles, and we all want him to keep working on those, so here we are.

I’d wager that many of the people playing the Resident Evil 2 remake have fond memories of Resident Evil 2 the original. The other side of the coin is that those same people remember how Resident Evil 2 went. So, in addition to the variations present in the original (that you can play from the perspective of both main characters), it adds a few more wrinkles, which I’ll leave parvusimperator to expand upon in a comment, if he wants3.

Eventually, after you’ve beaten the game with a given character in a given manner, you can go back and play with all the toys from the get-go, infinite ammo, and suchlike things. You know, how you would approach a zombie thing if you knew one was coming, rather than being surprised by it, like the characters are.

What you get over time is ease—the game gives you tools to beat it more readily.

Conclusion: winning easily is more fun if it was hard at first

In both games, the end result is a positive feedback loop. Play well? The game makes it easier for you to win later. Put another way, the difficulty curve is a hill: it starts on an upslope and ends on a downslope.

“I should make my game easier just as people are getting better at it” sounds like a questionable design choice, but it makes a lot of sense in both cases. In BattleTech, the change in difficulty curve is subtler, but important nevertheless. If the game was so finely tuned that no matter how impressive a mercenary company you put together, you’re always just barely getting by, it wouldn’t feel at all rewarding.

In Resident Evil 2, the change is more obvious. “Here’s infinite ammo!” is not sneaky. At the same time, though, it makes sense. Why are you replaying the game? Because you enjoyed it the first time through, and want to see it again. Do you want to do things the survival horror way? Maybe you don’t. After you’ve seen it how you were supposed to, the game ceases to care if you want to play outside the boundaries.

So there you have it4. Make your game get harder at first, then sneakily (or not, depending on your goals) easier later on, so that your players can properly experience gaining mastery.


  1. I’m going to bury this tidbit to see how closely he reads my articles: Resident Evil 3 is reportedly getting the same treatment. 
  2. And that’s your only option. Nobody sells fully-functioning mechs—why would they? They’re difficult or impossible to make. If you have a working one, you keep it. If it breaks down and you can’t fix it, you sell the bits on and use the money to buy bits to repair your other mechs. 
  3. There’s a lot of creativity in how many New Game+ options you have. 
  4. It’s something I’ve been thinking about with respect to tabletop RPG design, too, and why perfect balance is not necessarily desirable. If you get more powerful, but your foes also do at exactly the same rate, what have you accomplished but for reskinning the fight against six rats at the very start of the campaign? 

On Sailless Submarines

Per the WWRW report this week, the Chinese have made a sailless submarine prototype. This is not a new idea; the United States, the Soviet Union, and France have kicked this idea around in the past. Let’s look into why one might want to delete the sail, and what tradeoffs that brings.

Why would one want to delete a sail? That’s simple: speed. US Navy sub designers reckoned that deleting the sail (and the drag from it) would gain you about 1.5 knots of speed, all other things being equal. It also removes the problem of inducing a snap roll tendency in turns.

Like everything else, it’s not without its tradeoffs. Clearly, we still need masts and some way for the crew to enter and leave the submarine. We also need some sort of conning tower facility to steer the sub when it’s surfaced. Prior designs tend to accomplish the first by folding masts and periscopes down into a fairing, and the rest with a retractable conning tower for steering and crew access that likewise stows into a fairing. That fairing will add some drag back. Bureau of Ships actually figured that a fairing capable of handling all of the relevant systems would be about as draggy as a well-designed, small sail.

Having a sail allows the submarine to be a bit deeper at periscope depth, which helps with stability in rougher sea states. The sail itself is also an aid to stability, and means that the rear fins don’t have to be as large.

The US Navy actually gave serious consideration to the sailless concept twice, going as far as to make some models of the concept when designing what would become the Los Angeles-class. However, this ran into opposition from Hyman Rickover, who wanted fast submarines now, and did not want to do a bunch of hullform comparisons when he could simply design around a larger reactor and call it a day. Rickover managed to kill the concept, and the Los Angeles boats all had a traditional (albeit small) sail.

Bradley Advanced Survivability Test Bed

We’ve known that crew survivability can be enhanced by isolating crew from the ammo, and providing blow-out panels to direct any cook-offs away from the crew. These features are usually designed in from the beginning, as in the M1 Abrams or T-14. Let’s look at a test bed designed to add these features after the fact.

The M2 Bradley Fighting Vehicle and M3 Cavalry Fighting Vehicle carry an awful lot of ammunition, and aren’t super well protected. US Army studies indicated that an infantry carrier like the Bradley was likely to be targeted by anything on the battlefield, including the antitank weapons that it really wasn’t designed to resist. While explosive reactive armor could be added to supplement existing armor, this wouldn’t do very much against APFSDS rounds.

The Bradley Advanced Survivability Test Bed (ASTB) implemented a pretty extensive redesign of stowage. Most of the TOW missiles were moved to hull stowage racks outside of the crew compartment, with three missiles in an external compartment in addition to the two in the launcher. Two more were stored low on the floor of the crew compartment, although these could be replaced with Dragon missiles that were of more use to the dismounts. This limited amount of stowage in the crew compartment was intended to allow the vehicle to fight if the external stowage was not immediately accessible. Reserve 25mm ammunition was compartmentalized, with blow-off panels and separation for the rounds provided in the compartments. As a result, reserve ammunition capacity was reduced from 600 rounds in a regular M2 to 588 rounds in the ASTB.

Fuel was also mostly moved to large, external tanks at the back of the vehicle to prevent fires in the crew compartment. A 30 gallon “get home” reserve tank was provided internally.

The ASTB was also fitted with spall liners, additional applique armor, and protection for the sights. These features would get rolled into production models of the Bradley after live-fire testing of several models, including the ASTB, in 1987.

As for the rest of the features, I don’t know why they weren’t adopted.

Wargames I Would Play: Civil War Operational Logistics

As regular readers may remember, I’m slowly slogging my way through Shelby Foote’s Civil War, and I’m struck by how little most Civil War wargames resemble the battles recounted therein, in two different ways.

First: movements in the field were often dictated by logistical concerns: I’m at the end of a tenuous supply line, and Jeb Stuart just cut it; Vicksburg is supplied over the railroad from Jackson, and Grant just captured Jackson. Wargames usually abstract supply to ‘in supply’ or ‘out of supply’, without regard to combat and noncombat supplies. It was entirely possible for an army to have plenty of food but no ammunition or vice versa, and in fact it was frequently thus. Wagons, horses and mules, forage for same, and rations or foraging for the soldiers were daily concerns.

Nor do the Civil War wargames I’ve played fully emphasize the crucial importance of railroads and river control. A torn-up railroad in your backfield wasn’t a minor inconvenience, it was a critical problem which could derail (ha) an entire offensive. Supply dumps were important, but so were the routes by which those supplies reached the front. See also Grant’s first few moves at Chattanooga.

Second: an army commander’s interactions with his troops were almost entirely through his corps commanders. He might shuffle a division from place to place, detaching it from one corps to reinforce another, but he generally wouldn’t dictate exactly how each division was supposed to be arrayed. His communications with his corps commanders would also often be over insecure or unreliable channels—letters entrusted to couriers, telegraph lines, or runners on the battlefield. His corps commanders might misapprehend his instructions, or those instructions might be rendered irrelevant or impossible to follow by changing circumstances or bad maps.

So, could a wargame simulate some of these snags? I think so, with some combination of the following features. I might work on this some myself, or leave it as an exercise for the enterprising reader. Either way, I lay no claim to any of the ideas here.

Grant’s campaign against Vicksburg is the obvious grand campaign for a game with a logistics focus: it lasted longer than any other in the war while taking place along one of the most interesting theaters in history, the Mississippi River.

Unreliable Maps

Unreliable maps are so common in war, and such a linchpin for the other features implied by the above and laid out below, that it’s surprising that no wargame I’m aware of has done them.

The primary kind of unreliability in the Civil War is missing features: maps which don’t show certain roads or certain impassable terrain features. Ultimately, this is just a different kind of fog of war, eliminated not by simply moving into it, but by dedicated scouting and mapping. It also seems to require different levels of fog of war, to represent easy or difficult features to uncover.

In the Mississippi campaign, Grant wasted a bunch of time on various canal-digging and river-diverting projects, in large part because his maps were no substitute for detailed local knowledge. Only by attempting those projects and failing at them did he eventually come to some workable solutions.

Detailed Terrain and River Systems

In the Mississippi campaign, high-resolution elevation data and a river level simulation are all but requirements, and probably the hardest part of doing a good wargame of this sort.

In the 1860s, the Father of Waters rose and fell with the rains and the seasons. A canal dug in November might overflow its banks when the spring flood comes, as the surrounding countryside floods too. A river passable by ironclad in late April might be entirely unnavigable by steamboats in late August. The ever-shifting terrain of the Mississippi basin makes for a fascinating battlefield, and one that isn’t ordinarily well-represented by wargames.

Detailed Supply

The wargame I picture is driven much less by combat than by supply. You can get away, then, with abstracting combat pretty heavily. (See below.) I don’t think you can get away with abstracting supply as much as usual. At the same time, you don’t want to get too deeply into the weeds. Some items you probably want to track separately:

  • Food: either bring it with you in your train, or forage from the countryside around you. The latter option requires constant motion, or else you run out.
  • Forage: distinct from food for the troops, forage is food for your army’s beasts of burden. It comes up a lot in Foote’s history. If you don’t have forage, you have trouble moving your train as well as your artillery.
  • Ammunition (small arms and artillery): you probably don’t need to track ammunition with more granularity than the foregoing parenthetical.
  • Environmental supplies: winter coats and boots, tents, and the like. Less a problem for the Union. More a problem for the Confederates.

You probably also should track things like pontoon bridges separately—their lateness to the battle was what torpedoed Burnside’s crack at Lee. (Well, that and Burnside’s decision to go ahead with an attack after it was no longer a surprise.)
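To make the bookkeeping concrete, here's a sketch of per-army supply state covering the categories above; the field names and units are mine, purely illustrative.

from dataclasses import dataclass

@dataclass
class ArmySupply:
    rations_days: float       # food for the soldiers, in days of supply
    forage_days: float        # feed for horses and mules, tracked separately
    small_arms_ammo: float    # abstract units; no finer granularity needed
    artillery_ammo: float     # likewise abstract
    environmental: float      # coats, boots, tents: 0 to 1 adequacy
    pontoon_bridges: int = 0  # tracked separately, per the note above

    def can_give_battle(self):
        return self.small_arms_ammo > 0 and self.artillery_ammo > 0

    def can_move_train(self):
        # No forage means trouble moving the wagons and the artillery.
        return self.forage_days > 0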

Weak Command

An army commander’s experience in the field was generally limited to watching from a headquarters, receiving reports from the field, and hearing (or failing to hear) subordinates engaging in battle.

The extent of his command, too, was limited: ensuring subordinates are in the right place, ordering attacks at a given time, and shuffling divisions around.

I think the Command Ops approach is a reasonable one for a game of this sort, though likely with even greater obstacles between the commands you give and their execution by the troops. Your runners might be captured or killed, and in most cases your orders will move at the speed of horse. If a corps of yours gets into action elsewhere on the field, you may not even know about it before you get reports saying they’ve retreated. Certainly you’ll have a hard time exercising much direct command in battle.

Conclusion

The question, then, is would a game like this be enjoyable to play? It’s hard to say. Command Ops manages to make order delays fun, but I’m not so sure that they would stay fun when your duties are primarily ordering people to capture, repair, or tear up railroads, and you rarely have very much direct control over the course of a battle.

Like I said, I don’t have the time to make this a reality, not even at the prototype stage, so history may never know the answer to the question above. Still, I think it would make for a fresh and interesting take on simulations of the Civil War.

Fishbreath Shoots: CZ P-09 .40 S&W ‘C-Zed’ Race Gun Build

If you’ve been following us for a while, you may remember my two race gun proposal posts from last year, in which I justified my desire to build a USPSA Limited gun on the cheap.

You may also recall the shootout post, in which I decided that the gun to buy, between the Beretta 96 and the CZ P-09, was the CZ.

Lastly, you may recall the CZ P-09 .40 review from last summer, in which I reviewed the base model gun.

We’re now nearly to the end of the series. In this post, we’ll explore what I did to the P-09 and what supporting equipment I bought, and, at the end, come up with a cost.

Requirements

Beyond the requirements imposed by the USPSA Limited rules, there are a few requirements I gave myself, too.

  1. A decent competition holster, preferably something with drop, offset, and adjustable retention.
  2. At least 60 rounds of ammunition on the belt. That was my setup with the M9, and I didn’t want to go any lower.
  3. A sturdy belt to hold everything.

Internals

The C-Zed’s guts are Cajun Gun Works all the way. I bought their hammer, with different spur geometry for reduced single-action trigger pull; the short reset kit, which included an extended firing pin; and a number of springs: a main spring, a reduced-strength trigger return spring, reduced springs for the firing pin plunger, and an increased-strength sear spring.

The increased-strength sear spring sounds like it’s the wrong tool for lightening a trigger pull, doesn’t it? You would be correct. Cajun Gun Works sells them as a tool for adding weight to a dangerously light trigger. I didn’t expect to need it and didn’t use it in the end, but figured that, at $10, it was worth the money just in case.

The other items on the list all work together. The hammer reduces single-action pull, the main spring reduces the work the trigger has to do, and the reduced trigger return and firing pin plunger springs reduce the spring weight you’re pulling against. The extended firing pin is necessary with the lighter main springs, because the reduced hammer impulse can otherwise cause light strikes.

I haven’t had any trouble with cheap Magtech ammo, though, with the full setup. All my primers are well-punched; none are punctured.

Everything was relatively easy to install except the trigger spring. It’s a coil spring with offset legs. The trigger has two ears and a space in the middle, and a hole for one leg of the trigger spring. You have to get one end of the spring in the hole, one end on a shelf, and the trigger ears and spring coil lined up with the holes in the frame for the pin, all while pushing the pin in. It was a four-handed job at Soapbox World HQ.

In the end, the combination of modifications resulted in a smoother 7lb double-action trigger pull, and a very crisp 2.5lb single-action trigger pull (albeit with the expected double-action takeup). Those are significant improvements over the stock 10lb double-action pull, and the stock 4.5lb single-action pull. There were also improvements in crispness, creep, and reset, thanks to the Cajun parts.

Sights

Cajun Gun Works sells Dawson Precision-made sights in traditional competition configuration: blacked-out rear sights, fiber-optic front. It comes with green and red bits of fiber, so you can pick which one you want.

These were the most annoying parts to install. The Dawson rear sight was tremendously oversized, and took about half an hour of filing before I could punch it into place. The CZ factory front sight had been glued in. Try as I might, I couldn’t even begin to loosen it. I ended up stopping by the Friendly Local Gun Shop, which has a much better heat gun; they got it off in a few minutes.

Not to be outdone, the front sight from Dawson took some filing to get installed, too. Precision is not an accurate descriptor of the sights’ fit into the dovetails.

Magazines

Cajun Gun Works’ part in things completed, I turned to CZ Custom for magazines and magazine wells. The C-Zed now mounts the large CZ Custom magazine well, which makes a big difference in ease of magazine insertion.

The P-09’s magazines, with the CZ Custom 140mm base plates and spring-and-follower kits, have a claimed capacity of 21. Parvusimperator suggested I take that with a grain of salt, so I assumed 20. I decided I wanted four magazines rather than just three to give me more flexibility on reloads; at the same time, I was looking to keep the total cost of the project down. I settled on four magazines with the 140mm baseplate, but only three with the spring-and-follower kit.

The end result is three magazines which hold 20 rounds of .40 S&W, and one magazine which holds 17. The latter can be used to get a round into the chamber before loading one of the 20-rounders to start a stage, and serves as my backup.

Belt Etc.

Midway USA makes a cheap two-part belt. I’m not looking for anything super-fancy, but the two-part setup is nice. I can mount all my gear on the outer belt and just velcro it onto the inner belt come match time, without having to undo any buckles. It holds my gear just fine. (That’s 1lb, 14oz of gun for those of you keeping track, plus 77 rounds of .40 and four magazines.)

Cook’s Holsters makes a decent Kydex competition holster starting at $47.95, or $67.95 if they install the TekLok and drop/offset rig for you. I had them do so. The holster is low-cut in the front, and has adjustable retention by means of a pair of screws running through springy rubber washers. The drop and offset are nice, making the draw a good bit easier.

I’ll continue to use my ten-dollar MOLLE-strap canvas Amazon-bought triple pistol mag pouches for magazine carriage. They do the job just fine; the retention straps fold out of the way easily, and on the Midway USA belt, they’re pinned in place by the inner belt.

In Sum

Here’s what I spent.

  • $506: CZ P-09 .40, night sights, 3 magazines
  • $294.60: Cajun Gun Works internals
  • $303.20: CZ Custom magazine well and magazine parts
  • $46.53: Fourth magazine
  • $104.27: Holster and belt

In total, the cost of this race gun project was $1254.60. (Or $1284.60, if you’re buying the magazine pouches too.) Even counting a trigger scale I bought and a case of test ammunition, the project tips the scales at under $1500. Has it reached the magical point of ‘good enough’? Only match experience will tell. Check back toward the end of April for some thoughts with that in mind.

The Crossbox Studio: multiple mic podcast recording for $60 per person

If you’re a Crossbox Podcast listener, you may have noticed that we sound pretty good. Now, granted, our1 diction is poor, and we’re still figuring out the whole hosting thing. Our voices, however, come through loud and clear, with minimal noise. While we’re recording, we monitor our audio in real time. Some people will tell you quality podcast recording with features like that takes a big investment.

They’re wrong.

The Crossbox Studio is proof. We connect two USB microphones to one computer, then mix them together in post production for maximum quality and control.

In this article, I’ll show you how you can build our recording setup, starting with microphones and accessories, and moving on to software. Let’s dive in.

Hardware

We’ll start with microphones. For high-quality recording, each host has to have a separate microphone. This is a huge help both during recording and editing; being able to edit each speaker individually provides a great deal more flexibility to whoever gets stuck with the task of editing2.

For The Crossbox Podcast, we use one Blue Snowball—too pricey to hit our goal—and one CAD Audio U37. As studio-style condenser microphones go, the U37 is extremely cheap. It comes in at a hair over $39, and the sound quality and sensitivity are superb. I recommend it wholeheartedly.

Next, we need to mount the microphones in such a way as to minimize the transmission of vibrations to the microphone. This means that the microphone won’t capture the sounds of typing on a laptop keyboard or touching the table. First off, we’ll need a microphone boom. This one clamps to the table. You don’t need anything fancier3. To hold the microphone, you’ll want a shock mount. Shock mounts suspend the microphone in a web of elastic cord, which isolates it from vibration.

If your environment is poorly acoustically controlled (that is, if you get other sounds leaking in, or if you have a noisy furnace, say), you ought to look into dynamic microphones. (The Crossbox may switch in the future.) These Behringer units are well-reviewed. If you get XLR microphones like these, you’ll also need XLR-to-USB converters.

Lastly, you’ll need a pop filter. Clamping onto the spring arm, the pop filter prevents your plosives and sibilants4 from coming through overly loud.

Let’s put it all together. Clamp the boom arm to the table. Attach the shock mount to the threaded end. Expand the shock mount by squeezing the arms together, and put the microphone in the middle. Clamp the pop filter onto the boom arm, and move it so that it’s between you and the microphone.

Congratulations! You’ve completed the hardware setup. Now, let’s talk recording.

Software

Moving on, we’re going to follow a procedure I laid out in an earlier article. Using two USB microphones at once brings some added complexity to the table. If you want to read about why this is so, hit the link above for a deeper discussion. Here, we’re going to keep it simple and just talk about the solution.

First off, you’re going to need a decently quick laptop5. Memory isn’t important. What we want is raw processing power. The amount of processing power you have on tap determines how many individual microphones you can record from.

Next, you’re going to want a specialized operating system6. Go get the appropriately-named AV Linux. This article is written targeting AV Linux 2016.8.30. Later versions change the default audio setup, which may cause problems. Create a bootable USB stick containing the image—here’s a good how-to. Boot it and install it. If you don’t plan on using AV Linux for everyday tasks (I don’t), install it in a small partition. (As little as 20 gigabytes will do, with room to spare.) Later on, when recording, you can choose a directory for temporary files, which can be in your everyday partition7.

Let’s move on. Now we’re to the point where we can talk about recording tools. The Crossbox Podcast uses two separate tools in our process. First, we route our microphone inputs through Ardour. Ardour, a digital audio workstation program, is powerful enough to do the entire process on its own. That said, we only use it for plugins, and as a convenient way to adjust our microphone levels relative to one another. We then route the audio from Ardour to Audacity, which we use to record, make final adjustments, and add sound effects.

Setting up audio routing: JACK

Time for a quick refresher on audio in AV Linux. It starts with ALSA, the Linux hardware audio driver. AV Linux, along with many other audio-focused Linux distributions, uses JACK as its sound server. JACK focuses on low latency above all else, and AV Linux ships with a real-time kernel8 to help it along. The upshot is that two ALSA devices, like our USB microphones, can be connected to our computer, with JACK clients resampling their input against a common clock to guarantee that they don’t drift out of sync.

We’ll touch on how to set up and manage JACK later. For now, let’s briefly discuss the overall audio routing setup, in terms of the path the audio takes from the microphone to your hard drive.

First, we’re going to use some JACK utilities to set up JACK devices for each of our microphones. We’ll run audio from those JACK devices through Ardour for mixing, plugins, and volume control. Next, we’ll make a dummy JACK device which takes audio from Ardour and sends it through the ALSA loopback device on the input side. Finally, we’ll use Audacity to record audio from the ALSA loopback device output.
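Schematically, the signal chain looks like this:

USB mic (ALSA hw:X) -> alsa_in -> JACK device -> Ardour track (plugins, levels)
Ardour output -> alsa_out 'loop' -> hw:Loopback,1,0 (playback side)
hw:Loopback,0,0 (capture side) -> Audacity, one channel per microphone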

Setting up audio routing: microphone in

We’ll need a few scripts. (Or at least, we’ll want them to make our setup much more convenient.) Before that, we’ll need some information. First off, run the arecord -l command. You should see output sort of like this:

**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC295 Analog [ALC295 Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

This tells me that my laptop currently has one recording device plugged in: card 0, device 0, the built-in microphone. With your USB microphones plugged in, you should see more lines starting with card and a number. For the example above, the address is hw:0,0; the first number is the card number, and the second is the device number.

For each microphone, create a file on your desktop and call it microphone<#>.sh, filling in some number for <#>9. In this file, paste the following script.

#!/bin/bash
alsa_in -j name -d hw:1 -c 1 -p 512 &
echo $! > ~/.name.pid

The first line tells Linux to execute the script with the bash shell.

The second line starts a JACK client based on an ALSA device. -j name gives the JACK device a human-readable name. (Use something memorable.) -d hw:1 tells JACK to create the JACK device based on the ALSA device hw:1. Fill in the appropriate device number. -c 1 tells JACK this is a mono device. Use -c 2 for stereo, if you have a stereo mic10. -p 512 controls buffer size for the microphone. 512 is a safe option. Don’t mess with it unless you know what you’re doing. The ampersand tells Linux to run the above program in the background.

The third line records the process ID for the microphone, so we can kill it later if need be. Change name.pid to use the name you used for -j name.

Setting up audio routing: final mix

Onward to the mix. If you look at the output of the aplay -l or arecord -l commands, you should see the ALSA Loopback devices.

card 0: Loopback [Loopback], device 0: Loopback PCM [Loopback PCM]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: Loopback [Loopback], device 1: Loopback PCM [Loopback PCM]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7

Audio played out to a subdevice of playback device hw:Loopback,1 will be available as audio input on the corresponding subdevice of recording device hw:Loopback,0. That is, playing to hw:Loopback,1,0 will result in recordable input on hw:Loopback,0,0. We take advantage of this to record our final mix to Audacity. Make a script called loopback.sh.

#!/bin/bash
alsa_out -j loop -c 3 -d hw:Loopback,1,0 &
echo $! > ~/.loop.pid

The -c 3 option in the second line determines how many channels the loopback device will have. You need one loopback channel for each microphone channel you wish to record separately. Lastly, we’ll want a script to stop all of our audio devices. Make a new script called stopdevices.sh.

#!/bin/bash
kill `cat ~/.name1.pid`
kill `cat ~/.name2.pid`
kill `cat ~/.loop.pid`

Replace .name1.pid and .name2.pid with the filenames from your microphone scripts, adding a line for each microphone. Running this script will stop the JACK ALSA clients, removing your devices.

Managing JACK with QJackCtl

By default, AVLinux starts QJackCtl at startup. It’s a little applet which will show up with the title ‘JACK Audio Connection Kit’. What you want to do is hit the Setup button to open the settings dialog, then change Frames/Period and Periods/Buffer to 256 and 2, respectively. That yields an audio latency of 10.7 milliseconds, which is close enough to real-time for podcasting work.

That’s all you need to do with QJackCtl. You should also, however, pay attention to the numbers listed, at system start, as 0 (0). Those numbers will increase if you experience buffer over- or underruns, called xruns. These occur when JACK is unable to process audio quickly enough to keep up in real time. Try using 256/3 or even 512/2, increasing the values until you get no xruns. (A very small number may be acceptable, but note that xruns will generally be audible in audio output as skips or crackles.)

Ensure QJackCtl is running before starting Ardour. Also, connect your microphones and run your microphone scripts.

Mixing with Ardour

Ardour is a free, open-source digital audio workstation application. It is ridiculously full-featured, and could easily do everything needed for a podcast and more. Since we have an established workflow with Audacity as our final editing tool, we use Ardour as a mixing board. In the Crossbox studio, Ardour takes input from two (or more) microphones whose input arrives through JACK, evens out recording levels, and runs output to a single JACK device corresponding to the ALSA loopback device. We then record the ALSA loopback device, which has a separate channel for each microphone we’re recording11.

How do we set Ardour to do this? It turns out that it’s complicated. Start Ardour and make a new session. (Since we’re using Ardour as a mixing board rather than a recording tool, we’ll reuse this session every time we want to record something.) For each microphone, make a new track. (That’s Shift+Ctrl+N, or Tracks->Add a new track or bus.)

Once you’ve done that, select the ‘Mixer’ button on the top bar. You should see a column for each of your tracks. You can use these to adjust volumes individually; you can also apply plugins or filters to each one.

Open up the Audio Connections window (under the Window menu, or by hitting Alt-P). We’ll want to do three things here.

Connect microphones to tracks

On the left side of the Audio Connections window, select Other as the source. (All devices created with alsa_in and alsa_out show up in the Other tab.) On the bottom of the Audio Connections window, select Ardour Tracks as the destination.

Connect each microphone to its track by clicking on the cell where its row and column intersect. You’ll see a green dot show up. Now the microphones are connected to Ardour tracks, and we don’t need to worry about microphone hardware anymore.

Connect microphone tracks to loopback device

Select Ardour Tracks as the source and Other as the destination. Connect each microphone track to one channel of the loopback device. (If recording in stereo, each microphone track channel needs its own loopback channel. If recording in mono, connect the left and right channels from one microphone to one loopback channel.)

Audio from the microphone tracks will now be routed to the ALSA loopback device, where we can record it with Audacity.

Connect microphone tracks to Ardour monitor bus

Select Ardour Tracks as the source and Ardour Busses as the destination. Connect each microphone to the Master bus. (Whether recording in stereo or mono, connect the left channel of each track to the Master left channel, and the right channel of each track to the Master right channel.)

By default, Ardour connects the Master bus to the system audio output. When you connect your microphone tracks to the Master bus, you should be able to hear yourself in headphones attached to your headphone jack. If you’re connecting more than two sets of headphones, you may need to get yourself an amplifier. This one seems nice enough. If you don’t have 1/4-inch headphones, you can use these converters.

Recording with Audacity

One more piece to the puzzle. Open Audacity. Select ALSA as the sound framework. Select the Loopback: PCM(hw:0,0) device. When recording, audio from one microphone should show up in each Audacity channel.

Adjusting hardware volumes

In AVLinux, you can use the applications Volti or Audio Mixer to provide a GUI to adjust hardware volumes. Volti is a tray volume control; right-click on it to get a view of the mixer. In either tool, to adjust the input volume of a microphone, select it (either in the dropdown or the tab bar) and adjust its mic level. To adjust the monitor output volume, adjust the output volume for your built-in soundcard. To adjust the recording output volume, adjust the volumes for the Loopback device.

Podcast recording shopping list

And that’s that. You now have all the information you need to replicate our studio setup. Please feel free to leave questions in the comments; I’m not much good at this sort of thing, but I may be able to point you to someone who can help you out. Below, I’ve included a shopping list for your perusal.

Buy one

Per person (non-microphone)

Per person (condensers)

Per person (XLR dynamic mics)

XLR connections are the industry standard for microphones. If you’re planning to expand to a true mixing board, you’re probably best off getting XLR mics so you don’t have to buy new ones when you make the switch. On the other hand, you’ll need an XLR-to-USB interface for each microphone to connect it to your computer, which pushes the price up somewhat.

Per person (USB dynamic mics)

If, like the Crossbox, you’re unlikely ever to proceed past two hosts with USB microphones, you should look into USB dynamic microphones. Like the USB condenser microphones above, they plug directly into a computer, doing the digitization internally. They are, however, less future-proof.

Cost breakdown

  • USB dynamic microphone: $30
  • Shock mount: $10
  • Mic boom: $9
  • Pop filter: $8
  • Total: $57

  1. Okay, my. 
  2. That’s me. 
  3. We, however, clamp our mic booms to spare chairs at our broadcast table. This means we can bump the table without jostling the mount, which makes for much cleaner recordings given our typical amount of movement. 
  4. P, B, T, S, Z, etc. 
  5. I realize this pushes the price well above $70 per person, but I figure it’s reasonable to assume you probably have a laptop of acceptable specifications. 
  6. Yes, it’s possible to do low-latency monitoring and USB microphone resampling/synchronization with Windows and ASIO, or your Linux distribution of choice with a low-latency kernel, but (at least in the latter case) why on earth would you want to? 
  7. If this paragraph made no sense to you, try this how-to guide. In the partitioning step, you may have to choose your current partition and select ‘resize’, shrinking it to make a little bit of room for AV Linux. 
  8. For the uninitiated, it means that JACK is always able to take CPU time whenever it needs it with no waiting. 
  9. Or, if you like, call it something else. Makes no difference to me. 
  10. The recommended CAD U37 is a mono mic, but has stereo output. We run it with mono input. 
  11. The astute reader will note that this may impose a limit on the number of simultaneous channels you can record. That reader, being more astute than me, could probably tell you how many channels that is. I figure it’s at least eight, since ALSA supports 7.1 output. If you need more than eight, you should probably look into recording directly in Ardour. 

USASOC’s URG-I for the M4

Thanks to SHOT Show and the good folks at Brownells, we can see what the US Army’s Special Operations Command is doing to improve their M4s. Let’s take a look. First, the product page.

Now, there are a bunch of things to note here. The upper receiver is unchanged. Still has that forward assist and that dust cover. The 14.5″ barrel is made by Daniel Defense, who have some excellent cold hammer forges for such things. The barrel has some unspecified improvements to work better with M855A1 ammunition, which has an exposed, hardened steel tip. I would expect these changes to be to the geometry of the feed ramp in the barrel extension, but I can’t confirm this yet. And I don’t know if there are other changes. The rest of the barrel is pretty boring: 1:7 twist rate, that government profile,1 and a midlength gas system. The midlength gas system is a noticeable difference, being somewhat longer than the standard carbine length. A midlength gas system is somewhat softer recoiling, and probably leads to improved reliability when using a suppressor (which increases the gas pressure in the system). Note that they did not specify the medium-weight “Socom” profile barrel. Overkill for expected uses? Not proven? Weight-conscious? I’m honestly not sure.

The handguard is Geissele’s Mk 16, and is 13″ long and free floated. It has a picatinny rail at the top and Mlok slots all around2. This is a big improvement over the usual plastic handguard or the KAC RAS system, which has picatinny rails and isn’t free floated. Plus a longer rail means more room for one’s hand as well as accessories. The older handguards had room for lights and lasers or your hand, but not both. Geissele handguards are very nice, and have a well-designed attachment system.

The full length handguard means the standard triangular front sight block has to go. It’s been replaced by the Geissele Super Gas Block, which is low profile, and held in place by two setscrews and a taper pin. I like pinned gasblocks. They’re sturdier. Good choice here.

Geissele also makes the charging handle. It’s bigger, sturdier, and better suited to just grabbing or pulling at one side, like lots of modern guys do. It’s a fine choice.

The other difference in play is the muzzle device. The Brownells version (for civvies) has the Surefire S3F, which is a three-pronged flash hider that also serves as an adapter for the quick-detach mechanism used in Surefire’s silencers. The military is probably getting the S4F (with four prongs). I don’t know why the difference there. It’s still a suppressor adapter, and remember, Surefire’s silencers won the SOCOM testing.

Overall, I’d say it’s a pretty solid set of improvements, and results in a gun better than the previous PIP proposal. I would like to see more if it were up to me, namely a better barrel profile and some bolt carrier group improvements. Both Lewis Machine and Tool and Knights Armament have some available improvements there, and I’d like to see some evaluations. Especially if suppressors are going to be used a lot.

Will I buy one? No. I don’t have much use for factory uppers these days. Building my own isn’t hard, and then I get to make all of the parts choices, and get things suited for me and my uses. And I don’t do clone builds. But it’s a solid upper if you’re in the market for one.

Finally, let’s do a quick weight comparison with the upper for a standard M4. The lower is separate, and needs no changing provided it has the safe/semi-/full-auto trigger group. Some of these weights are approximate because of what is and isn’t available on the market yet, but I wouldn’t expect them to change too much. I’ll update these as I get better numbers.

Part            | M4                       | Weight (lbs) | URG-I                  | Weight (lbs)
Barrel          | 14.5″ gov’t carbine gas  | 1.6          | 14.5″ gov’t mid gas    | 1.5
Upper receiver  | A3                       | 0.6          | A3                     | 0.6
Handguard       | double shield            | 0.72         | Geissele Mk 16 (13″)   | 0.92
Gas block       | FSB                      | 0.33         | Geissele SGB           | 0.1
Gas tube        | carbine                  | 0.04         | midlength              | 0.05
BCG             | standard                 | 0.72         | standard               | 0.72
Muzzle device   | A2 birdcage              | 0.14         | Surefire SF4P          | 0.28
Charging handle | standard                 | 0.08         | Geissele SCH           | 0.09
TOTAL           |                          | 4.23         |                        | 4.26

Notes: Upper receiver weight includes the dust cover and forward assist. Listed handguard weights include all mounting hardware. The Mk. 14 only has Mlok slots at 3:00, 6:00, and 9:00.

Not bad. Despite the stupid government profile barrel, only a little weight was gained, at least according to my back-of-the-envelope calculations, and that’s a win: more capability without a lot more weight.

Edited 09/12/18 to use correct weights for the Daniel Defense 14.5″ CHF Government profile midlength gas barrel, Geissele Mk 16 handguard, and Surefire SF4P flash hider.


  1. Which I hate. A lot. It’s profoundly stupid, but that’s probably why it’s called the “government” profile. I guess we can’t expect them to fix everything at once. 
  2. “All around” being 1:30, 3:00, 4:30, 6:00, 7:30, 9:00, and 10:30. Also, Mlok is lighter than picatinny rails, woo. And some study found it tougher than the rival keymod. 

On Glock Safeties

A few weeks ago, Fishbreath and I were looking at another striker-fired pistol1 being found to be not drop safe. Fishbreath commented that he’d really like to see these barrel-up-at-30-degrees drop tests done to the Glock 43 and the M&P Shield. I promptly obliged him with a video. Glocks have three safeties designed to work together to prevent firing when dropped at any angle. Let’s take a look at how they work. An understanding of the trigger mechanism and the safeties it employs is also useful when attempting to modify that trigger system.