With primary season finally kicking off in earnest, I thought I should give my thoughts on the state of the race for the GOP.
The Contenders
Trump: The ongoing surprise at his sticking power misses a few facts. Trump’s appeal comes from the center and the disillusioned voter, not a broad part of the conservative base. (See Cruz for a note on that.) The center and the disillusioned are generally the poorly informed, which jibes with the sort of person who might support Trump—the sort who doesn’t realize that Trump holds different positions almost daily, or positions that would never actually work. Unfortunately, since most people are poorly informed, Trump’s strategy has been working so far. Fortunately, he gets enough news coverage that even the worst-informed of primary voters are starting to understand that Trump is style, not substance. May win South Carolina, but expect it to be closer than the polls show.
Cruz: If I were handicapping, I’d give Cruz about 40%. His ground game is superb, the best of any GOP candidate, which he parlayed into an upset win in Iowa, and a solid third place in New Hampshire, considering he spent about zero dollars. Questions about his values seem misplaced to me: stories about his Iowa operation remark on how he let his volunteers go off-script when canvassing, which fits the conservative ideal of bottom-up organization. Concerns about his likability are overblown. Not every candidate has to be an inspirational orator. Has an outside chance to win South Carolina: most polls show him well behind, but several leaked polls from candidate campaigns in the last few weeks have put him much closer than major polls would indicate.
The Possible Surprises
Rubio: The establishment’s golden child is underperforming expectations; his Marco Robot impression in the New Hampshire debate didn’t help anything. Light on substance in the same way that Trump is, without the populist shiny to draw in the jackdaw voters. Has the benefit of money and Washington backing, which will keep him in the race, and maybe even net him a few top-3 finishes. The most Obama-like of the Republican candidates in terms of oratory. He’ll eventually peter out, and his supporters will lean Cruz: neither Trump nor Cruz is inspirational in the same way, but Cruz lines up a little better with the thoughtful conservative values Rubio purports to represent.
The Death Watch
Jeb!: Why anyone thought another Bush running would work is beyond me. (And I say that as someone who thinks history will be significantly kinder to W than the media of his time were.) He seems a little confused by the lack of support, but name recognition is not the same thing as preference. Jeb!’s deep pockets, and the deep pockets of his supporters, will keep him around long past his use-by date, but he probably won’t climb above 15% in any primary. The SEC primaries, with their proportional delegate awards with a minimum threshold, will probably knock him out of contention altogether.
Carson: It grieves me that we see this side of him. One of the first biographies I ever read was a short, middle-school-level take on him. I still think he has an amazing story of faith, a self-reliance informed by that faith, and a climb from obscurity to preeminence in his field. I don’t think he has ‘president’ in him.
Kasich: No matter how much a certain set of centrist Republican voters want this to happen, it isn’t happening. He’s burned too many bridges with the base, and seems to be running a general election campaign in the primary. Maddeningly, his record is solidly conservative, and I suspect he wouldn’t be all that bad a choice, but he seems set on running as the Democrat’s preferred Republican primary candidate. Unfortunately for him, most Republican primary voters are Republican, and not buying it.
Pondering my AR builds, both extant and forthcoming, as well as modern combat trends has given me some ideas on how one ought to kit out an infantry force. A couple of them aren’t very revolutionary, and one is pretty different. We’ll start with the least controversial, and go on towards things that will require a bit more arguing.
Premise 1: Issue body armor all around. This one’s a pretty easy sell. Frontline troops have been widely issued body armor since (at least) the Vietnam War. That body armor was a flak jacket, which is designed to provide protection from fragmentation weapons. Body armor saves lives, which protects the investment in soldier training, and looks better to the civilians at home. The trick with body armor is to balance weight and protection, which will be the focus of another article. Don’t forget to include load-bearing equipment in the body armor system. The vest already has to be designed to distribute the weight of the armor, and PALS webbing (or similar) saves the soldier from dealing with yet another wearable. A separate load-bearing rig is not only awkward, but also makes it harder for medics to get to an injured soldier to provide care.
Premise 2: Every longarm should have an optic. Once again, this one’s pretty simple. Optics are way better than iron sights. The trick has always been getting them rugged enough and cheap enough to issue generally, and we’ve been nailing that since the 90s (maybe earlier). With modular Picatinny rail mounts, we needn’t tie the choice of optic to the weapon’s design. There are a lot of options here, and we’ll have a future article devoted to the choice. In brief though, there’s the red dot sight, the low-magnification fixed-power scope, and the low-magnification variable-power scope. Magnification gives the ability to identify targets at range if they’re hiding (maybe insurgents in a crowd, or maybe soldiers in the brush), but the dot is simpler and faster to use. A well-designed low-power variable scope gives the best of both worlds, but the variable power adds weight and complexity, and they’re not as rugged.
Premise 3: Pistols suck. Therefore, issue carbines. This one’s pretty easy to argue. Happily, it also hurts the feelings of idiots. A carbine is a much more lethal weapon than a pistol: it shoots a more powerful round, holds more ammo, and is easier to shoot well. Carbines rock. Issuing carbines generally to officers has the fringe benefit of making them stand out less in a sniper’s scope. Pistols are historically a badge of authority, or a ‘Shoot Me’ indicator, depending on which side of the scope you’re on. So there’s a benefit there. The issue, of course, is that carbines are bigger and heavier than pistols. In a highly mechanized force, though, this isn’t a huge problem, since one’s base vehicle can carry that carbine backup weapon. Even light infantry forces can go this route: the US Marines issue M4s to just about everybody, including officers as high as lieutenant colonel. We should follow suit. About the only role I can think of that can’t go this route is fighter pilot. Maybe if I break the weapon down I can get it into a survival kit.
Premise 4: Every carbine, rifle, and man-portable machine gun should have a suppressor. Okay, here’s the one that’s a little out there, mostly because I don’t have a conventional real-world force to lean on. SOCOM does this, but they’re all special forces guys. So why would we do it generally? Like optics in the 90s, we’ve got suppressors that are mature enough to minimize the disadvantages. Modern suppressors are reasonably lightweight and quite durable. The Surefire SOCOM RC2 (5.56) suppressors, for example, weigh just over a pound, and the Surefire SOCOM-556MG suppressors weigh just under a pound and a half. Great! But, as well-educated firearms enthusiasts, we know that suppressors don’t actually silence firearms like you see in lame action movies. That’s fine. We get many benefits from the suppressor anyway, even if it can’t turn a bunch of grunts into ninjas.
The first and most obvious benefit is that a suppressed gun is easier on one’s hearing. This is most noticeable indoors, and is why so many special forces and SWAT guys run suppressors. The suppressor might be thought of as taking the edge off of a gunshot, and this is great if you train a lot indoors, or find yourself indoors. It takes the edge off outside too, which is helpful when you and your buddies are engaging some enemy scumbags. Suppressors also eliminate flash. This brings two more advantages: first, this helps mask a soldier’s position. There’s no big obvious flash to pinpoint his position. Second, in a low-light setting where a soldier might be using night vision equipment, a suppressor prevents flash from washing out the light amplification systems in the goggles. Finally, that ‘taking the edge off’ of the report of weapons also helps obscure the soldier and make his position less obvious in a quick engagement or ambush. It’s not about completely eliminating sound, it’s just about managing it and making it harder to track.
There we go. Four ways to maximize the effectiveness of soldiers. And one of them is even pretty aggressive and forward-looking.
Edit to add: Since it’s come up a few times in the comments, and I’d hate to leave conclusions there to fester, let’s talk prices and make some comparisons. Currently, SOCOM has tested and approved Surefire suppressors for deployment in the field. The MSRP of one of these models is $1,375.00. Let’s look at the MSRP of some other pieces of equipment commonly issued. The USMC’s standard issue optic has been the Trijicon ACOG. The current model of choice is the TA31RCO-A4CP which has an MSRP of $1,724.00. Aimpoint doesn’t list MSRPs on their website, but their Comp M4, used by the US Army, the Norwegian Army, and a whole bunch of others, seems to have an MSRP of about $850.00 or so. Oh, and while not being sold to civilians, the price of one of the super awesome GPNVG-18 Panoramic Night Vision goggle sets used in the Bin Laden raid is about $65,000.00. All prices given in US Dollars and are current to the best of my knowledge as of April 10, 2018.
The tank platoon is the basic unit of armored organization. How you structure it will shape tactics and has a direct bearing on costs. One might think that tank tactics are an extension of infantry tactics. And that a tank platoon should have subelements that the platoon leader can use to perform fire and maneuver organically, i.e. without attached elements, just like an infantry platoon does. To facilitate this, the traditional tank platoon consists of five tanks: two maneuver elements of two tanks and one tank for the leader. This formation works. It was the standard formation for both the US Army and the Wehrmacht Heer in World War 2. If it’s good enough for Heinz Guderian and George Patton, it’s good enough for me.
Only kidding. That would make for a very short post. Almost invariably, the girly-men in accounting start objecting as tanks get pricey, and cut the leader-tank, reducing the platoon to four. It happened to the German heavy tank platoons near the end of the Second World War, which only had four Tigers instead of five. When the US Army moved from the old M60 to the big, expensive M1, it too lost the leader-tank. And for once the bean counters appear to be right. There doesn’t appear to be much lost effectiveness in the four-tank platoon. Certainly it wasn’t an impediment for the Tiger platoons, and the US Army doesn’t appear to complain overmuch. Fine. So, four tanks per platoon. The platoon is cheaper that way. Don’t tell the bean counters that I agree with them though. They’ll just demand more cuts.
The clever reader will no doubt note that I haven’t mentioned the Russians yet. They have a three-tank platoon, and have used it since the Great Patriotic War. Three is a natural alternative to four, and was easier for novice Russian tankers to command, especially as they lacked radios. Of course, our tanks have radios. In the air, the finger-four formation has proven superior to the three-plane vic formation. But the Russians haven’t complained about their three-tank platoons, despite spending an awful lot of time fighting Germans organized into four- and five-tank platoons. The Russians do use a finger-four type formation in the air presently. Of course, tanks are not fighter planes, and we should beware too many comparisons without adequate backing.
Interestingly, the army with the most post-World War II tank combat experience, the Israel Defense Forces, has moved from the old Western-standard five-tank platoon to the Russian-standard three-tank platoon, and is quite happy with the change. The Americans, British, and Germans have all studied the three-tank platoon, and the British and Germans have both taken steps toward adopting it. Generals Balck and von Mellenthin, formerly of the Wehrmacht and with extensive experience on the Eastern Front, were also big fans of the three-tank platoon for being easier to command. They have written somewhat extensively on the subject, and used it to good effect in NATO war games. This is a trend, and the trend is your friend, as several of my old professors used to say.
One might ask “Why?” More is usually better, not worse. Why should tank platoons follow the example of taxes and not money? Fascinatingly, the US Army may have the answer, even though it presently sticks with the four-tank platoon. In simulated combat studies in both the late seventies and the early 2000s, the three-tank platoon is as good as or better than the four- or five-tank platoon by any reasonable metric you care to name, and these benefits seem to derive from the fact that it is easier to maneuver and direct the fire of a three-tank platoon. It’s about as survivable, and is generally able to more effectively kill enemy armor. The exception comes in urban areas, where the effectiveness is not statistically different. As a bonus, it appeases the bean counters. And it is easier for a young lieutenant to command, even if that lieutenant has modern radios.
Thus there shall be three tanks in a platoon, and the number of tanks in a tank platoon shall be three. Four is right out.
In our extremely belated Christmas edition, we visit John’s favorite decade: the 80s. Join us as we clean up the mean streets of New Orleans, work out just how bad American anti-ship missiles are, and insult a very wealthy man for fifteen minutes.
I’ve released a new version of OpenTafl. Among several other changes, there are two I would like to highlight.
First off, the groundwork for OpenTafl Notation-style engines is complete: there is code in OpenTafl which can produce and read in OTN rules records, code which can produce and read in OTN position records, and code which can produce OTN move records. This is a major milestone: producing and reading position and rules records is tricky, as is producing move records. (Move records should be fairly easy to read in, compared to the others; they encode a much smaller amount of critical data.) Note that the notation spec has changed somewhat, based on lessons I’ve learned over the course of my weekend of implementation. If you’ve done any work based on it yourself, you’ll want to go over the spec again. Hopefully, it won’t change much going forward, although the specification for OpenTafl-compatible engines is likely to change as I work on implementing that in the next month or two. You can see some of the work in version 0.1.6.1b: each position should provide its OTN position record, and games against the AI will report the AI’s thinking using OTN move records.
Second, OpenTafl’s memory usage has once again been dramatically reduced, to the point where it should be comfortably playable to the maximum depth at which it’s feasibly playable (currently depth 6-7 for brandub, depending on speed, and depth 4-5 for other games, depending on size and complexity) without running into memory issues, on any reasonable computer. I intend to go into a little more depth on how I accomplished this, but for the moment, you’ll just have to take my word for it.
You can find the full change logs at the OpenTafl site.
While I’m primarily an AR guy, and think that’s the best overall choice right now for the vast majority of rifle-y things that a guy might do, I’m also fond of old service rifles. They tell stories. Today we’re going to look at my oldest, a Mauser Karabiner 98k.
The Karabiner 98k, or Kar 98k, was a development of the Gewehr 98, by way of the Karabiner 98b. The Kar 98b wasn’t really a carbine at all, just a G 98 with better sights. It was still a long-barreled rifle. But after World War I, the Germans finally got to figuring that maybe they should standardize on one, shortish carbine for everyone who needed a rifle, rather than worry about infantry rifles and cavalry carbines. So, in 1934, they made what was to be the last in the long line of Mauser 98 designs, the Kar 98 kurz.1 In addition to the obviously shorter length, it also has a turned-down bolt handle, which makes mounting optics easier.
The Kar 98k has that wonderful, controlled-feed action that Mauser is famous for, and that so many have copied. It holds five rounds of 7.92x57mm ammunition, and proved to be a reliable and accurate weapon. It was the standard service rifle for the Wehrmacht Heer during the Second World War, and also saw use by the Soviets and many smaller powers after the war. It was also widely copied.
Let’s look at mine. It was made in 1938 in Suhl by J.P. Sauer und Sohn. Because it was made before production had fully ramped up, Sauer was using some older parts. For this reason, the receiver bears both Weimar Waffenamt proof marks and Third Reich Waffenamt proof marks, which is kinda cool. Based on the age, we can conclude that this rifle saw plenty of service. Several parts are marked by an electropen with a different serial number than what is stamped on the gun. From this, we can conclude that this rifle was on the Eastern Front, was captured by the Soviets, and spent time reissued and in their arsenals. It has an X marking on the receiver that indicates it was eventually mustered out of Red Army service, and it eventually made its way to America and then to me.
Condition-wise, the rifle is in solid, but not excellent, shape. The Soviet arsenals mixed up a few of the smaller parts, which don’t have serials matching the rest of the gun. I’m happier that way, because it means the price is lower. The wood and finish show some wear, but are generally in good condition, and the bore doesn’t show too much wear either. There isn’t any pitting, and the grooves aren’t too worn down. When I got it, it was missing a few incidentals, which I decided to pick up: a surplus, beat-up-looking sword bayonet of the appropriate late-thirties era, with an oversized 9.75-inch blade, a cleaning rod, and a new-production sling.
For all its age, my Mauser shoots really well. The action is smooth, and the trigger is pretty good for a service rifle. It’s more or less two-stage, and is somewhat heavy, but not gritty or creepy. The sights are okay. If you take your time and line them up right, the rifle is very accurate. They’re a simple notch and wedge-shaped post, though, so they’re neither especially fast nor especially precise. Hardly my choice, but I didn’t design this. As is fitting and proper, the sight has range markings out to a hopelessly optimistic two kilometers. I haven’t tried to hit anything at that range.
Recoil isn’t terrible. It’s certainly not a .22, but it’s not abusive the way a Mosin is. When I’ve brought it out for friends, I’ve gotten neither complaints nor habitual flinches, which is a good endorsement. The bolt isn’t as fast as a Lee Enfield, but it has never given me trouble.
My Mauser is a really nifty piece of history. It’s nearly eighty years old, but it still looks and shoots great. It’s a real treat to have and to run some rounds through.
If only it could talk.
1.) Short. Because it’s actually a carbine-length carbine as opposed to a longer, rifle-length carbine.
If you’re a software person, read the post. Linus Åkesson, this year’s winner, is brilliant, and I want you to appreciate just how much.
If you aren’t, stick around, because you should appreciate Mr. Åkesson’s brilliance, too. I’m going to explain this in layman’s terms. The Underhanded C Contest is an annual competition where software engineers and computer scientists from all over the world compete to write the most innocent-looking yet fiendishly guilty C programs possible. This year’s challenge is as follows:
There are two countries, which have signed a nuclear disarmament treaty. Each wants to carry out inspections to ensure that the other side is decommissioning real bombs, and not fakes, but neither side wants to expose the actual gamma ray spectra produced by their real bombs—that’s sensitive information on their nuclear programs. So, they have need of a tool which takes an observed spectrum and compares it to a reference spectrum, returning either yes (if the observed spectrum matches the reference spectrum, and is therefore a bomb), or no (if that is not so). Here’s the underhanded bit: one country wants to cheat. They want to keep more bombs than the treaty permits, but they have to fool the inspectors—that is, they have to figure out a way to show inspectors fake decommissioned bombs, which nevertheless yield a ‘yes’ when inspected with the tool. They can’t just fake the reading, because the reading has two parts: first, the placement of the peaks in the spectrum and their relative sizes, which only come from the specific mixes of fissile materials used in bombs; and second, the overall radioactive energy level of the bomb, which only comes from having a large amount of radiation coming from the bomb. It’s easy to replicate the peaks with a small amount of fissile material, and easy to replicate the total energy with an x-ray source, but impossible to combine them to look like a bomb without being a bomb.
We need to establish some background information about how spectra appear to digital tools. The way you collect a spectrum is to point a detector at a radioactive source of some sort. Energetic particles strike the detector face. An event is recorded, and deposited into a ‘bin’ based on the energy of the event. The collection of bins composes the spectrum. So, a spectrum, to a computer, looks like this: energies between 0 and 100, 20 events; energies between 101 and 200, 40 events; energies between 201 and 300, 153 events; and so on. In C, the easiest way to represent this is an array—that is, a region of computer memory which holds a list of numbers all in a row. Since, to a computer, numbers are a fixed number of bits (binary digits), the numbers don’t need to have any space between them; number N+1 is a fixed length from the start of number N.
We also need to talk briefly about floating point numbers. Floating point numbers are how computers represent decimals (generally speaking). Since there are an infinite number of real numbers between any two real numbers, a computer obviously can’t represent them all precisely. Instead, it uses scientific notation, akin to this example: 1.2 × 10^2, or 120. (In binary, obviously, but that’s not important yet.) We call the 1.2 part the mantissa, and the 2 the exponent. (For a computer, the base of the exponent is always two, because of binary.) Moreover, they are arranged like this: sign bit, exponent bits (highest place to lowest place), mantissa bits (highest place to lowest place), and the mantissa is always assumed to start with one, unless the exponent is zero. (Instead of writing 1.2, we just write .2, and remember that we usually put a one before the decimal point.)
Finally, we need to discuss how one might want to compare two spectra. First, we’ll introduce a quantity called the dot product: if you have two lists of numbers of the same length, and go number-by-number, multiplying the first by the first, the second by the second, the third by the third, and so on, then add all of those products, the number you get is called the dot product. If you take the dot product of a list of numbers with itself, then take the square root of the dot product, you get the magnitude. A list of numbers, each entry in which has been divided by the list’s magnitude, is said to be normalized. Finally, if you take the dot product of two normalized lists of numbers, you get a number between 0 and 1 (since spectrum bins can’t be negative). If you interpret the lists of numbers as multidimensional vectors (remember your geometry?) the number you just got is the cosine of the angle between them. For our purposes, it’s a number called the spectral contrast angle. If it’s 1, the two vectors are completely identical. If it’s 0, they’re completely different. So, when we get our two spectra as lists of numbers, we want to calculate the spectral contrast angle. If it’s sufficiently similar, then the spectra match.
So, Mr. Åkesson’s program takes two lists of numbers, representing spectra. It runs some preprocessing steps: smoothing the data, taking its first-order derivative to remove any constant-slope noise1, and smoothing the derivative to obtain a spectral fingerprint: a representation which contains the locations and relative heights of the peaks. Before going any further, it checks to see whether the total energy in the spectrum is above a threshold—a decommissioned bomb will still have a small signature, because of the remains of the fissile materials inside—then hands off the data to another source code file, which handles normalizing the spectra and doing the dot product stuff.
The underhandedness comes in that handoff. The data is defined as a list of numbers of type float_t. float_t is, in turn, defined in a header file (a file which holds definitions) as a double, a 64-bit (i.e. double-precision) floating point number, of which one bit is the sign of the number, 11 bits are the exponent, and the remaining bits are the mantissa. It is important to note a few things about doubles and Mr. Åkesson’s code: first, data in a double-precision number is left-justified. In cases where a given number can be expressed with sufficient precision, the bits on the right-hand side of the double’s position in memory are set to 0, unused. Second, the values in this particular program are generally fairly small—no more than a few thousand. Third, the preprocessing has the effect of making the numbers smaller: a derivative, as a rate of change, will always be smaller than the total value, except when it changes instantly. So, after the preprocessing, you have a list of double-precision floating point numbers, whose values range from about 0 to 100. Representing these numbers is simple, and doesn’t require using very much of the 64-bit length of the number.
The list is passed to the second file as the argument to a function, still a list of numbers of type float_t. The thing is, in this file, float_t means something else. This file does not include the definitions header from the first file, because it doesn’t need it. Instead, it includes a standard math library header file, and the math library header file defines float_t as a 32-bit (i.e. single-precision) floating point number, of which one bit is a sign bit, eight bits are the exponent, and the remaining bits are the mantissa. So, instead of a list of 64-bit numbers of length n, you have a list of 32-bit numbers of length 2n—but you’ve told the code you have a list of size n, so the code only looks at n numbers. That is, the code only looks at the first half of the spectrum. This lets you evade the total energy threshold without having to have a bunch of radioactive material in the bomb: place an x-ray source in your fake bomb. The total energy goes up so that it passes the threshold, and it won’t be considered in the peak-matching code, because it’s on the right side (the second half, the high-energy side) of the spectrum.
You still have to get the peak matching right, but the underhandedness has a second, subtler component which will help. It has to do with what happens when you convert a 64-bit floating point number into a 32-bit floating point number. Remember that, when a floating-point number doesn’t need its full length to express a number, all of the significant bits are on the left side of the number. Therefore, representing numbers between 1 and 100, especially whole numbers (which the preprocessing helps to ensure) doesn’t take anywhere near 64 bits. It doesn’t even take 32 bits. So, when you turn an array of 64-bit numbers into an array of 32-bit numbers, you split each 64-bit number in half. The second half is all zero bits, and ends up being zero. Since this happens to both the reference and the observed spectra, every other number in both arrays ends up being zero, and zero is the same as zero, so we don’t need to look at them any more.
The first half is the interesting half. The sign bit works exactly the same. The exponent, though, gets shortened from 11 bits to 8 bits, and the three bits that get left off are the lowest bits. Dropping the low three bits shifts every remaining bit down three places, which is the same as dividing the exponent by eight2. Those three bits which were the end of the exponent become the beginning of the mantissa. This is fine: it preserves the relative sizes of numbers. (The least significant digits of the exponent are more significant than the most significant bits of the mantissa; you’ll just have to trust me here, or do some examples in base 10 yourself, following the rules I gave for floating point numbers.)
So what we’ve done is dramatically reduced the sizes of each energy level bin in the spectrum. Furthermore, we’ve also compressed their dynamic range: the difference between the lowest peaks and the highest peaks is much smaller, and so they resemble each other much more closely. The recipe for a fake bomb, then, is to take a small amount of fissile material and a powerful x-ray source, and put them in the casing. The x-ray source takes care of the total energy level, and the dynamic range compression from the 64-bit to 32-bit switch takes care of the low-energy peaks.
Congratulations! You just convinced the other country that you’re about to decommission a real bomb, when in actuality, it’s a fake.
Congratulations also to Linus Åkesson, who wrote such a clever piece of deception into a mere 66 lines, and did so with such panache I could not resist writing on it.
1. This appears in real-world spectra, because there’s naturally a lot more low-energy noise than high-energy noise. In support of this assertion, I offer the evidence that you are not suffering from radiation sickness right now. (Probably.)
2. 1000 is 8, in binary. (From right to left, you have the 1s-place, the 2s-place, the 4s-place, and the 8s-place.) Remove three zeros, and you have a 1 in the 1s place: 1.
Submarines have been a serious threat to shipping since the Great War. Lately, the Russians have been putting subs to sea like they did in the Cold War, ready to menace the shipping lanes once more. And submarines are more deadly than ever, with modern torpedoes like the Mk. 48 ADCAP having a range of upwards of twenty-seven nautical miles. By detonating under the keel, they can split many ships in half. And, unlike antiship missiles, there aren’t many good ways to deal with torpedoes. You’re basically limited to a few decoy systems. So what’s a surface ship to do? Why, attack the sub, of course. This usually involves helicopters that can drop sonobuoys and dip sonars. They can also drop torpedoes if they find a sub.
What if the surface ship needs to engage a submarine directly? Suppose the helicopter isn’t nearby, or is out of torpedoes, or the surface ship detected the sub with her own sensors? Modern lightweight (read: anti-submarine) torpedoes have a range of anywhere from about five to about twelve nautical miles, depending on what speed setting they’re using. That’s a bit less than half of what the submarine’s torpedoes can do, giving him the shot long before you have it. What other options do we have for engaging?
We could use a rocket to get the torpedo closer before we drop it. If you have Mark 41 VLS cells, you could use the RUM-139 VL-ASROC, which puts a Mk. 46 torpedo about fifteen nautical miles from the launching ship. There are versions available with the more recent Mk. 54 lightweight torpedo, which has a much better seeker. Depending on speed settings, this gives us very nearly the range that the opposing sub has with his torpedo. Detente.
For those of you who’ve forgotten your high school French, or you uncultured swine who never had any, detente is a French word that means “you both get to die”. Yay. Personally, I’d rather not die, and would love to have the range for the first shot given a good sonobuoy contact and no torpedo-equipped helicopters nearby. For this, we come to another casualty of dwindling budgets in the ’90s, the RUM-125B Sea Lance.1
The Sea Lance has a bigger motor and a better inertial navigation system, and it still fits in a regular Mk. 41 VLS cell. The RUM-125B was originally specced around the Mark 50 lightweight torpedo, but an enterprising designer could fit most any NATO lightweight torpedo in, since they’re all about the same size. The RUM-125B had a range of thirty-five nautical miles, so if you see him first, you can shoot him first, helicopters or no. A weapon like this makes the surface ship a more active participant in the search for subs, rather than just a mothership providing fuel.
But wait, there’s more. You may be wondering why the designation started with B. It didn’t. B is just the normal, conventional-warhead model. Throw a torpedo, have it engage. When you really, really want range, when Ivan’s sub just absolutely, positively has got to die, and when you want to really piss off Greenpeace, there’s the RUM-125A. This missile variant can lob a 200 kiloton nuclear depth bomb out to a range of one hundred nautical miles. So you’re probably going to be safe from that blast. Maybe. It’s not very accurate, but then, it doesn’t have to be. This is the mother of all depth charges. Guaranteed to crush hulls, kill marine life, and cause an international incident, or your money back!
That’s not all. There were variants (designated UUM-125A and UUM-125B) that could be launched from submarines. These would get launched from the torpedo tubes in a buoyant capsule that would float to the surface and then launch the missile. It’s a great way to give attack subs a long range punch if they’re aware of a sub threat. Or just want to nuke the whales.
So go ahead, Captain Viktor Tupolev. Push your pissant Alfa-class boat as hard as you want. You’ll only die overheated.
Now, if only Sea Lance would work on those pesky land whales on Twitter.
Verdict: Approved by the Borgundy War Department Procurement Board
1.) Yes, this is a lower designation number. Trust me, it’s more advanced. Or don’t. More for me.
2.) This post is all in nautical miles, because we’re talking about things at sea. If you’re a communist, and prefer metric units, multiply all range figures above by 1.85.
A few days ago, Google’s DeepMind announced that they reached the most significant milestone in pure AI since the solving of checkers in 2007: a Go AI called AlphaGo that’s competitive with human professionals. (Strictly speaking, computers could play Go before AlphaGo, but they were ranked around midlevel amateurs at their best.) The whole paper is available here.
For those of you who aren’t as much nerds about AI as I am, here’s a quick primer on why this was thought to be a very hard problem (so hard that the people involved in the prior state of the art thought it was at least a decade away):
In the most theoretical conception, game-playing computers for perfect-information zero-sum games (most abstract board games: those with no hidden state, where the players work toward directly opposed objectives—not entirely accurate, but more readable than perfect accuracy allows for) are as simple as exploring every possible move and every possible countermove from the current position to the end of the game. Assuming perfect play on both sides, every result will be either a win, a loss, or a draw—that is, abstract strategy games are perfectly deterministic. (Checkers isn’t completely solved, but we do know now, as a result of the work from 2007, that perfect play on both sides from the start always yields a draw.)
This is, however, a massively impractical way to actually play a game, because the number of positions to explore rapidly turns intractable. Speed-focused modern chess engines search on the order of millions of nodes (in the game tree) per second, but searching a chess position exhaustively to a depth of 7 requires a search of about 60 billion nodes. Traditional games like chess and checkers yielded to some optimizations on this process:
It’s easy to look at a chess or checkers board and tell how well the game is going for a certain player: in both games, it comes down primarily to the balance of pieces. (The last I read, advantages in position are worth a pawn or two in a game of chess if they’re all going your way; the queen is worth nine pawns.) So, you don’t need to explore all the way to the bottom of the game tree to get an idea of which directions are promising. You just explore as deep as you can and evaluate the position.
If you have a good evaluation function (that is, one that generally only evaluates a position as better when it’s closer to winning), you can do some easy pruning: when you come across a game state that’s known to be worse than the best option you’ve already found elsewhere, you just don’t explore any further in that direction. It works even better if you can assess which moves are likely to be good and explore those first: if you try the best move first, every subsequent move is going to turn out to be worse, and you’ll save a bunch of time.
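That pruning idea is alpha-beta search, and it fits in a few lines. A minimal sketch, with a hand-built toy tree standing in for a real game:

```python
# A minimal alpha-beta search over a toy game tree: internal nodes are
# lists of children, leaves are scores from the maximizing player's
# point of view.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, int):              # leaf: just evaluate the position
        return node
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:              # opponent already has a better
                break                      # option elsewhere: stop exploring
        return best
    else:
        best = float('inf')
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# A three-ply toy tree; its minimax value is 6, and the cutoff test
# above skips several subtrees on the way there.
tree = [[[5, 6], [7, 4, 5]], [[3]], [[6], [6, 9]]]
print(alphabeta(tree, float('-inf'), float('inf'), True))  # → 6
```

The result is the same as exhaustive minimax; the pruning only skips lines that provably cannot change the answer.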
So chess computers today, working with those tools, are better than the best human players: using the pruning techniques above, the effective branching factor (the number of moves actually explored at each state in the tree) drops from about 35 to somewhere between 1 and 10, depending on the exact techniques used. The reason Go didn’t fall to the same techniques is that it’s just so much more computationally difficult. Chess’s branching factor (the number of possible moves at each state) is about 35, and games last about 80 moves; Go’s branching factor is about 250 on average, and games run about 150 moves. It also features a few difficulties that chess does not:
It’s a very non-local game, both in space and in time: a move made at the very start of the game, halfway across the board from the current fighting, can have implications fifty turns later. This creates a horizon problem: in chess, most positions become quiescent—unlikely to change much in evaluation with further play—once the captures stop, and modern chess engines will play all the way through capture sequences for exactly this reason. There’s no similar stopping metric for Go engines.
It’s very difficult to evaluate a board position on purely objective grounds, or rather, we haven’t figured out how to phrase, mathematically, what about a good go position is good. Neither present control of territory nor present number of captures bears very strongly on the eventual result.
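The branching factors and game lengths quoted above are worth plugging in, if only for the spectacle. A quick back-of-the-envelope in Python, whose arbitrary-precision integers make this painless:

```python
# Exhaustive tree size is roughly branching_factor ** depth.
chess_depth7 = 35 ** 7               # exhaustive chess search to depth 7
chess_game = 35 ** 80                # a whole chess game tree, roughly
go_game = 250 ** 150                 # a whole Go game tree, roughly

print(f"{chess_depth7:,}")           # → 64,339,296,875 (the ~60 billion above)
print(len(str(chess_game)))          # → 124 (digits)
print(len(str(go_game)))             # → 360 (digits)
```

A full Go tree isn’t just bigger than the chess tree; it’s bigger by a factor with a couple hundred digits of its own, which is why shaving the branching factor down by pruning alone was never going to be enough.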
Because of the size of the problem space for Go, traditional techniques don’t work. The branching factor remains too high. Modern Go programs use one (or sometimes both) of two approaches: either they use hand-coded expert knowledge to sort and select promising moves for tree expansion (which frequently misses oddball moves that are nevertheless good), or they randomly play out a bunch of games from the current position to the end, and sample the result to pick the best move on aggregate (which frequently misses obviously good moves). The best of the pre-AlphaGo bunch used both, combining expert knowledge to pick the best moves to sample with the oddball-finding power of random simulation.
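The random-playout half of that recipe is simple enough to sketch. Here it is on a toy game instead of Go; the game (Nim: take one to three stones, taking the last stone wins) and the playout count are my own choices for illustration, not anything a real Go engine uses.

```python
import random

# Flat Monte Carlo move selection: after each candidate move, play many
# uniformly random games to the end and keep the move with the best
# average result. No game knowledge beyond the rules is required.
def random_playout(stones, to_move):
    """Play random moves to the end; return the winning player (0 or 1)."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return to_move             # whoever took the last stone wins
        to_move = 1 - to_move

def best_move(stones, player, playouts=2000):
    """Estimate each move's win rate by sampling; return the best move."""
    win_rate = {}
    for take in range(1, min(3, stones) + 1):
        if stones - take == 0:
            win_rate[take] = 1.0       # taking the last stone wins outright
            continue
        wins = sum(random_playout(stones - take, 1 - player) == player
                   for _ in range(playouts))
        win_rate[take] = wins / playouts
    return max(win_rate, key=win_rate.get)

# From 7 stones, the sampled win rates reliably point at taking 3,
# leaving the opponent a losing multiple of four.
print(best_move(7, player=0))  # → 3
```

The same sampling that finds the right Nim move also finds the oddball Go moves hand-coded expert knowledge misses, which is why the best pre-AlphaGo engines leaned on it.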
AlphaGo does a little bit of that, but at its heart, it learned to play a lot like humans do: DeepMind’s researchers fed it a diet of 30 million sample positions and the eventual results, and built a neural network to identify what a good board looks like and what it doesn’t. (Literally—the input into the program is a 19×19 image, with pixels set to values representing empty, black stone, or white stone.) They built a second neural network to identify which states are the best ones to simulate through in a random simulation, and Bob’s your uncle: a professional-level Go program. Their paper suspects it’s about as good as a mid-level human professional—the exhibition tournament they included in the paper saw AlphaGo beat the European human champ five games to zero, four by resignation, but the Euro champ isn’t a top-tier player worldwide. February will see an exhibition match between AlphaGo and the South Korean champion, a world-class player; we’ll see how it does against him. AlphaGo also won 499 out of 500 games against the prior state-of-the-art Go computers.
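For concreteness, here’s that board-as-image encoding in cartoon form. The real AlphaGo feeds its networks a richer stack of feature planes; this pure-Python sketch is just the simplest thing a network could consume.

```python
# A 19x19 grid of labels: 0 empty, 1 black stone, 2 white stone.
EMPTY, BLACK, WHITE = 0, 1, 2
board = [[EMPTY] * 19 for _ in range(19)]
board[3][3] = BLACK                    # a stone on a 4-4 point
board[15][15] = WHITE

# Networks usually want one plane per label (one-hot) rather than raw values:
planes = [[[1.0 if board[r][c] == v else 0.0 for c in range(19)]
           for r in range(19)] for v in (EMPTY, BLACK, WHITE)]
print(len(planes), len(planes[0]), len(planes[0][0]))  # → 3 19 19
```

The point of the encoding is that the network sees only the picture, the same way a human reading a board diagram does; everything it knows about good shape, it learned from the training positions.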
The most interesting thing about this development is that it learned to play a lot like a human would—it studied past games and figured out from that study what was good play and what wasn’t, then played a lot (against itself and past versions of itself), which is roughly analogous to playing a lot and working on go problems (the way human go players are supposed to get better). The obstacle to a general game-playing AI (I could buy a copy of Arkham Horror, finally, without having to rope the wife into an eight-hour marathon of doom!) is that training neural networks is currently pretty slow. As I mentioned in the first post, AlphaGo had to study about thirty million positions and play itself many, many times to get to the level of competence it’s at now; presumably, that will improve as DeepMind hones its learning techniques and discovers new and faster ways.
That said, humans still have one thing to be proud of: efficiency. The version of AlphaGo that beat the European champ ran on about 1200 CPUs and about 200 GPUs, whereas the human brain, which is nearly as good, draws about 20 watts.
Some extra reading for you: GoGameGuru has all the game replays, which are interesting if you’re familiar with Go (it doesn’t take much—if you’ve seen a human game or five and a game-against-computer or five, you’ve probably already noticed the differences in play, if only subconsciously). AlphaGo has a style that looks very human. Apparently, Go professionals expect the Korean champ to beat AlphaGo, and another source has the game taking place in March. If I were the champ, I’d be pushing for, like, next week. I don’t know how long the million-dollar prize is gonna be good for.
Here’s a commentary on one of AlphaGo’s games against the European champ.
I originally wrote this analysis for the people over at the Quarter to Three forum, which explains the breezy, off-the-cuff style. If you’ve been following us for a while, some of the introduction will have seemed repetitive, too. Anyway, I hope it was still informative and interesting. -Fishbreath
Picture your favorite helicopter gunship. I can’t tell you much about it without knowing what it is, but I can tell you one thing: unless you’re a weirdo like me, it has two seats. I do not think this must be so. To explain why is going to take a little detour into the tactical thinking of helicopter pilots, and how that affects the way they’re employed on the battlefield.
Picture yourself as a fixed-wing pilot. You can easily fly above all but the most specialized of ground-based weapons systems. Compared to anything in the dirt, you are extremely fast, so fast that they may as well be standing still. Your bog-standard general purpose bomb is several times more explosive than the largest explosive projectiles commonly hurled by things on the ground. Your precision-guided weapons are more precise, your sensors are better, you can see further. You are as unto a god, or at least a hero of Greek or Norse myth, striking down your foes with the weight of inevitability behind you.
Got that image in your mind? Savor it for a minute. Now forget all about it, because that isn’t how flying a helicopter works at all.
Picture yourself as a helicopter pilot. If you fly high, a plane will shoot you down, or a long-range air defense system. If you fly low, things on the ground a plane would laugh at will shoot at you, and might shoot you down. You are fast, but you aren’t so fast that you can really use it to enhance your survivability. You do not generally carry especially heavy weapons, and your sensors are pretty good, but you aren’t high enough to see a long way. You are certainly not as unto a god. You’re scary, but it’s the kind of scary your adversaries can actually kill.
What does that mean for you, noble helo pilot? How does it shape your doctrine? If you’re looking for a metaphor, the right analogue for a helicopter is not an IFV or a tank. If you’re a helicopter pilot, your mindset is ‘sky infantry’. You keep out of sight, use natural cover, engage quickly before getting out of sight, and generally skulk around in the mud. Just like the infantryman has a pretty bum deal on the ground, the helo pilot has a pretty bum deal in the sky. The only difference is that the helo pilot has someone to look down on.
Why do attack helicopters generally feature two crew? Because there are three jobs in a helicopter, and one person can’t do all three at once. You need to fly the helicopter, which is a difficult task on its own; you need to use the weapons, which often requires going heads-down; you need to keep your eyes up to see threats visually, since a lot of the things that can shoot down a helicopter can only be detected by the Mark I Eyeball1. The pilot can fly and watch, if the gunner is working with the sensors or weapons systems, and the gunner can keep an eye out, if the flying gets especially hard on the pilot. Simply put, each crewman can do about one and a half things simultaneously, and each helicopter has three things you need to do. Perfect coverage.
Mathematically, it looks bad for the single-seat concept. One crewman can do one and a half things. The helicopter has three things that need to be done. Let’s work on bringing those numbers closer together.
First off: we can install an advanced autopilot. We’ll use the Ka-50, the only single-seat attack helicopter ever to see combat service, as our example2. Despite its age, the Ka-50 has one of the most advanced autopilot systems ever installed in a helicopter. It’s fully capable of flying the helicopter through a noncombat mission from just after takeoff to just before landing, and can take control in nearly every combat situation that doesn’t involve immediate evasive action or nap of the earth flying. This reduces our list of things to do to two, but our single crewman can still only do one and a half of them.
How can we fix that? Add a second crewman, but put him in a different airframe. Your helicopters fly in pairs. How many things will we need to do at once? Fly, but the autopilot takes care of that for us. Use weapons, yes, but that’s a shared task: only one helicopter needs to be engaging at a time. That’s one thing between us. Keep an eye out, yes: ideally, both of us should be keeping an eye out, but in a pinch, one pilot can watch for the whole team. That leaves us two crewmen, who together can do three things, and two or three things to do between them (that is, weapons plus two sets of eyes, or weapons plus one set of eyes).
That’s really all there is to the argument. Additional automation can help reduce the workload further. A fancy threat warning system helps reduce the need for constant lookout, and helps direct pilot attention during the few, emergency situations where the autopilot is insufficient. Better weapons and datalinks allow for off-board targeting, which can be used to move the weapons employment burden around between helicopters. Autopilots with more options yield further reductions in flying workload—a terrain-following radar or lidar, for instance, would give the Ka-50 the ability to fly nap of the earth at high speeds. Better sensors help reduce the time spent heads-down for weapons employment.
I’m nearing my target word count here, so I’ll wrap up with some quick pros and cons. I’ve made a decent argument that a single-seat attack helicopter is a reasonable choice, so why might you prefer one? To start, you have reduced aircrew requirements, and reduced aircrew losses: every airframe shot down takes one crewman with it, not two. You have a great deal of large-scale tactical flexibility. Since the two-ship element is the basic unit of maneuver, you can choose to advance in bounding overwatch, for instance, or widely separate your eyes from your weapons. Your eyes helo might be just behind solid cover on a ridge outside of enemy engagement range, able to peek and feed coordinates to your weapons helicopter, which might be advancing in concealment much nearer the enemy. In separating eyes and weapons, terrain may sometimes allow a quick attack from two angles in rapid succession, or at entirely the same time. If you have a small number of helicopter pilots, single-seat airframes let you put more into the sky at once. It’s a setup optimized for tankbusting: large targets, relatively easily spotted and shared.
Why might you choose the standard two-seater? It’s better in moderately threat-heavy COIN situations, where the front lines are poorly defined and any territory may become enemy territory. Two-seat helicopters have better small-scale tactical flexibility, and a single two-seat helicopter can swing between navigation, evasion, and counterattack much more quickly than a pair of single-seat airframes. For another, two-seaters are tried and tested. Nobody operates a single-seat attack helicopter in any real number today, not because it’s not a workable theory, but because the only modern example entered service well after its technology started down the hill toward obsolescence. Today, you’d have to build your own single-seater, or buy a bunch of Kamovs and refit them, while you can buy Havocs or Cobras or, for that matter, the Ka-52, basically off-the-shelf. Two-seat helicopters have better engagement speed: for a given number of helicopters and a given number of weapons, the two-seaters will distribute their arms faster, because each airframe is a self-contained targeting and shooting unit, not depending on another helicopter for overwatch or targeting data.
That’s about all I have. One of these days, I’ll take a look at the concept, and come up with some justifications for why Luchtburg might choose a single-seat helo.
1. Or the Mark II Eyeball, also known as the missile launch warning system.
2. The Ka-50 is outmoded in today’s market, but if you look at its competitors in the late ’80s, when it first appeared on the scene, it’s a much closer case, and depends mainly upon some tactical considerations I’ll get into later.