Author Archives: Fishbreath

US Ground Combat Systems Are Not Obsolete

I came across this article in the Free Beacon this morning; its headline reads: “Army’s Ground Combat Systems Risk Being Surpassed By Russia, China”.

Look, if you’re reading this article, you’ve read a lot of our articles. You know that I, Fishbreath, am not the expert on ground combat systems. Not really my cup of tea. You know, therefore, that when I say, “Man, this article is dead wrong,” it really is just flat-out dead wrong. Let me revise the Free Beacon’s headline: “Army’s Ground Combat Systems Risk Being Roughly Equalled By Russia, China After 40 Years Of Curb-Stomping Dominance”.

In the modern era, a combat system’s age is not nearly as important as its current capability. The T-14 and the Type 99 are modern tanks. They compete against the modern American system, the M1A2, in the three categories by which all armored fighting vehicles are judged: firepower, protection, and systems1.

First off: firepower. The American contender mounts the stalwart Rheinmetall 120mm smoothbore gun in the 44-caliber length. The Germans, being a little squeamish about depleted uranium2, made an L/55 version for higher muzzle velocities. This gun, either the lengthened version or the original with depleted uranium, still sits in the top tier of tank guns as far as penetration goes3. The Russian and Chinese entries both use the Russian standard 125mm caliber; the Armata uses the 2A82, the shiny new version sans fume extractor for installation in the unmanned turret, while the Type 99 uses the ZPT-98, the traditional Chinese clone of the 2A46. Neither is clearly superior to the Western choice of gun. Standard 125mm ammo is nevertheless lighter and shorter overall (counting the penetrator and propellant) than the one-piece 120mm loads usually fired through the Rheinmetall guns. In exchange, the Russian-style gun gains the ability to launch ATGMs—questionably effective against modern tanks—and a little bit more power for HEAT rounds, which have the same issue as the ATGMs. Call this one a slight win for the Abrams.

Next: protection. The Type 99 falls behind quickly here; it’s more or less a T-72 hull, and the T-72 doesn’t have a great deal of headroom for armor. Too, the Type 99 has to deal with the swampy, rice-paddied Chinese south. The Chinese can’t build a T-72-based tank much heavier than the current 52 to 54 tons, and the protection they can achieve there is limited, given what they have to work with. The Armata, though it weighs in at around 50 tons itself, has the benefit of an unmanned turret. Unmanned turrets can be smaller, and armored volume is expensive in weight terms. Our own parvusimperator claims Armata has roughly Western-equivalent protection. Give Armata an edge, even; there are no squishy humans in its turret, and no explodey ammo in its hull. The unmanned turret, unproven though it may be, neatly isolates the two. Call this one a slight win for the Russians.

Finally: systems. This is the hardest one to write about, since the Russians and the Chinese aren’t talking. We know more or less what’s in the M1A2: nice digital moving-map navigation, color displays, modern sighting units, separate ones for the commander and gunner, with nice thermal displays. I think it’s reasonable to assume the Armata has similar. We can see that it has an independent sight for the commander, and the Russian avionics industry has built color MFDs and moving map systems in the past. Presumably, the charionics4 in their tanks won’t be too far behind. The Chinese are even harder to speculate about; their latest MBT entered service around the turn of the century, and who knows what they’ve stuck in it. Call this one a tie between the Americans and the Russians.

In a way, though, systems are the least important item here. Unlike armor or guns, swapping out the computers, stabilizers, navigation systems, and sights in tanks is more or less trivial. There may be integration costs, and there are definitely upgrade costs, but ordinarily, you don’t run into the same sort of critical design problems you find when, say, trying to cram a 140mm gun into an Abrams turret.

So that about wraps it up. Contra the Free Beacon article, the new Combloc5 tanks do not surpass the Abrams in any meaningful way. Where they are superior, it’s a matter of degrees. Elsewhere, they still fall behind the Abrams. What we have today is not a new era of Combloc dominance. It’s a return to parity for the first time in almost forty years.

Let’s go back a few years more than that. It’s 1972, and the fearsome T-72 has just entered service. It’s faster than the M-60, hits harder, has better armor, and is being cranked out of the Soviet tank factories at an astonishing rate. The armored fist of the Soviet Union could well crush Western Europe. This doesn’t sit well with Western Europe.

The Germans and Americans are already hard at work on the MBT-70. It reaches a little too far, and doesn’t quite work out. The Germans and Americans each take the blueprints and build something on their own, and we get the Leopard 2 and the M1 Abrams, entering service in 1979 and 1980. This begins the aforementioned era of Western tank dominance. The Abrams and the Leo 2 are vastly superior to the T-72 and T-80. The Russians run various upgrade projects on the T-72 and T-80 over the years, but never regain the lead. The Leo 2 and Abrams see upgrades on more or less the same schedule; they’re still a generation ahead.

Finally, today. The Russians have Armata, a legitimate contender; the Chinese have the Type 99, which is to the Abrams and the Armata roughly what the Gripen is to the F-22: some of the same technologies, still half a class behind. Which brings us to the final decider. Quantity.

The Russians have about one hundred Armatas. They only entered service last year, so I give them a pass. Their eventual plan is to acquire about 2300.

The Chinese have about 800 Type 99s. I have no idea if they’re still being produced.

The Americans have roughly 1000 M1A2s, the most recent Abrams. Of course, we also have about 5000 M1A1s of various marks, most of which have been upgraded to include nearly-modern electronics.

Even if we allow that the Type 99 and the Armata are superior to the average Abrams in American service, which is wrong, we still have twice as many as both other types combined.

The Free Beacon may say otherwise, but I say we’re doing just fine.


  1. To include sights and viewers, as well as command and control computers. 
  2. Understandable, given that in most hypothetical wars, the Wehrmacht Bundeswehr would be shooting it over their own land. 
  3. As far as anyone knows. Armies are a little cagey about revealing how punchy their guns are, for some unfathomable national security reason. 
  4. Electronic systems for tanks, by analogy to avionics. (An avion is a French plane, a char is a French tank.) 
  5. Yes, I know they are, respectively, not Communists anymore and nowadays only Communists inasmuch as they’re heirs to a truly Communist body count. I don’t care. ‘Combloc’ is a reasonable way to refer to Russia and China in the context of this article. 

Swedish Strike Saturday: the AJS-37 Viggen

The AJS-37 Viggen is a modernized classic: a 1990s update of the 1971 AJ-37 Viggen.

Why is it a classic, though? You may be forgiven for not knowing. In fact, I did not know until I saw that Leatherneck Simulations1 are making a DCS AJS-37. So, on this first Swedish Strike Saturday, let’s take a look at why the Viggen is such an icon, and why you ought to be excited for it.

In doing so, we first have to take a trip back in history, back to Sweden circa 1961. The enemy du jour is the Soviet Bear. Although the Saab 35 Draken matches up well against Soviet fighters of the day, the Saab 32 Lansen, a late first-generation jet which handles the attack role, is looking a little long in the tooth. It’s time to make something better.

Much better. The Swedes had a history of pioneering aircraft designs out of Saab, and the Viggen was no exception.

It was the first canard aircraft to enter front-line service, and featured the first afterburning turbofan in a strike fighter. Counting from the start of development, the Viggen’s computer was the first integrated-circuit computer designed for use on an aircraft. For a time in the early 1960s, while development work was under way on the computer, Saab was the world’s largest buyer of integrated circuits. It was the first single-seat third-generation jet strike fighter to enter development, and the second to enter service2.

As one of the two first digital attack aircraft to enter service, it is, then, an object of some historical interest. Similarly, its computer is one of the first in the aviation world, and that makes it interesting to me (a computers guy). The CK37 (CK for Central Kalkylator) flight computer does just about everything data-related in the aircraft: it runs both of the cockpit displays (the HUD and the Central Indicator—think radar screen, but with navigational information, too), does navigational calculations, and handles weapon aiming.

Saab built the prototype, using individual transistors, in 1960. It was table-sized, featured about 5,000 transistors, and ran at about 100,000 cycles per second. Total weight was about 450 pounds. Obviously, it wasn’t altogether suitable for aerial usage. Redesign efforts in 1961 used the newly-available ‘integrated circuit’.

Enter Fairchild, who beat Texas Instruments (!) for the contract. Their integrated circuits featured a whopping two transistors per square millimeter, ten times the density of discrete components. A few years later, in 1964, Saab’s computing division delivered the final CK37 prototypes. This final version could run about 200,000 instructions per second, with about 28 kilobytes of magnetic core memory at a density of about one core per millimeter3. It weighed about 150 pounds, comprised five computer units, and drew about 550 watts of power.

And, going by everything I’ve seen, it made for a tremendously effective aircraft. On seven hardpoints, the original Viggen could carry a combination of weapons: 135mm rockets, 120kg bombs, the RB-05A MCLOS missile, and the RB-04 anti-ship missile. Between the radar and the advanced (for its day) navigation system, the Viggen could fly in ugly weather, dropping unguided bombs precisely on any target it could see by radar. Although its air search capabilities were rudimentary, the radar could still cue Sidewinder seekers; on those grounds, it was not altogether ineffective as a fighter.

It did so without a navigator; the autopilot and navigation systems were sufficient to permit the pilot alone to fly and fight. By all accounts, the Viggen gave excellent service from its introduction date in 1971 to its retirement thirty-odd years later. Along the way, it gained the RB-75 missile4, and a variant called the JA-37. A fighter first and striker second, the JA-37 gained a better computer, a lookdown-shootdown radar, and support for the Skyflash5 missile. Much later, both the JA and the AJ Viggens saw some upgrades. The JA-37 became the JA-37D, with a glass cockpit and the ability to sling AMRAAMs6. The AJ-37 became the AJS-37, and that’s the plane we’re interested in today.

Development of the JAS-39 Gripen7, the follow-on to the Viggen and the Draken, began in 1979. It didn’t fly until 1988, and it didn’t enter service until 1997. In the interim, Swedish military planners began to get a little nervous about the state of their ground attack force. Though the Viggen was a solid workhorse, its armaments were outmoded, and its navigation system was fiddly.

Some of the Gripen’s weaponry was already available in the early 1990s, though, including the BK-90 submunitions dispenser8 and the RBS-15 anti-ship missile. The S-modification allows the Viggen to launch both, giving it access to modern smart weapons. At the same time, Saab’s designers added a data cartridge, greatly simplifying pre-mission preparation. The extra data capacity in the cartridge also allowed for a terrain contour matching function. The data cartridge contains information about the elevation contours expected during the mission and their locations; in flight, the computer correlates the expected contours to the actual, observed contours from the radar altimeter. This allows the computer to update the INS with true positions, correcting to some degree for drift during flight.
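The matching step lends itself to a quick sketch. What follows is a toy Python illustration of the general idea only, not the CK37’s actual algorithm (which I haven’t seen): slide the observed altimeter profile along the expected contour data and keep the offset with the smallest squared error. A nonzero offset tells you how far the INS has drifted along track.

```python
def best_offset(expected, observed):
    """Find where the observed (radar altimeter) elevation profile best
    matches the expected contour data, by least squared error.

    `expected` is the stored elevation profile along the planned track;
    `observed` is the shorter profile actually measured in flight."""
    n = len(observed)
    best, best_err = 0, float("inf")
    for offset in range(len(expected) - n + 1):
        err = sum((expected[offset + i] - observed[i]) ** 2 for i in range(n))
        if err < best_err:
            best, best_err = offset, err
    return best
```

If the computer expected the observed profile to start at sample 0 but the best match sits at sample 3, it can shift its INS position estimate by three samples’ worth of distance.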

With those upgrades, the AJS-37 soldiered on until 2005, flying alongside the Gripen for eight years, at which point it was finally retired. An airplane of many firsts, it was also a notable last: the last of the great 1970s low-altitude strike fighters to fly its original mission profile. The Tornado, the F-111, and all the Viggen’s other contemporaries were upgraded to fly more modern, middle-altitude missions. The Viggen never lost its focus as a low-altitude interdictor.

Is the Viggen a good interdictor in its original threat environment? Do the upgrades make it better? Is it suitable for the modern world? How good is the Leatherneck recreation? This paragraph is where I had hoped to tell you that we would soon be finding out. Unfortunately, it’ll be a little longer than I had hoped; Leatherneck’s Viggen releases on January 27, and it isn’t looking like the Soapbox is big enough for a preview key. No matter—that just gives me more time to prep for the articles down the road. In February, you can expect two or three of them, touching on the answers to the questions posed at the start of this paragraph.

Stay tuned!


  1. Makers of the DCS MiG-21. 
  2. The A-7 Corsair came first, entering service in the late 1960s to the Viggen’s 1971. 
  3. The CK37 divides its memory into 8192 words of 28 bits in length, with 1536 words as working space and the remainder write-protected data. 
  4. The AGM-65A Maverick; the Swedes have a thing about keeping American names for missiles. 
  5. Or RB-71. 
  6. Surprisingly, they call this one the AIM-120. 
  7. The Gripen is a longtime favorite of mine. 
  8. The Swedes are anti-cluster-bomb, so a weapon which drops explosive bomblets is called a ‘submunitions dispenser’. 

OpenTafl AI roundup: bugs and features

This post will cover OpenTafl AI changes since the last post I wrote on the topic, further back in the v0.4.x releases. First, bugs!

Let’s do some quick recap. OpenTafl’s AI is a standard, deterministic1 tree-searching AI, using iterative deepening. That means that OpenTafl searches the whole tree2 to depth 1, then to depth 2, then depth 3, and so on, until it no longer has enough time to finish a full search3.
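In outline, that loop looks something like this. This is a minimal Python sketch (OpenTafl itself is Java, and `search_to_depth` here is a stand-in for the real fixed-depth alpha-beta search):

```python
import time

def search_to_depth(state, depth):
    """Stand-in for a real fixed-depth tree search. Returns (move, value);
    this dummy version just reports which depth it searched to."""
    return ("best-move-at-depth-%d" % depth, 0)

def iterative_deepening(state, time_budget_s, max_depth=64):
    """Search to depth 1, then 2, then 3, and so on. Only the last search
    that finished before the deadline counts."""
    deadline = time.monotonic() + time_budget_s
    best = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break  # out of time; keep the last completed result
        best = search_to_depth(state, depth)
    return best
```

The real thing also has to check the clock inside the search, so that an unfinished depth-n+1 pass can be abandoned and its partial tree discarded; this sketch only checks between passes.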

You may have noticed, if you’ve used OpenTafl, that searching to depth n+1 takes a good deal longer than searching to depth n, and that it’s altogether possible that the process above might leave OpenTafl with a lot of time left over. I did not fail to notice this, and I implemented a couple of additional searches (extension searches, as they’re known) to help fill in this time.

The first, I refer to as continuation search. Continuation search takes the existing tree and starts a new search at the root node, using the work already done and searching to depth n+1. Obviously, continuation search doesn’t expect to finish that search, but it will reach some new nodes and provide us some new information. After continuation search, OpenTafl does what I call a horizon search: it finds the leaf nodes corresponding to the current best-known children of the root node, then runs normal searches starting with the leaf nodes, to verify that there aren’t terrible consequences to a certain move lurking just behind the search horizon.

These are fairly easy concepts to understand, my poor explanations notwithstanding. The bugs I referred to in the title are more insidious. They come down to a much more complicated concept: what conditions must the children of a node meet for that node’s evaluation to be valid?

In the iterative deepening phase of the search, the answer doesn’t matter. Remember, OpenTafl discards any tree it doesn’t finish. When we’re doing extension searches, though, we don’t have that luxury. OpenTafl must be able to discern when a certain node has not been fully searched. I added a flag value to the search to note that a certain node has been left intentionally unvalued, which gets set whenever we have to abandon a search because we’ve run out of time. If a node did not exist in the tree prior to the extension search, and it has unvalued children, then it is also unvalued. If a node did exist in the tree prior to its extension search and it has unvalued children, this is okay! We ignore the unvalued children and use the information we’ve gained4. If an unvalued node is left in the tree after those steps, we ignore its value. Any unvalued node is misleading, and we should avoid using its value when deciding how to play.
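Those rules are easier to follow in code than in prose. Here is a minimal sketch of the propagation logic, in Python rather than OpenTafl’s Java, with an invented `Node` class and a max-node evaluation purely for illustration:

```python
UNVALUED = object()  # sentinel: this node was intentionally left unvalued

class Node:
    def __init__(self, existed_before, children=None, value=UNVALUED):
        self.existed_before = existed_before  # in the tree before the extension search?
        self.children = children or []
        self.value = value

def resolve_value(node):
    """Propagate values bottom-up, applying the validity rules."""
    for child in node.children:
        resolve_value(child)
    if not node.children:
        return  # leaves keep whatever value (or non-value) they have
    valued = [c.value for c in node.children if c.value is not UNVALUED]
    if not valued:
        node.value = UNVALUED  # nothing below us was searched; nothing learned
    elif node.existed_before:
        node.value = max(valued)  # pre-existing node: ignore unvalued children
    elif len(valued) < len(node.children):
        node.value = UNVALUED  # new node with unvalued children: untrustworthy
    else:
        node.value = max(valued)  # fully searched: value normally
```

At move-selection time, any child of the root still marked `UNVALUED` is simply skipped.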

This issue led to poor play, as both horizon and continuation search had a chance to introduce bad data into the tree. I finally tracked it down and fixed it in v0.4.4.6b.

After that, I came across another few bugs, lesser in severity but still quite bad for OpenTafl’s play: when evaluating a draw position for the attackers, OpenTafl would incorrectly view it as more defender-favorable than it should have been5. OpenTafl also had some trouble with repetitions, incorrectly failing to increment the repetitions table in some search situations. That’s one of the more important gains over v0.4.4.7b—v0.4.5.0b is absolutely incisive in playing out repetitions, as some of the players at playtaflonline.com discovered after the update.

Finally, a few minor time usage bugs are no longer present, although there are some headscratchers where the AI seems to lose half a second or so to some task I cannot locate, and which it does not count in its time-use accounting.

That about wraps up bugs. Features, as usual, are more fun.

First, OpenTafl now is willing to play for a draw in rare circumstances. If its evaluation tilts overwhelmingly toward the other side, and it sees a draw in its search tree, it evaluates the draw poorly, but better than a loss.

That depends on the second feature, which is an improved evaluation function. Rather than guess, I decided to be scientific about it: I built four OpenTafl variants, each with a certain evaluation coefficient raised above the rest. Those variants played each other in a battle royale, and based on the outcome, I picked new coefficients. The coefficients differ by board size; 7×7 boards consider material more heavily, while larger boards prefer to play positionally6.

Positional play comes from the last and most important feature: piece square tables. Credit for the idea goes to Andreas Persson (on Twitter @apgamesdev), who linked me to the chessprogramming wiki article, and also provided a first pass at the tables.

I should back up a bit first, though. Piece square tables are descriptively-named tables which assign a value to having a certain piece type on a certain space. For instance, the space diagonally adjacent to a corner space in a corner-escape game is very important for the besiegers. That space gets a high positive coefficient. On the other hand, the spaces directly adjacent to the corners are quite bad for the attackers, and get a moderately large negative coefficient. OpenTafl’s evaluator handles the exact values.
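To make that concrete, here is what a fragment of an attacker-side table for a 7×7 corner-escape game might look like. The coefficients are invented for illustration; OpenTafl’s real tables are generated in Java and tuned differently:

```python
# Invented attacker piece-square table for a 7x7 corner-escape board.
# Corners are escape squares (0); squares orthogonally adjacent to a
# corner are bad for the attackers (-6); the diagonal squares, which
# anchor a corner blockade, are very good (+10).
ATTACKER_TABLE = [
    [  0, -6, 0, 0, 0, -6,  0],
    [ -6, 10, 0, 0, 0, 10, -6],
    [  0,  0, 0, 0, 0,  0,  0],
    [  0,  0, 0, 0, 0,  0,  0],
    [  0,  0, 0, 0, 0,  0,  0],
    [ -6, 10, 0, 0, 0, 10, -6],
    [  0, -6, 0, 0, 0, -6,  0],
]

def positional_score(attacker_positions):
    """Sum the table entries for each attacker; the evaluator adds this
    to its material and other terms."""
    return sum(ATTACKER_TABLE[row][col] for row, col in attacker_positions)
```

An attacker sitting on a blockade anchor square scores +10; one crowding the square right next to the corner costs the side 6 points.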

The benefits of this approach are manifold: not only does OpenTafl know when the opponent is building a good shape, it now has a sense for position in a global sense. (It previously had some sense of position relative to other pieces, but that was not sufficient.) Because of this, it is much better now at picking moves which serve more than one purpose. If it can strengthen its shape by making a capture, it’ll do so. If it can weaken its opponent’s shape, so much the better. The code which generates the piece square tables can be found here7.

The outcome is most pleasing. I can no longer easily beat OpenTafl on 11×11 corner escape boards, and games in that family are presently the most popular in online play. Equal-time matches are all but a lost cause, and I have to engage my brain in a way I never did before if I allow myself more thinking time. Now, I am not all that good a player, and those who are better than me still handle OpenTafl pretty roughly, but it now plays at a low-intermediate level. Given that it barely even existed twelve months ago, I’d say that’s good progress.


  1. Mostly. 
  2. Kind of. 
  3. More or less. For being mostly deterministic, AI stuff is remarkably fuzzy. On an unrelated note, look! We have new footnotes, with links and everything! 
  4. You can construct a game tree where this assumption—that not searching all of the outcomes when exploring an already-valued node is acceptable—causes trouble, but the fault for that lies more with the evaluation function than with the search. In such cases, the evaluation function must be so incorrect as to evaluate a node which leads to a loss just over the horizon as better than a node which does not lead to an imminent loss to a slightly deeper depth. They are quite rare, though, and I haven’t come across one yet in testing. 
  5. It was supposed to be a plus sign, but it was a minus sign instead. Oops. 
  6. On the to-do list is changing coefficients over the course of a game—brandub is more or less an endgame study, and at some point, the evaluator should prefer material over position in endgames even on larger boards. 
  7. I chose to generate the tables because it’s easier to maintain and update. Work for corner-escape 11×11 boards generalizes to most corner escape variants; the same is true for edges. The only boards which really, truly take special cases are 7×7, since the corners are such a vast majority of the board, and moves which might be considered neutral or iffy on larger boards ought to be given a chance—there aren’t many options in the first place. 

2016 Tafl Efforts: Results and Roundup

First off: the inaugural OpenTafl Computer Tafl Open has come to a close. It was a bit of an anticlimax, I must admit, but fun times nevertheless.

To recap, only one entry (J.A.R.L) made it in on time. On January 2nd, I had the AIs run their matches, and it was all over inside of 20 minutes, with a bit of technical difficulty time to boot. You can find the game records here.

To move one layer deeper into the recap, both AIs won one game each out of the match. J.A.R.L won in 22 moves, OpenTafl won in 18, giving the victory to OpenTafl. Disappointingly for me, OpenTafl played quite poorly in its stint as the attackers, allowing J.A.R.L to quickly set up a strong structure funneling its king to the bottom right of the board. Disappointingly for Jono, J.A.R.L seemed to go off the rails when it played the attacking side, leaving open ranks and files and handing OpenTafl a certain victory. Deeper analysis is coming, although, not being a great player myself, I can’t offer too much insight. (Especially given that neither AI played especially well.)

I do expect that, when Jono finishes fixing J.A.R.L, it’ll be stronger than OpenTafl is today. He intends to make its source code available in the coming year, as a starting point for further AI development. (If feasible, I hope to steal his distance-to-the-corner evaluation.)

There will be a 2017 OpenTafl Computer Tafl Open, with the same rules and schedule. I’ll be creating a page for it soon.

Next: progress on OpenTafl itself. It’s difficult to overstate how much has happened in the past year. Last January, OpenTafl was a very simple command-line program with none of the persistent-screen features it has today; it had no support for external AIs, no multiplayer, no notation or saved games, and a comparatively rudimentary built-in AI.

The first major change of the year was switching to Lanterna, and that enabled many of the following ones. Lanterna, the terminal graphics framework OpenTafl uses to render to the screen, allows for tons of fancy features the original not-quite-a-solution did not. Menus, for one. For another, a UI which makes sense for the complicated application OpenTafl was destined to become. Although it’s the easiest thing to overlook in this list of features, it’s the most foundational. Very few of the remaining items could have happened without it.

Next up: external AI support. In the early days, I only planned for OpenTafl to be a fun little toy. At the end of that plan, it might have been something I could use to play my weekly (… well, kind of) tafl game without having to deal with a web interface. (For what it’s worth, Tuireann’s playtaflonline.com renders that goal obsolete, unless you really like OpenTafl.)

Later on, as I got into work on OpenTafl’s built-in AI, I realized what an amazing object of mathematical interest tafl is, and that it has not, to date, seen anything like the kind of study it richly deserves. As such, I decided I wanted OpenTafl to be a host for that sort of study. Much of what we know about chess, go, and other historical abstract strategy games comes from the enormous corpus of games played. That corpus does not yet exist for tafl games, the amazing efforts of people like Aage Nielsen and Tuireann notwithstanding. The quickest way to develop a good corpus is to play lots of games between good AIs. Good AIs are hard to come by if every AI author also needs to build a UI and a host.

So, OpenTafl fills the void: by implementing OpenTafl’s straightforward engine protocol, AI authors suddenly gain access to a broad spectrum of opponents. To start with, they can play their AI against all other AIs implementing the protocol, any interested human with a copy of OpenTafl, and possibly even the tafl mavens at playtaflonline.com. Not only that, but the AI selfplay mode allows AI authors to verify progress, a critical part of the development process.

Multiplayer was an obvious extension, although it hasn’t seen a great deal of use. (There are, admittedly, better systems out there.) It proved to be relatively straightforward, and although there are some features I’d like to work out eventually (such as tournaments, a more permanent database, and a system for client-side latency tracking to allow for client-side correction of the received server clock stats), I’m happy with it as it stands.

OpenTafl is also the first tafl tool to define a full specification for tafl notation, and the first to fully implement its specification. The Java files which parse OpenTafl notation to OpenTafl objects, and which turn OpenTafl objects into OpenTafl notation, are in the public domain, free for anyone to modify for their own AI projects, another major benefit.

In defining OpenTafl notation, I wanted to do two things: first, to craft a notation which is easily human-readable, in the tradition of chess notation; and second, to remain interoperable with previous tafl notation efforts, such as Damian Walker’s. The latter goal was trivial; OpenTafl notation is a superset of other tafl notations. The former goal was a little more difficult, and the rules notation is notably rather hard to sight-read unless you’re very familiar with it, but on balance, I think the notations people care about most—moves and games—are quite clear.

Having defined a notation and written code to parse and generate it, I was a hop, skip, and jump away from saved games. Shortly after, I moved on to replays and commentaries. Once again a first: OpenTafl is the first tool which can be used to view and edit annotations on game replays. Puzzles were another obvious addition. In 2017, I hope to release puzzles on a more or less regular basis.

Last and the opposite of least, the AI. Until the tournament revealed that J.A.R.L is on par with or better than OpenTafl, OpenTafl was the strongest tafl-playing program in existence. I’ve written lengthy posts on the AI in the past, and hope to come up with another one soon, talking about changes in v0.4.5.0b, which greatly improved OpenTafl’s play on large boards.

Finally, plans. 2017 will likely be a maintenance year for OpenTafl, since other personal projects demand my time. I may tackle some of the multiplayer features, and I’ll probably dabble in AI improvements, but 2017 will not resemble 2016 in pace of work. I hope to run a 2017 tafl tournament, especially since the engine protocol is now stable, along with OpenTafl itself. I may also explore creating a PPA for OpenTafl.

Anyway, there you have it: 2016 in review. Look for the AI post in the coming weeks.

Fishbreath Plays Total War: Warhammer

I fear we are too late.

Under the High King’s banner, we drove the grobi scum out of the halls of our ancestors. We chased them through the badlands and put them to the az, and now they will never trouble us again. Our diplomats traveled the whole of the world, drawing together the karaks and reforging the alliances of old. We stood side by side with men for the first time in a thousand years.

But while we looked south, Chaos fell on the world from the north. Kislev fell. Nordland teeters on the brink. Men fought men in the Empire’s heartland, and now tendrils of darkness reach the very gates of Altdorf.

The High King looks north now. Umgi and dawi alike are united under his command. So we march to the lands of men, az in hand, to face those who would bring about the end of all things—servants and champions of the dark gods.

The Empire is a shadow of its old self. The Wood Elves still make war on all who stand for order. Their stubbornness may yet doom us all. We are the world’s last hope.

– Elmador Oathforged

Warhammer is an excellent setting for storytelling.

You should need no further convincing, but in the event you do, let me elaborate. From its rather humble beginnings as a miniatures wargame, Warhammer Fantasy Battles1 developed a world full of timeless themes for war stories: dramatic final stands against insurmountable odds, the evil horde sweeping through the world to eradicate all that is good and right, brave men standing athwart the tide.

Total War games are story generators. Perhaps they aren’t as effective in that role as Crusader Kings 2, but they nevertheless make interesting alternate histories. Note I say ‘interesting’, as in, ‘huh, that’s interesting’, and not ‘compelling’, as in, ‘I cannot wait to see where this goes next’. Previous Total War games were interesting, but not compelling. Factions aren’t all that different, generals are more or less interchangeable, your enemies are the ones next to you, and your territory is whatever you can take.

Not so much in Total War: Warhammer. Factions are very different—some depend on siege weapons, some depend on strong infantry, some depend on movement and trickiness, and all feel almost like different games. Generals have a deep skill tree, and that helps to turn them from collections of bits into characters. (I didn’t even have to start the game to look up General Oathforged’s name.) Your enemies may be across the world. Chaos, remember, comes out of the north, and the dwarfs start in the south. You can’t take territory willy-nilly, either. Most factions have some territorial restrictions. Dwarfs, for instance, can only occupy territory which was originally dwarfen: the settlements in the central plains are right out, but old dwarfen settlements occupied by the greenskins are fair game.

Ultimately, though, the thing Total War: Warhammer has over previous Total War games is its setting. It probably isn’t quite correct to say that everyone knows Warhammer, but a lot of people know Warhammer. There are more people familiar with Warhammer, I would say, than the 18th-century history of the Netherlands2. Even if the numbers were equal, the Warhammer setting is a fictional setting. By their very nature, fictional settings generate stories more easily than historical ones. This isn’t to say that there aren’t interesting stories out of the Netherlands’ exploits in the 18th century—just that they aren’t as memorable or as frequent as the stories out of the dawi’s fight against the grobi, or the Empire’s strife with its neighbors, or the coming together of all the civilized peoples to stand against Chaos.

So, when compared to other Total War games, Total War: Warhammer has much deeper emotional impact because of its setting. Game systems reinforce this: I’m not just fighting a war of conquest, I’m fighting wars of conquest to rebuild the Karaz Ankor and reclaim what was lost to dwarfkind thousands of years ago. Or, I’m not just beating up on my neighbors to take their stuff, I’m beating up on my neighbors because they are to the south, they won’t stop fighting me until they’re defeated, Chaos is to the north, and the Empire is the first and best line of defense against the Ruinous Powers. Or, I’m not just swarming up out of the badlands because I’m looking for a scrap—well, okay. Maybe the greenskins aren’t the best example, but even if they do fight just for the sake of fighting, they have a reason for it. It’s what they do: beat up on anyone small enough to take a beating, then find the next biggest thing, rinse, and repeat.

That, in my opinion, is what previous Total War games were missing, and what Total War: Warhammer has in spades: context.

To hit on a few final, technical notes: battles play quickly, more so than even the relatively quick games in recent Total War history, but the factions are varied, tactics are interesting, and the AI has a great sense for cavalry flanking maneuvers. The Creative Assembly finally got to cut loose and have some fun, and it shows here. Presentation is generally superb all around; the writers nailed the Warhammer feel, and the art design follows along. There are some spectacular battle maps, too.

Really, it’s the perfect union of theme and mechanics. I’m glad it took this long to happen, because they got it very right. Ordinarily, when I’m looking forward to a game, I build up a picture in my head of what it’ll be like. That picture is usually not altogether accurate, so when the game finally comes out, there’s a time of adjustment. The game may not be bad, but it isn’t what I’m expecting, and so in a sense, I’m disappointed. I never had that feeling with Total War: Warhammer: it is everything I had hoped it would be. If you like games that generate stories, the Total War formula, or Warhammer, you owe it to yourself to give it a whirl.

  1. May it ride eternal, shiny and chrome!
  2. Fun fact: your author’s next favorite Total War game is Empire, because he likes to be contradictory.

2016 Tafl Open: Liveblog

Tafl Open 2016: Server is now running

The OpenTafl server for the 2016 Tafl Open is now running at taflopen.manywords.press. Please see the README file included with your distribution of OpenTafl for instructions on how to use the server functionality.

Coverage begins at noon Eastern Time tomorrow, Monday, January 2nd. (That’s 1700 UTC, to be as clear as I can.) I hope to stream live here. A live blog post will be available here at the Soapbox, too. See you then!

Tafl Open 2016: An Update

Unfortunately, turnout for the tournament ended up being rather disappointing. The only competitor to submit an entry was Jonathan Teutenberg, with J.A.R.L.

By the rules as written, this leaves him with the title of champion. Fortunately for you, dear tournament follower, I spoke with Mr. Teutenberg, and we agreed to a one-match playoff between OpenTafl and J.A.R.L.

You can expect to see that game on Monday, January 2nd, starting at about noon Eastern time. There will be coverage available in several forms: a liveblog here, a stream at my Hitbox channel (I’ll provide a link when the time comes), and live viewing at a special OpenTafl multiplayer server.

To connect to that multiplayer server, go to Options in your OpenTafl client and change the server address to taflopen.manywords.press. (That server is not up yet; I’ll be starting it on Sunday night.) Connect to the server. (Note that accounts will not be transferred from the main Many Words OpenTafl server. Logging in with a new username and password will register a new account.) The game will be made available for spectators at about the start time.

Stay tuned later today for OpenTafl v0.4.5.0b, the version which will play in the tournament.

The Crossbox Podcast: Episode 14 – Christmas Special

Merry Christmas! We dive into our wishlists for a very special special episode.

Further reading
YAGM-169
In defense of the single-seat attack helo
Website search for ‘tank’, since John has written about a million of ’em


(Download)

Fishbreath Plays: Train Simulator vs. American Truck Simulator

If you caught the most recent episode of The Crossbox Podcast, you may recall that I cited these two games as examples of a genre I don’t quite understand. (I’ve come to call it the Podcast Screensaver genre1.) At the same time, I said I kind of understood the appeal of Train Simulator. Namely, driving a train is at least a little unusual. Driving a truck on a highway is a little too similar to my daily commute.

Predictably—inevitably—further experience has made me change my tune.

What makes a good entry in the Podcast Screensaver genre? It needs to take a little attention, but not so much that you can’t follow the thread of the podcast. It should present occasional challenges—if it doesn’t, it ceases to be a game in the Podcast Screensaver genre, and you might as well just watch a screensaver. Ideally, it should be immersive. Most importantly, it should be pleasing to look at.

Let’s go down the list.

Takes a little attention
American Truck Simulator fits the definition more or less perfectly. If you drive a car, you know this. Driving isn’t difficult, but it does take a constant minimum expenditure of brainpower.

Train Simulator, on the other hand, is a little harder to defend. Driving a train, though it is more exotic than driving a truck, takes basically no attention at all. You have to watch out for signals every mile or two, and if one of them is red, you have to fiddle with some brakes. Things get more complicated if you’re running a steam engine, but not dramatically more complicated.

The distribution of required attention is different, too. A driving game requires a relatively constant amount, whereas a train simulator takes extra thought when you’re coming up to a signal: you have to squint through the window to see the thing, decide whether or not to brake, and then carry out the action of braking to stop where you want to stop. This is not conducive to paying attention to a second thing. (At least, not for me.) The human mind (or my human mind) is much better at handling two constant cognitive loads (such as driving and listening) than it is at handling one constant load and one highly variable load (such as listening and train driving).

Points, then, to the truck simulator.

Presents occasional challenges
It may perhaps be a result of Train Simulator’s demographic2, or perhaps it is a result of the inherent ease of driving trains3, but Train Simulator is easy. Nor is it only easy because trains are easy. Even the scenarios labeled ‘difficult’ (for example, using a tiny British tank engine to haul a rack of passenger cars up a hill, using an enormous American gas turbine locomotive to haul a bunch of hopper cars up a different hill, or taking a steam locomotive low on water4 to its next stop) are straightforward. I’ve seen some people on forums complain about the difficulty of these precise scenarios, while I—a train neophyte if ever there was one—had no trouble whatsoever.

American Truck Simulator is also not all that difficult, provided you’ve driven a vehicle with a trailer before. That said, there are some places where it is honestly hard, mostly relating to maneuvering trailers in tight spaces, whether they be right-angle corners or narrow loading docks.

Again, points to the truck simulator.

Is immersive
Immersion is, of course, subjective, and I can see how it might go either way. For the particular games I’ve played (American Truck Simulator and Train Simulator with 2016 and 2017 routes), it comes to a coin toss.

I’ve done a little bit of driving in the American Southwest, and ATS gets that right on a reliable basis. Sunrise and sunset are also super-pretty, and the sound design is excellent. That said, Train Simulator’s Sherman Hill route also has things to recommend it, and in fact, the scenario I played there obscures one of Train Simulator’s biggest flaws.

Is pretty
This, unfortunately, is where Train Simulator falls down a bit. In terms of graphics and audio design, it lags far behind American Truck Simulator5. For a game in the Podcast Screensaver genre, visual and aural beauty are non-negotiable. The whole idea is that, while your brain is mostly focused on listening to something, you have a pleasant background scene to enjoy. If the background scene is ugly, then it all falls apart.

As I mentioned, there are moments where Train Simulator looks and sounds good. I was hauling a load of empty hopper cars up Sherman Hill at sunset. A rainstorm was overhead, but it didn’t reach the horizon, and as the sun went down, it lit the scene in a perfect gloomy orange. The sounds for the turbine locomotive I was driving were also excellent: a lovely whirring, a bell which rang as clear as itself, and an air horn in the finest tradition of train air horns. Moment to moment, though, I give this one to the truck simulator.

Conclusions
As scored above, the final tally goes to American Truck Simulator, 3-0, with one tie. I should note that the difference is not quite so vast as I make it seem. For instance, the Unreal Engine 4-based Train Sim World, the next in Dovetail Games’ series, is extremely good-looking, and the sound design is just superb. That would pretty handily tip the balance in the ‘pretty’ and ‘immersive’ categories, and suddenly the score is 2-2.

Or is it? If you’ve looked at American Truck Simulator and Train Simulator on Steam, you’ll have noticed a certain crucial difference: price.

American Truck Simulator has a list price of $20. At press time, it’s on sale for $14. Going by Euro Truck Simulator 2, we might expect DLC prices in the $10-$20 range. Those DLCs massively expand the road network—ETS2 has DLCs for regions like France and Scandinavia—along with new cargo types, which are at least graphically interesting.

Train Simulator, on the other hand, seems bound and determined to extract as much money from its captive audience as possible. A small route runs $20 or $30, and I mean small. That’s about sixty miles of track, generally without any branches off the main line besides sidings. (Some routes, however, do give you a little more for your money. Sherman Hill has two routes over the hill.) You get one to three locomotives and a few types of rolling stock, and that’s it.

In this genre, repetition is bad. The world ought to be big enough so that by the time you see scenery again, you’ve forgotten what it looks like. If the world is small, it should be cheap to expand. Train Simulator has neither quality. American Truck Simulator has both. Buy the latter.

  1. There are evidently two classes of people unlike me: those who can simply sit and listen to a piece of audio-only content, and those who can multitask effectively enough that they need not focus primarily on a piece of audio-only content. If you’re one of those sorts of people, and you still like transport games, please drop me a line as to why.
  2. Let’s face it. On aggregate, train simulator fans are, well, old.
  3. The only major challenge is learning braking distances. Working out how to keep steam up in a steam locomotive is an additional challenge. Otherwise, it’s a vehicle which travels in one dimension, and navigation is done for you at the switching office.
  4. Well, not so low that you can’t make it if you don’t know how to use the water troughs the scenario tells you to use. Which I didn’t. (Neither knew how nor did use.)
  5. At press time, the next iteration in Dovetail Games’ train sim series, Train Sim World, is in preview-beta. Built on Unreal Engine 4, it appears to be quite a lot prettier, and a lot more sonically pleasing, than Train Simulator 2017, which is built on an eight-year-old engine.