Author Archives: Fishbreath

Fishbreath Plays: SimplePlanes

I’m a fan of sandboxes.

Many of my favorite games are sandboxes, or have a sandbox element: StarMade is altogether a sandbox, Rule the Waves gives you plenty of latitude to build your own navy and your own history, Falcon 4 lets you play with someone else’s castle or kick it down as you please, and Command Ops, though less open than the rest of this list, still gives you the chance to do largely as you please inside whatever scenarios you can find or make.

So, when I saw that SimplePlanes, an aeronautics sandbox by Jundroo, who made one of my favorite physics puzzle games on Android, was now on Steam, I had to give it a whirl. We'll get the bad out of the way first: it's a port of a mobile game, so the interface is not as powerful as, say, Kerbal Space Program's (the natural comparison), and the parts list isn't quite as lengthy as I'd like. There also isn't a lot to do; as far as I can tell, there isn't a way to add new maps or new challenges, which is a shame, since either one would add a ton of longevity to the game. Finally, the combat bits could be expanded upon a little: damage is very binary right now, and hitting a plane with anything will usually pop it. That said, the flight modeling is excellent for a wee building game like this, and as with any building game, there are some superb examples out there.

With that out of the way, let's talk about the good. I'm going to do this by discussing some of the things I have built; namely, the aircraft carried by the zeppelin Inconstant, from Skypirates: the Kestrel, Falcon, Vulture, Albatross, and Gorcrow. All are based on real-world designs. The Kestrel is a riff on the XP-55 Ascender, the Falcon is based on any number of (generally French) twin-boom pusher designs of the immediate prewar and postwar periods, the Vulture is a recreation of the Sh-Tandem, a Russian ground-attack design, the Albatross is a Blohm & Voss-inspired asymmetric design, and the Gorcrow is more or less every medium bomber between 1930 and 1945. (Note that I made a few modifications to fit my zeppelin-borne aircraft requirements and restrictions, which you'll find at the end of this post.)

The Kestrel is one of my favorites, owing to its handling characteristics. The twin coaxial engines, with a total of 1,500 horsepower for only 6,000 pounds full load, push it to speeds in excess of 400 miles per hour. It fields an excellent anti-aircraft punch, and has superb maneuverability at high speeds. Its weakness comes mainly in its low-speed handling: its vertical stabilizers are small, to limit the drag they add, but this creates a prominent tendency to yaw instability at landing speed. As such, it’s a design that’s likely very prone to landing mishaps, and requires a steady hand on the stick and active feet on the pedals to put onto the skyhook. Though the design is unusual, it flies very well, responding smoothly with little adverse yaw or other undesirable handling characteristics. At the edges of its envelope, it can sometimes get the pilot into trouble; unrecoverable flat spins are a possibility.

In design, the Falcon is much more conservative: it treads on no unusual aeronautical ground. The twin-boom design provides some added damage resistance; losing the back of one boom isn't immediately fatal. It's powered by a 1,250-horsepower engine, about the largest single engine we can expect to find in the world of Skypirates, and has a maximum takeoff weight of about 9,000 pounds. (The version posted is overweight, and needs to be slimmed down.) With rather a lower power-to-weight ratio, it only reaches about 320 miles per hour, significantly slower than the Kestrel. Although its gun armament is lighter than the Kestrel's, it makes up for that loss in firepower by mounting several racks for air-to-air and air-to-ground rockets. Its flight characteristics befit its character: rugged and dependable, with very few surprises, although it does have a tendency to stall the lower wing in tight, low-speed turns.

The Vulture is probably the one whose looks most closely match its intended purpose. A light bomber and ground-attack plane, the Vulture is the usual aircraft of choice when Inconstant needs to make a show of force. Its unusual design gives it a great deal of lift for its footprint, and permits all of its hardpoints to be placed along the same axis as its center of mass: dropping weapons doesn't change its balance at all, making it a forgiving platform when carrying large weapons. The centerline mount supports an aerial torpedo, but only when the plane is air-launched—aerial torpedoes are too long otherwise. (Note that Inconstant doesn't carry Vultures equipped with landing gear.) To my surprise, the Vulture's handling is docile in the extreme, even when fully loaded, and turns downright peppy when empty, even though it only sports a 1,000-horsepower engine. I ran into no surprises anywhere in the envelope.

The Gorcrow, powered by a pair of 700-horsepower engines, is a conventional medium bomber, with all that implies. Its handling is ponderous, but it can sling a heavy load of bombs or rockets, or three aerial torpedoes, making it Inconstant's heaviest hitter by far. Three gun positions, one at the back of each engine nacelle, and one atop the fuselage, round out its weapon fit. Again, an unsurprising performer—not sprightly, and predictable in its handling. Unlike the other aircraft on the list so far, its bringback weight is somewhat less than its full-fuel empty weight. Inconstant being fairly light on avgas stores, her Gorcrows are generally only launched when absolutely necessary, to avoid having to dump fuel overboard before landing. The in-universe version has a glazed nose, but I haven't figured that out yet.

The Albatross, powered by two 800-horsepower engines, is a long-range transport aircraft, and also one of my favorites for its sheer unlikeliness. Although Herren Blohm und Voss built similar aircraft for the Luftwaffe during the Second World War, I was a little concerned that the flight engine wouldn't handle it well, given the presumably-complicated aerodynamics at play. To my surprise, it worked fine, and isn't even particularly touchy. Anyway, the 1,600 combined horsepower pushes her to a good turn of speed when empty, nearly as fast as the Falcon, and pegs her total cargo capacity at just over four tons. The asymmetry does mean she has some slight balance concerns, but in-universe, it's easily trimmable. Low-speed handling is good, thanks to the fat wings. Even with the asymmetric nature of the pitching and yawing forces, owing to the offset position of the empennage, she has surprising maneuverability when empty. Same remark about the glazed nose.

Now, I didn't even get into the built-in challenges, or into serious modding. I was just messing around, and in the course of learning how to build airplanes, building these, and coming up with my flight reports, I got more than my $10 of fun. I also got at least $10 of storytelling value out of it: I now have a better sense of each of these planes' quirks and flight characteristics than I did before, and I can work that into stories.

If you’re looking for a plane construction sandbox, look no further.

Fishbreath’s Zeppelin-Borne Aircraft Construction Rules for SimplePlanes

  1. Airframes should range between about 3 tons and 12.5 tons full load.
  2. Aircraft must be shorter than 70 feet and have a wingspan less than 110 feet.
  3. No single engine may develop more than 1,250 horsepower.
  4. Aircraft must have a bringback-weight stall speed of 110 mph or less. (The other 20-30 mph to get down to zeppelin speed is assumed to come from flaps.)

On tafl: what next?

In my last tafl post, I remarked that I’m nearing the end of my current OpenTafl sprint. Now, it’s wrapped up. Here’s what I finished:

  1. Write about 70% of a new evaluation function. It’s designed to easily accommodate tweaks and modifications, which will undoubtedly be required as I learn more about the game and uncover positions that the current function evaluates incorrectly.
  2. Move OpenTafl into its own repository, so I can publish the source.
  3. Package and test OpenTafl for binary distribution.
  4. Rewrite exploration to generate and sort moves prior to generating states.

All four of these are done, and OpenTafl downloads and source releases can be found at the Many Words Softworks site for OpenTafl. Play and enjoy. For questions, comments, or bug reports, please leave a comment, or drop me an email. (My contact information is in the About section on the main page.)

Here’s what’s on the list for the next sprint:

  1. Support for easy position entry. This will require creating some sort of short-form positional notation, akin to FEN from the chess world.
  2. External AI support, a la several chess engines. This will require defining a protocol by which OpenTafl can communicate with an external AI, including communicating things like the rules of the variant in question.
  3. Partial time control, so that OpenTafl can be limited to a certain number of seconds per turn.
  4. Some search improvements and tweaks. The prior three features are important stepping stones to this one; they’ll let me test OpenTafl against itself by quick-play tournaments, and therefore verify what sort of impact my changes are having.
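To illustrate the first item, here's a hedged sketch of what a FEN-like positional notation for tafl might look like, with a toy encoder in Java. The scheme (letters for taflmen, digits for runs of empty spaces, slashes between ranks) and all names here are hypothetical, not a notation OpenTafl has committed to.

```java
public class PositionNotation {
    // Hypothetical FEN-like encoding: 't' for an attacker, 'T' for a defender,
    // 'K' for the king, digits for runs of empty spaces, '/' between ranks.
    // (A real decoder would need a rule for two-digit runs on 11x11+ boards.)
    static String encode(char[][] board) {
        StringBuilder out = new StringBuilder();
        for (int rank = 0; rank < board.length; rank++) {
            int emptyRun = 0;
            for (char space : board[rank]) {
                if (space == '.') {
                    emptyRun++;
                } else {
                    if (emptyRun > 0) {
                        out.append(emptyRun);
                        emptyRun = 0;
                    }
                    out.append(space);
                }
            }
            if (emptyRun > 0) out.append(emptyRun);
            if (rank < board.length - 1) out.append('/');
        }
        return out.toString();
    }
}
```

With this toy scheme, a 3×3 board with an attacker above a king in the center column would encode as 1t1/1K1/3.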

As you can see, the next sprint is less about the nitty-gritty AI research and more about foundational work in developing OpenTafl as an engine for future AI research—my own and others. Ultimately, that’s a more important task.

On tafl: more optimizations

Having spent quite a lot of time on another code project, it’s high time I got back to a little tafl.

Remember where we left off: board states represented as arrays of small integers, and taflman objects inflated as needed during game tree search. It turns out that inflating and deflating taflmen is also a potential source of slowness, so I went ahead and removed that code from the game. (It also greatly simplifies the process of moving from turn to turn.) Wherever I would pass around the old taflman object, I now pass around a character data type, the sixteen bits I described in the previous tafl post.

This revealed two further slowdowns I had to tackle. The first, and most obvious, was the algorithm for detecting encirclements. For some time, I’ve known I would need to fix it, but I didn’t have any ideas until a few weeks ago. Let’s start with the old way:


for each potentially-surrounded taflman:
  get all possible destinations for taflman
  for each destination:
    if destination is edge, return true
return false

Simple, but slow. In the worst-case scenario, each taflman has to consider every other space on the board as a destination, looking briefly at every other space along each rank and file as it goes (where n is the number of taflmen and b is the board dimension):

O(n × b² × 2b) = O(nb³)

I was able to cut this down some by running an imperfect fast check: if, on any rank or file, the allegedly-surrounded side has the taflman nearest either edge, that side is not surrounded. That had implications for a better algorithm, and sure enough, I hit on one.


list spaces-to-explore := get-all-edge-spaces
list spaces-considered := []
for space in spaces-to-explore:
  add space to spaces-considered
  get neighbors for space not in spaces-to-explore and not in spaces-considered
  for each neighbor:
    if neighbor is occupied:
      if occupier belongs to surrounded-side:
        return false
      else continue
    else:
      add neighbor to spaces-to-explore
return true

More complicated to express in its entirety, but no more difficult to explain conceptually: flow in from the outside of the board. Taflmen belonging to the surrounding side stop the flow. If the flow reaches any taflman belonging to the allegedly surrounded side, then the allegedly surrounded side is, indeed, not surrounded. This evaluation runs in a much better amount of time:

O(b²)

Astute readers might note that, since the board dimension is never any larger than 19 at the worst, the difference between a square and a cube in the worst-case runtime isn’t all that vast. Remember, though, that we need to check to see whether every generated node is an encirclement victory, since an encirclement victory is ephemeral—the next move might change it. Running a slightly less efficient search several hundred thousand to several million times is not good for performance.
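For concreteness, here's a minimal Java sketch of the flood-fill check. The board representation is a stand-in (a plain integer array, positive for the surrounding side, negative for the allegedly surrounded side), not OpenTafl's actual state format:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Encirclement {
    // Sketch only: board[y][x] == 0 means empty, positive values belong to the
    // surrounding side, negative values to the allegedly surrounded side.
    // Returns true if the negative side is fully cut off from every edge.
    static boolean isSurrounded(int[][] board) {
        int dim = board.length;
        boolean[][] considered = new boolean[dim][dim];
        Deque<int[]> toExplore = new ArrayDeque<>();

        // Seed the search with every edge space: flow in from the outside.
        for (int i = 0; i < dim; i++) {
            toExplore.add(new int[] {0, i});
            toExplore.add(new int[] {dim - 1, i});
            toExplore.add(new int[] {i, 0});
            toExplore.add(new int[] {i, dim - 1});
        }

        int[][] offsets = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!toExplore.isEmpty()) {
            int[] space = toExplore.pop();
            int y = space[0], x = space[1];
            if (considered[y][x]) continue;
            considered[y][x] = true;

            if (board[y][x] < 0) return false; // flow reached the surrounded side
            if (board[y][x] > 0) continue;     // surrounding taflmen stop the flow

            for (int[] off : offsets) {
                int ny = y + off[0], nx = x + off[1];
                if (ny >= 0 && ny < dim && nx >= 0 && nx < dim && !considered[ny][nx]) {
                    toExplore.add(new int[] {ny, nx});
                }
            }
        }
        return true; // the flow never reached a surrounded-side taflman
    }
}
```

Each space enters the worklist at most once, which is where the O(b²) bound comes from.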

I found another pain point in the new taflman representation. The new, array-based board representation is the canonical representation of a state, so to find a taflman in that state, the most obvious move is to look through the whole board for a taflman matching the right description. Same deal as with encirclement detection—a small difference in absolute speed makes for a huge difference in overall time over (in this case) a few tens of millions of invocations. I fixed this with a hash table, which maps the 16-bit representation of a taflman to its location on the board. The position table is updated as taflmen move, or are captured, and copied from state to state so that it need be created only once. Hashing unique numbers (as taflmen representations are) is altogether trivial, and Java’s standard HashMap type provides a method for accessing not only the value for a given key, but also a set of keys and a set of values. The set of keys is particularly useful. Since it’s a canonical list of the taflmen present in the game, it saves a good bit of time against looping over the whole of the board when I need to filter the list of taflmen in some way.
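A sketch of the position-table idea, with illustrative names rather than OpenTafl's actual code, assuming taflmen are the 16-bit char values described above and spaces are integer board indices:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class PositionTable {
    // Sketch: maps a taflman's 16-bit representation (held in a char) to its
    // index on the board, replacing a linear scan of the board array.
    private final Map<Character, Integer> positions = new HashMap<>();

    void place(char taflman, int space) { positions.put(taflman, space); }

    void move(char taflman, int destination) { positions.put(taflman, destination); }

    void capture(char taflman) { positions.remove(taflman); }

    // O(1) lookup instead of scanning the whole board.
    int locate(char taflman) { return positions.get(taflman); }

    // The key set doubles as a canonical list of taflmen still in the game.
    Set<Character> taflmen() { return positions.keySet(); }

    // Copied from state to state, so the table need be built only once.
    PositionTable copyForSuccessor() {
        PositionTable copy = new PositionTable();
        copy.positions.putAll(positions);
        return copy;
    }
}
```

Since each taflman's 16-bit representation is unique, the char itself is a perfectly good hash key, and keySet() yields the canonical taflman list for free.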

Those are all small potatoes, though, next to some major algorithmic improvements. When we last left OpenTafl, the search algorithm was straight minimax: that is, the attacking player is the ‘max’ player, looking for board states with positive evaluations, and the defending player is the ‘min’ player, looking for the opposite. Minimax searches the entire game tree.

Let’s look at an example, a tree of depth 3 with max to play and a branching factor of 3, for 9 leaf nodes. The leaf nodes have values of 7, 8, and 9; 4, 5, and 6; and 12, 2, and 3. We can express the minimax perspective on this tree mathematically:

max(min(7,8,9), min(4, 5, 6), min(12, 2, 3))

That is to say, max makes the move which gives min the least good best response. Remember, though, that we search the tree depth-first, which means that in the formula above, we move from left to right. This leaves us with some room for cleverness.

Consider this:

max(min(7,8,9), min(4, x, y), min(12, 2, z))

Does it matter what the numbers at x, y, and z are? No: by that time, we’ve already established that the subtrees in which they appear (with best possible values 4 and 2) are worse than the first subtree (with known value 7). Therefore, we can stop exploring those subtrees. This is known as ‘pruning’.

Pruning is good, because, when done correctly, it saves us work without dropping good moves. Alpha-beta pruning, the kind described above, is known to produce identical results to minimax, and, in the ideal case, can search twice as deep. (So far, it’s only given me one extra ply, though; the case isn’t quite ideal enough for more.) Other kinds of pruning may find their way in later, and will take some more careful thinking to avoid pruning away potentially good branches.
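To make the pruning concrete, here's a self-contained Java sketch of alpha-beta over explicit tree nodes, separate from OpenTafl's actual search code. Run on the example tree above, it returns 7 and evaluates only six of the nine leaves; the three it skips are exactly x, y, and z.

```java
public class AlphaBeta {
    static int leavesEvaluated = 0; // counts leaf visits, to show pruning at work

    static class Node {
        final int value;        // meaningful only at leaves
        final Node[] children;  // null at leaves
        Node(int value, Node[] children) { this.value = value; this.children = children; }
    }

    static Node leaf(int value) { return new Node(value, null); }
    static Node inner(Node... children) { return new Node(0, children); }

    static int alphaBeta(Node node, int alpha, int beta, boolean maxToPlay) {
        if (node.children == null) {
            leavesEvaluated++;
            return node.value;
        }
        int best = maxToPlay ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : node.children) {
            int score = alphaBeta(child, alpha, beta, !maxToPlay);
            if (maxToPlay) {
                best = Math.max(best, score);
                alpha = Math.max(alpha, best);
            } else {
                best = Math.min(best, score);
                beta = Math.min(beta, best);
            }
            // The window closed: the other player will never allow this line.
            if (alpha >= beta) break;
        }
        return best;
    }
}
```

Starting the search with the widest possible window (alpha at negative infinity, beta at positive infinity) makes it exactly equivalent to minimax, just with the pointless subtrees skipped.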

What is the ideal case, then? We want to explore the best moves first. A cursory examination of the example case above provides the reasoning. If we explore the subtrees in opposite order, we can do no pruning, and we get no speedup. This presents us with a chicken-and-egg problem: if we could easily and programmatically determine which moves are good, we wouldn’t need to bother with variants on full-tree search at all.

Fortunately, the structure of a tree gives us an easy out. Most of the nodes in any given tree are at the bottom. Consider a simple example, with a branching factor of two. At the first level, there’s one node; at the next, two nodes; at the third, four nodes; and at the fourth, eight nodes. In every case, there are more nodes at the deepest level than at all the levels above combined. The difference becomes more pronounced with greater branching factors: a tree with a branching factor of 30 (which approximates brandub) has a distribution of 1/30/900/27,000. So, it follows that the large majority of the work in a game tree is expanding the game states at the target search depth, and evaluating those states; the work higher up the tree is almost trivial.

Take that and add it to this claim: the best move at any given depth is likely to have been one of the best moves at the immediately prior depth, too. We can use those facts together to figure out the best ordering for our moves at our target depth: search to the depth prior, keep track of the best moves at that depth, and search them first. This approach is called ‘iterative deepening’: generally, you start at depth 1, and use results from previous levels to order each node’s children. OpenTafl accomplishes this with a structure I’m calling a deepening table, which is generated for each search. The deepening table is a list of maps, one per depth. The maps relate a position hash to an evaluation done at its depth. On each deepening iteration, the exploration function generates successor states, then sorts them according to the data stored in the deepening table before exploring further. (There’s some easy performance left on the table here—since creating a full-on successor state is a non-trivial operation, I can probably buy myself some easy speedup by generating moves alone, sorting those based on the positional hashes which would result from making those moves, and only generating states immediately before exploring them. This would probably save a good bit of time by never generating states which would have otherwise been cut off by alpha-beta pruning.)
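As a sketch of the deepening-table structure (hypothetical names and a simplified interface, not OpenTafl's implementation), assuming 64-bit positional hashes:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeepeningTable {
    // Sketch: one map per depth, relating a position hash to the evaluation
    // recorded for that position on the previous deepening iteration.
    private final List<Map<Long, Integer>> tablesByDepth = new ArrayList<>();

    void store(int depth, long positionHash, int evaluation) {
        while (tablesByDepth.size() <= depth) tablesByDepth.add(new HashMap<>());
        tablesByDepth.get(depth).put(positionHash, evaluation);
    }

    // Unknown positions sort last, so fresh nodes come after known-good ones.
    int lookup(int depth, long positionHash) {
        if (depth >= tablesByDepth.size()) return Integer.MIN_VALUE;
        return tablesByDepth.get(depth).getOrDefault(positionHash, Integer.MIN_VALUE);
    }

    // Order successor hashes best-first, using the evaluations recorded at
    // this depth on the previous, shallower iteration.
    void sortBestFirst(int depth, List<Long> successorHashes) {
        successorHashes.sort(
            Comparator.comparingInt((Long h) -> lookup(depth, h)).reversed());
    }
}
```

Sorting by hash also meshes with the parenthetical above: the hashes can be sorted before any successor states are actually built.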

Implementing the deepening tables put me on the scent of another major optimization, the transposition table. A transposition table stores values for previously-visited states, along with the depth each state was previously searched to. (If we’re searching six plies deep and previously encountered a state at depth four, that state was only searched to a depth of two. If we encounter the same state at depth two, then if we use the cached value, we’ve left two plies of depth on the table.) Hits in the transposition table early on in the search save us expanding whole subtrees, which yields many fewer node expansions required per search, and lets us reach deeper more easily.

The transposition table, since it persists through the entire game, must be limited in size by some mechanism. OpenTafl's transposition table is an array of Entry objects, with a fixed size in megabytes. The array is indexed by positional hashes, modulo the size of the table. When indices collide, OpenTafl decides whether to replace the old value based on its age (older values are more readily discarded) and the depth to which it is searched (deeper searches are kept, if both are equally fresh).
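A minimal sketch of such a fixed-size table, with the replacement policy described above; the names and details are illustrative rather than OpenTafl's actual code:

```java
public class TranspositionTable {
    // Sketch of a fixed-size, array-backed transposition table.
    static class Entry {
        final long hash;   // full hash, to tell a real hit from an index collision
        final int value;   // cached evaluation
        final int depth;   // how deeply this position was searched
        final int age;     // search number when stored; older entries are staler
        Entry(long hash, int value, int depth, int age) {
            this.hash = hash; this.value = value; this.depth = depth; this.age = age;
        }
    }

    private final Entry[] entries;

    TranspositionTable(int size) { entries = new Entry[size]; }

    private int indexFor(long hash) {
        return (int) Math.floorMod(hash, (long) entries.length);
    }

    void put(long hash, int value, int depth, int age) {
        int index = indexFor(hash);
        Entry old = entries[index];
        // Replace if the slot is empty, the old entry is older, or it is
        // equally fresh but was searched less deeply.
        if (old == null || old.age < age || (old.age == age && old.depth <= depth)) {
            entries[index] = new Entry(hash, value, depth, age);
        }
    }

    // Returns the cached entry, or null on a miss or an index collision.
    Entry get(long hash) {
        Entry entry = entries[indexFor(hash)];
        return (entry != null && entry.hash == hash) ? entry : null;
    }
}
```

Storing the full hash in each entry is what lets the lookup distinguish a genuine hit from two different positions landing on the same array slot.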

That brings me to the end of another marathon tafl post. Stay tuned for another one—probably a much shorter one—in the not-too-distant future, as I wrap up work on this sprint of OpenTafl, with an eye toward the first public code and source release.

On tafl: practical difficulties

In the past, I’ve written about the problem of tafl AI mainly in terms of its theoretical challenges. Those challenges remain, but can be mitigated with clever algorithms. (That work is in progress.) The theoretical challenges, however, are only half of the issue—and, I’ve been thinking, they may not even be the big half.

Consider go AI. Go is widely regarded as computationally intractable for traditional AI techniques. As such, go AI fits one of two patterns: probabilistic and machine-learning approaches, and knowledge-based approaches. The former is interesting in its own right, but doesn’t bear on our problem. OpenTafl is not currently built with an eye toward probabilistic or machine-learning techniques1. Rather, we’ll look at the more traditional school of go AI. The knowledge-based approach takes expert knowledge about go and applies it to traditional AI techniques. Rather than exhaustively searching the game tree—something which becomes intractable very quickly, given the branching factor in go—a knowledge-based AI applies heuristics and rules of thumb at each board state to prune uninteresting branches, and reduce the branching factor sufficiently so that the pruned tree can be explored to a useful depth.

How does one get this knowledge? One plays go. Knowledge-based AIs are generally written by good go players who are also programmers. The player/programmers use their domain knowledge to design sets of guidelines for deep exploration which work well, and to decide which sorts of starting states each guideline can be applied to. Though chess AIs deal with a much more tractable tree-search problem, they also use some knowledge-based augmentations to increase depth around interesting series of moves. Chess knowledge is a little different than go knowledge, however; there are formal answers to a large number of chess questions. Piece values are well-known, as are the pawn-equivalent values of certain important board control measures like pawn structure and rooks on open ranks.

Both of the best-studied games in AI, then, use some domain knowledge to push the search horizon deeper, whether by encoding the sorts of heuristics a certain player uses, or by encoding formal knowledge about the game itself. Chess goes further by packing in databases of solved endgames and good openings, which increases the effective search depth at interesting parts of the game2. Let’s think about applying these tactics to tafl.

Can we work like we might in go, using expert player knowledge to inform an AI? Yes, technically, but there’s a huge obstacle: tafl games are a tiny niche in the global board game market. My rough estimate is that there are perhaps a few thousand serious players. I wouldn’t be very surprised if fewer than ten thousand people have ever played a full tafl game, whereas I’d be highly surprised if more than a hundred thousand have. Writing about tafl tactics is therefore in its infancy5. There’s very little in the way of reading material that an average player like me could use to develop more in-depth knowledge, and, as far as I know, there are no other tafl players with a computer science background who are working on the problem of tafl AI.

Can we work like we might in chess, then? The answer here is simple: a categorical no. The amount of formal knowledge about tafl games is even smaller than the amount of non-formal knowledge about them6. Endgame databases are infeasible7, and nobody has carried out a study of good openings for various taflman arrangements on boards of various sizes. Beyond that, we can't formally answer questions about even the most elementary pieces of tafl strategy. In chess, I can say with some certainty that a rook is worth about five pawns. Given that favorable board position and pawn structure are only worth about a pawn or a pawn and a half, if I'm going to trade my rook for good board position, I should try to get at least a minor piece (a knight or bishop) and a pawn out of the bargain, too. Tafl is a much more positional game, which makes questions like this harder to pose, but an analogue might be, "Is it worth it to move my king from c3 to make a capture and set up another, or should I hold the good position?"

Here’s another question we don’t have a good answer to: what is the value, in terms of win percentage, of an armed king captured on four sides (a strong king) against an armed king captured on two (a weak king)? Certainly, we know that an armed king is an extremely powerful piece in material terms. The king’s side has a nearly 2-1 disadvantage in material, but in many common 11×11 variants with a strong armed king, he wins at about a 2-1 rate in games between expert players8. On the other hand, 9×9 Sea Battle, a variant with an unarmed strong king escaping to the edge and no hostile central square, is still biased about 2-1 in wins toward the king’s side. On the other other hand, 9×9 Tablut, the variant on which all other variants are based9, features an armed strong king with a corner escape and a hostile central square, and is approximately as evenly balanced as chess. We can’t even answer questions about a king’s worth, or about the shift in balance provided by various rules tweaks.

So, the formal approach, for now, is out, and we're left with knowledge-based approaches. This presents me with a variant of the chicken-and-egg problem. I have one chicken (a tafl engine), but I lack a rooster (a strong AI) with which to produce a viable egg (answers to interesting questions). The task before me is this: use my mad scientist's toolkit (my brain, my passing familiarity with tafl games) to cobble together some sort of Franken-rooster that gets my chicken to lay eggs, which will in turn produce less monstrous roosters.

Leaving aside tortured metaphors, it’s relatively easy to get an AI to a moderate level of play. All you have to do is apply some well-known algorithms. On the other hand, getting it past that moderate level of play does require knowledge, and the knowledge we have is not yet sufficient to do that. Hopefully my work on this topic will push us further down that road.

1. I’ve been writing OpenTafl with the intention that it be relatively easy to swap in different kinds of AI, both so that I can continue my research into tafl AI, and so that others can contribute. I suppose it may also find a place as a teaching tool, but that seems more unlikely.
2. Consider: at the start of the game, you can pick a common opening line, assume the other player will follow it to the end, and search from there, until the other player goes off-book. At the end of the game, you can search down to positions with a number of pieces—chess programs usually store a database of all endgames with five pieces, these days3—from which point you’re assured of perfect play. In either case, your effective search depth is your actual search depth plus the amount of information in the database.
3. Storage is the limiting factor. A five-piece endgame database in the widely-accepted, heavily-compressed standard format is about 7 gb. There's a chess program out there called Shredder which manages to pack it into 150 mb, two percent of the standard format, by removing any information besides whether the position is eventually won or eventually lost, and playing along those lines4. A six-piece database takes 1.2 tb in the standard form, which would be 24 gb in the Shredder format. The seven-piece tables come to between 100 tb and 150 tb (the range I've seen across a few sources), which would take between two and three terabytes. Not really the sort of thing you'll find in Chessmaster 2016.
4. It seems to me that without a native ordering, this method may take more moves to come to a winning position, but I suppose you can impose some order on them by adding ‘how long’ to the won or lost evaluation function, without taking up much more space.
5. I’m aware of only one place with any real detail, Tim Millar’s site.
6. Which isn’t a value judgement, mind. Go knowledge is nonformal, and in many ways seems largely aesthetic, but does eventually lead to wins, which makes it valuable, if hard to use from a game theory perspective.
7. Calculating the chess endgames with 7 pieces on an 8×8 board took a significant amount of time on a supercomputer in Moscow in 2012. On a 9×9 or 11×11 board, a tafl endgame with 7 taflmen is trivial—games usually end well before the amount of material on the board gets down to the amounts where enumeration of endgames is possible.
8. I’m reminded of my introduction of tafl to my residence hall my senior year of college. The initial feeling was that the besieging side had a massive advantage, but as we developed as players, we realized the advantage was much more on the king’s side.
9. Linnaeus, on his expedition to Lapland, observed the game being played and recorded the rules, thus saving tafl games for the ages.

Four of the oldest warships in active service, as of November 2015

I read earlier today that the US Navy's new SSBN class is expected to serve until the 2080s. I wondered whether that was even remotely plausible. As American ships go, USS Kitty Hawk had a good run of it, hitting almost 50 years. I couldn't find any American examples with a longer service life still in commission and in active service today, but it turns out there are some out there. Here are four of the oldest warships in active service, by original commissioning date.

#3 – BAP Almirante Grau, formerly De Ruyter, Dutch-built cruiser in Peruvian service, November 18, 1953
Almirante Grau was laid down in 1939 by the Dutch, and launched in 1941 by the Nazis, so by that standard, she is indeed the oldest actual warship on this list. She’s also the most functional: a major refit between 1985 and 1988 gave her then-modern sensors and decoys, Otomat AShMs, and OTO Melara rapid-fire guns in place of her old Bofors mounts.

#2 – ROCS Hai Shih, formerly USS Cutlass, Tench-class submarine in Taiwanese service, March 17, 1945
She deserves extra acclaim because she's apparently still a reliably-submersible submarine built in the closing stages of the Second World War; she saw an actual war patrol in her days as Cutlass. She was transferred to the Taiwanese Navy in 1973, and has been in active service since. Her sister ship, ROCS Hai Pao, was commissioned in 1946 and transferred in 1976. They serve primarily as training ships and aggressors, and, incredibly, are still cleared to submerge 70 years after their commissioning.

#1 – BRP Rajah Humabon, formerly USS Atherton, Cannon-class destroyer escort in Philippine service, July 26, 1943
The Philippine Navy is the oldest navy, on average, in the world; seven members of the Rizal and Miguel Malvar classes also date to before Hai Shih, and the Philippine Navy had two more Cannon-class ships before storms and whatnot sank them. Rajah Humabon is rather light on capabilities these days. Her ASW fit was removed due to lack of spare parts for Second World War-era sonars and depth charges; her gun director is no longer present; her weapons fit is otherwise exactly the same as in 1943.

Honorable Mention – U17 Parnaíba, Brazilian river monitor, March 9, 1938
After a brief huddle with parvusimperator, we decided that a river monitor is not a real warship, and doesn’t count. That said, Parnaíba is the oldest armed ship I was able to find in service with a navy, and deserves a spot on the list. She was commissioned before the next-oldest ship on the list, Almirante Grau, was laid down. She’s also definitively the oldest warship in service built by a yard in the country in which she currently serves, likely by at least two decades.

There you have it. A 65-year service life, as the Navy is proposing for the SSBN(X) class, isn't impossible, but it does seem highly suspect. All of these vessels were state of the art when they were built; the only one I wouldn't instantly designate for scrapping is Almirante Grau, and even with her modernizations, she probably isn't worth the upkeep. 65 years from now, will the Navy's new boomer be any different?

Fishbreath Plays: MechWarrior Online

The Battletech Kickstarter kicked me back into playing some MechWarrior Online. I ought to note that, on the whole, I like it. I don’t think it quite hits the heights that previous MechWarrior games hit, but it’s a perfectly serviceable stompy robots game in terms of gameplay. That isn’t what this post is about, though.

No, this post is a rant.

Let’s talk about the grind. Oh, the grind. That necessary evil in free-to-play games, right? Yes, but this one is particularly egregious. Fans of the game may try and tell you it’s less grindy than World of Tanks, say, or World of Warships. They are lying to you. Let us be perfectly clear. There is precisely one type of player for whom MechWarrior Online is less grindy than the Wargaming.net games: high-level competitive players. World of [insert vehicle] does indeed require more out of you than MechWarrior Online does if you want to experience what counts as the endgame. I would posit that these players, though, make up such a tiny proportion of the playerbase that their opinions are basically meaningless. Let’s get down to brass tacks with a comparison.

I’m a fairly casual player in World of Warships, and a fairly casual player in MechWarrior Online. As such, my goal in the former is to have a couple of historical Second World War vessels, which occupy (generally) the top half of the tech trees. I need to get to about tiers 4-6. I average about a thousand experience per game, or more if I’m playing primarily for first-win-of-the-day. To go from tier 5 to tier 6 costs about 30,000 experience, which comes to about 30 games. (Going from tier 1 to tier 5 is about that difficult too, if I recall correctly.) It takes me about 60 games—certainly, no more than 75 on a bad streak, and no more than 100 on a horrid streak. So far, I’ve never had to grind for money; in the course of getting through my 60-100 games to hit the next tier, I’ve made enough to buy my way up.
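The World of Warships side of the comparison is simple enough to sketch; here it is as a back-of-the-envelope calculation in Python, using the rough per-game and per-tier figures above (averages, not exact game data):

```python
# Back-of-the-envelope World of Warships grind, using the figures above.
XP_PER_GAME = 1_000        # rough average earnings for a casual player
XP_TIERS_1_TO_5 = 30_000   # roughly the cost of tiers 1 through 5 combined
XP_TIER_5_TO_6 = 30_000    # approximate cost of the tier 5 to tier 6 step

games_to_tier_6 = (XP_TIERS_1_TO_5 + XP_TIER_5_TO_6) // XP_PER_GAME
print(games_to_tier_6)  # 60 games on an average streak
```

Sixty games to reach the historical-ship tiers, with money taking care of itself along the way.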

In MechWarrior Online, as a fairly casual player, my goal is to build up a stable of mechs of various sizes and roles I can switch between as the mood strikes me. There are two obstacles here. First, earnings: I make about 80,000-100,000 c-bills per match in MechWarrior Online. A single light mech chassis costs about 2,000,000 c-bills. (A little less for 20-ton mechs, a little more for 35-ton mechs, a whole lot more for mechs which can mount ECM—note that ECM capability usually makes a given mech variant the most desirable of its chassis.) A medium mech costs about 3,750,000 c-bills. You’ll spend between 5,000,000 and 6,000,000 for a heavy, and 7,500,000 to 10,000,000 for an assault.

You begin to see the magnitude of the problem. Buying a light, a medium, a heavy, and an assault chassis takes nearly 20 million c-bills and a shade under 200 games. Outfitting a mech can sometimes double that cost, especially if, as a new player, you don't have a bunch of weapons and equipment sitting around your mech lab. We're already up to about a 400-game grind to buy and outfit four individual mechs. That's a big time sink.

It also isn't the end of it. When you buy a mech, you unlock skill trees for that mech. Consider this: I earn about 800 XP per match. If I don't sink about 30,000 experience into each variant, that variant is between 10% and 50% worse in a variety of extremely important performance measures (speed, heat capacity, turn rate, and more) than the same variant in the hands of someone who has done the grind. That's a 40-game grind in each mech after you've acquired it. I'll grant you, that comes out to less than you need to acquire the mechs (a mere 160 games), but that isn't the whole story.

You see, the mech skill trees come in three tiers. To unlock each tier, you need to finish the tier beneath it… on three separate variants of a given chassis. So, you don’t need 400 games and 40,000,000 c-bills to buy and outfit four mechs. You need 120,000,000 c-bills and 1200 games to grind out the requisite twelve variants to avoid being a gigantic drag on your team. More than 400 of those games will be played in mechs that are arbitrarily handicapped in an attempt to get you to buy premium time.
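The MechWarrior Online arithmetic, sketched the same way, using the average figures quoted above (real earnings and prices vary by match and by mech):

```python
# MechWarrior Online grind, using the averages quoted above.
CBILLS_PER_GAME = 100_000       # high end of casual per-match earnings
XP_PER_GAME = 800               # average experience per match
SKILL_XP_PER_VARIANT = 30_000   # experience needed to skill out one variant

variants = 12                   # three variants each of four chassis
cost_per_variant = 10_000_000   # purchase plus outfitting, on average

total_cbills = variants * cost_per_variant
earning_games = total_cbills // CBILLS_PER_GAME
skill_games = variants * SKILL_XP_PER_VARIANT // XP_PER_GAME

print(total_cbills)   # 120000000 c-bills
print(earning_games)  # 1200 games just to earn the money
print(skill_games)    # 450 games' worth of skill grinding along the way
```

The skill grind at least happens inside the earning grind, but it's 450 games of that 1200 played in deliberately handicapped mechs.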

So no, Wargaming.net’s games are not more grindy than MechWarrior Online for the average player. Basically, if you don’t get summers off, you’re going to have to spend money if you want to fill out your mech stable, and a lot of money, at that. I leave gawking at the prices as a trivial exercise for the reader.

Some anti-patterns

Another college pal, a fellow traveler in the world of the computer sciences, wrote up this list for me. Call him Shenmage, and encourage him to set up the contributor’s account I made him here, the lazy bum. -Fish

As a software engineer, I tend to come across code bases of various levels of quality. To make light of some of the oddities, some of my co-workers and I started writing up the logical conclusions of the code structures we encountered. Each pattern below was encountered to some degree or another (though not necessarily as anything beyond a single method). Enjoy, and feel free to add your own in the comments section.

Walmart Pattern

  • Has everything you could ever want
  • You can’t find anything
  • Sometimes you have to get multiples of something when you only want one

Sam’s Club Pattern

  • See Walmart Pattern
  • Can only get things in bulk (no single entities)

Titanic Pattern

  • Everything is well structured and coded to the dot
  • Only has a manual process for recovering the system (with potentially catastrophic consequences)

Power of Attorney Pattern

  • Pass SQL commands directly to a web service to get executed

Lottery Pattern

  • Retrievals are randomly generated
  • Only occasionally get what you want

Tardis Pattern

  • Alters history of an object
  • Does more actions than requested
  • Doesn’t do anything exactly as you request

Leaning Tower of Pisa Pattern

  • Perfectly structured but tightly coupled to outdated tech

Starbucks Pattern

  • More identical web services than you need
  • Expensive computationally

Monopoly Pattern

  • Multithreaded, but one thread eventually eats up all available resources

Glass House Pattern

  • Security on a system was completely ignored on critical components

Magic 8 Ball Pattern

  • Calls return only boolean values
  • ‘Maybe’ is included as a boolean value
  • Multiple ways to say each value

Speakeasy Pattern

  • Webservice is completely undocumented, so you have to know precisely what to send where to use it

Narcissus Pattern

  • Class depends upon itself and uses itself to accomplish tasks

Lazy Inspector Pattern

  • Methods that should do tasks instead return true

Scorched Earth Pattern

  • Update method drops and recreates table

“What’s in the box?!” Pattern

  • Large, untyped, container objects are the only objects passed around the system

OCD ORM Pattern

  • Verifies all retrievals by retrieving again
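These are tongue-in-cheek, but most are recognizable in the wild. A minimal sketch of two of them, the Magic 8 Ball and the Lazy Inspector; every name here is invented for illustration:

```python
import random

# Magic 8 Ball Pattern: calls return only "boolean" values, with 'maybe'
# included as one of them and multiple ways to say each value.
def record_exists(record_id):
    # Callers end up writing their own truth tables to decode this.
    return random.choice([True, "yes", 1, "maybe", False, "nope"])

# Lazy Inspector Pattern: a method that should do a task instead returns true.
def validate_and_save(record):
    # No validation and no saving actually happen here.
    return True
```

Callers of `validate_and_save` simply never find out that their data was dropped on the floor.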

Luchtburg Picks A Carbine

The Luchtbourgish procurement apparatus has been slothful over the last year—the members of the Procurement Board have made their preferences clear on a number of challenges, but the secretary has yet to finish typing up most of the proposals. He’s hammering one out right now for you.

The Luchtbourgish Individual Carbine Competition has a few extra constraints imposed by Luchtburg's defense priorities. One: Luchtburg has a vast stockpile of 7.62x39mm ammunition left over from its time as a Russian client state. Any proposal for a weapon chambered for a different cartridge will have to account for the price of acquiring new ammunition stockpiles, and for new infantry weapons to replace the other 7.62x39mm arms in service. Two: Luchtburg is a jungle country, and a heavy bullet is desirable1. Three: Luchtburg's land army is small2, and so the price of an individual rifle is less important than it might be otherwise.

With those constraints in mind, I can easily eliminate 5.45x39mm and 5.56 NATO. Neither is a bad cartridge, and both are perfectly acceptable choices. They are not, however, the right choice for us. Disposing of 7.62×39 to acquire a new cartridge would be expensive—conservatively, the cost of a modern corvette out to a ten-year horizon, and probably another FREMM over the 25-year lifetime of the Procurement Games. It would also violate another constraint: jungle effectiveness. I can also eliminate full-size rifle cartridges, for the same reasons that parvusimperator does. I'll leave the takedown to his post. (Look back through the militariana tag for the post about SCHV rounds.)

So, that leaves me with the intermediate intermediate cartridges, if you will: 7.62×39, the great granddaddy of the field; the modern American contenders, 6.5 Grendel, 6.8 SPC, and .300 Blackout; and lesser-used wildcats. The latter class is right out on production scale grounds. We’ll be buying, at the least, several hundred million cartridges, and sorting out production at the same time as a new rifle is not something Luchtburg wants to do.

The modern American contenders present more interesting problems. I'm a huge fan of 6.5 Grendel based on its ballistics, and of .300 Blackout and 6.8 SPC based on larger bullets and similar magazine capacities to 5.56, along with specialty loadings for various purposes. The thing about .300 Blackout and 6.8 SPC is, I'm not sure that their main advantage over 7.62×39 is inherent. You could just as easily load 7.62×39 with a heavy, subsonic bullet for use with a suppressor, or load it with a lighter bullet and hotter powder for ballistics more similar to 6.8 SPC. I don't think it's quite possible to match Grendel, which is much less a compromise round than the other two American contenders. Generally speaking, though, I don't think that 7.62x39mm is less capable by design. It's less capable by less development. The expense of developing new loadings down the line is offset by not having to buy new training ammunition, or new squad automatic weapons3.

You may have noticed that I've rather biased the contest toward 7.62×39, and may additionally have noticed that this seems not to leave me with many good options: old AK variants, the AK-103, and (questionably) the AK-124. Russian-built arms, dependable but not generally known for their accuracy, are not a particularly good fit for Luchtburg's well-trained, well-supplied, well-maintained professional army. Fortunately, there is another contender, and it is the victor.

Enter the Swiss Arms5 SG 553R. A member of the SG 550 family, it's based off the current issue arm of the Swiss military, a pedigree that carries weight in the halls of Luchtbourgish government6. SIG/Swiss Arms is a large conglomerate, no stranger to handling large contracts, and is not Russian—a point in its favor when it comes to support and services. The design has been in service long enough to work out its kinks. As a bonus, it accepts AK magazines, meaning Luchtburg can dip into its stock of those, too. The short version is very short, and with the folding stock, is suitable for issue to vehicle crews and others who work in cramped spaces.

It does have some downsides—for one, we’ll probably want to pay for a longer-barreled version. The extant ‘long barrel’ version only has twelve inches of barrel length. We’ll probably want 16″, or maybe even a 20″ (although whether the squad marksman will also use 7.62×39 depends mainly on how our cartridge development project goes7). For another, it’s a precision-machined Swiss masterpiece. That kind of quality comes at a price. It’s hard to find contract price figures, but I’d expect to pay north of $1500 per rifle. Finally, there’s very little data on the SG 550-series in the sorts of terrain we’ll be using it most often: jungle, seaside, and aboard ships, none of which feature heavily in Switzerland’s landscape. It’s possible that those rather harsh conditions will reveal some flaws not otherwise known.

With all that being said, though, it’s a gun with very little downside for us: it isn’t as thrown-together as a Kalashnikov, so it costs a pretty penny, but the nice thing about small armies relative to defense spending is that they can afford to be well-equipped. The SG 553R is a modern rifle with a fine pedigree, and it’s the thing to take Luchtburg into the next 25 years.

I admit, this one was something of a foregone conclusion, given the constraints I imposed upon my choices, but that, I think, is a lesson in itself: procurement choices are ordinarily dictated by factors other than the raw quality of the platforms. (Else I might have ended up with SCARs.) We merely continue in that long tradition8.

1. Undoubtedly parvusimperator will quibble about the effectiveness of a fast, small bullet, but penetration of a barrier to hit something directly behind it is a very different game from penetration of a barrier to hit something 50 yards behind it. That’s the story in a jungle.
2. 75,000 rifles would cover every front-line combat formation, including vehicle crew, with about 10,000 to spare. 200,000 rifles would cover every reservist as well, with plenty of headroom.
3. Modernized PKMs will serve for now.
4. Izhmash would certainly sell them to me, but it’s unclear whether they’re vaporware or actually in testing right now.
5. Formerly SIG, but they’re currently organized as separate manufacturers. I think. The web of firearms manufacturer acquisitions and spinoffs is dizzying to untangle.
6. Several of the generals on the procurement board carry surplus K31s as hunting rifles.
7. You need speed and ballistic coefficient for that, and it’s hard to get both out of 7.62×39 at the same time.
8. Although my frigate choice was a lot more wide-open, as was parvusimperator’s carbine choice. It’s probably also true that your headline capabilities are the ones where you get to be a bit choosier.