In the past, I’ve written about the problem of tafl AI mainly in terms of its theoretical challenges. Those challenges remain, but can be mitigated with clever algorithms. (That work is in progress.) The theoretical challenges, however, are only half of the issue—and, I’ve been thinking, they may not even be the big half.
Consider go AI. Go is widely regarded as computationally intractable for traditional AI techniques. As such, go AI fits one of two patterns: probabilistic and machine-learning approaches, and knowledge-based approaches. The former is interesting in its own right, but doesn’t bear on our problem. OpenTafl is not currently built with an eye toward probabilistic or machine-learning techniques1. Rather, we’ll look at the more traditional school of go AI. The knowledge-based approach takes expert knowledge about go and applies it to traditional AI techniques. Rather than exhaustively searching the game tree—something which becomes intractable very quickly, given the branching factor in go—a knowledge-based AI applies heuristics and rules of thumb at each board state to prune uninteresting branches, reducing the branching factor enough that the pruned tree can be explored to a useful depth.
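To make the idea concrete, here’s a minimal Java sketch of knowledge-based pruning in a game-tree search: score each legal move with cheap rules of thumb, keep only the most promising few, and recurse on those. The GameState and Move types and the specific heuristics are hypothetical stand-ins for illustration, not OpenTafl’s actual code.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical game interfaces, standing in for a real engine's board code.
interface Move {}

interface GameState {
    boolean isTerminal();
    int evaluate();                      // static score, from the mover's point of view
    List<Move> legalMoves();
    GameState apply(Move move);
    boolean isCapture(Move move);
    boolean movesKingTowardCorner(Move move);
}

class KnowledgePrunedSearch {
    static final int MOVES_TO_KEEP = 10; // cap the branching factor at each node

    // Negamax search over a pruned move list: uninteresting branches are
    // discarded before they are ever expanded.
    int search(GameState state, int depth) {
        if (depth == 0 || state.isTerminal()) {
            return state.evaluate();
        }
        List<Move> candidates = state.legalMoves().stream()
                .sorted(Comparator.comparingInt((Move m) -> heuristicScore(state, m)).reversed())
                .limit(MOVES_TO_KEEP)
                .collect(Collectors.toList());
        int best = Integer.MIN_VALUE + 1;
        for (Move move : candidates) {
            best = Math.max(best, -search(state.apply(move), depth - 1));
        }
        return best;
    }

    // The "expert knowledge": cheap rules of thumb guessing which moves
    // deserve a deep look (captures, king progress toward a corner, and so on).
    int heuristicScore(GameState state, Move move) {
        int score = 0;
        if (state.isCapture(move)) score += 100;
        if (state.movesKingTowardCorner(move)) score += 50;
        return score;
    }
}
```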
How does one get this knowledge? One plays go. Knowledge-based AIs are generally written by good go players who are also programmers. The player/programmers use their domain knowledge to design sets of guidelines for deep exploration that work well, and to decide which sorts of starting states each guideline can be applied to. Though chess AIs deal with a much more tractable tree-search problem, they also use some knowledge-based augmentations to increase depth around interesting series of moves. Chess knowledge is a little different from go knowledge, however; there are formal answers to a large number of chess questions. Piece values are well-known, as are the pawn-equivalent values of certain important board control measures like pawn structure and rooks on open files.
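For a sense of what that formal knowledge looks like when written down, here is a small Java snippet using the standard textbook centipawn material values; the positional bonuses are representative figures rather than tuned ones, and, as the rest of this post argues, tafl has no equivalent table yet.

```java
import java.util.Map;

// Formal chess knowledge in its simplest form: material values in centipawns,
// plus pawn-fraction bonuses for board-control factors. The material values
// are the usual textbook ones; the positional numbers are only illustrative.
class ChessMaterialKnowledge {
    static final Map<Character, Integer> PIECE_VALUE = Map.of(
            'P', 100,  // pawn
            'N', 300,  // knight
            'B', 300,  // bishop
            'R', 500,  // rook: "worth about five pawns"
            'Q', 900); // queen

    // Board-control measures, worth fractions of a pawn.
    static final int ROOK_ON_OPEN_FILE_BONUS = 25;
    static final int DOUBLED_PAWN_PENALTY = -20;

    // Material total for one side, given its pieces as letters, e.g. "RNBQP".
    static int material(String pieces) {
        int total = 0;
        for (char piece : pieces.toCharArray()) {
            total += PIECE_VALUE.getOrDefault(piece, 0);
        }
        return total;
    }
}
```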
Both of the best-studied games in AI, then, use some domain knowledge to push the search horizon deeper, whether by encoding the sorts of heuristics a certain player uses, or by encoding formal knowledge about the game itself. Chess goes further by packing in databases of solved endgames and good openings, which increases the effective search depth at interesting parts of the game2. Let’s think about applying these tactics to tafl.
Can we work like we might in go, using expert player knowledge to inform an AI? Yes, technically, but there’s a huge obstacle: tafl games are a tiny niche in the global board game market. My rough estimate is that there are perhaps a few thousand serious players. I wouldn’t be very surprised if fewer than ten thousand people have ever played a full tafl game, whereas I’d be highly surprised if more than a hundred thousand have. Writing about tafl tactics is therefore in its infancy5. There’s very little in the way of reading material that an average player like me could use to develop more in-depth knowledge, and, as far as I know, there are no other tafl players with a computer science background who are working on the problem of tafl AI.
Can we work like we might in chess, then? The answer here is simple: a categorical no. The amount of formal knowledge about tafl games is even smaller than the amount of non-formal knowledge about them6. Endgame databases are infeasible7, and nobody has carried out any study of good openings for various taflman arrangements on boards of various sizes. Beyond that, we can’t formally answer questions about even the most elementary pieces of tafl strategy. In chess, I can say with some certainty that a rook is worth about five pawns. Given that favorable board position and pawn structure are only worth about a pawn or a pawn and a half, if I’m going to trade my rook for good board position, I should try to get at least a minor piece (a knight or bishop) and a pawn out of the bargain, too. Tafl is a much more positional game, which makes questions like this harder to pose, but an analogue might be, “Is it worth it to move my king from c3 to make a capture and set up another, or should I hold the good position?”
Here’s another question we don’t have a good answer to: what is the value, in terms of win percentage, of an armed king captured on four sides (a strong king) against an armed king captured on two (a weak king)? Certainly, we know that an armed king is an extremely powerful piece in material terms. The king’s side has a nearly 2-1 disadvantage in material, but in many common 11×11 variants with a strong armed king, he wins at about a 2-1 rate in games between expert players8. On the other hand, 9×9 Sea Battle, a variant with an unarmed strong king escaping to the edge and no hostile central square, is still biased about 2-1 in wins toward the king’s side. On the other other hand, 9×9 Tablut, the variant on which all other variants are based9, features an armed strong king with a corner escape and a hostile central square, and is approximately as evenly balanced as chess. We can’t even answer questions about a king’s worth, or about the shift in balance provided by various rules tweaks.
So, the formal approach, for now, is out, and we’re left with knowledge-based approaches. This presents me with a variant of the chicken-and-egg problem. I have one chicken (a tafl engine), but I lack a rooster (a strong AI) with which to produce a viable egg (answers to interesting questions). The task before me is this: use my mad scientist’s toolkit (my brain, my passing familiarity with tafl games) to cobble together some sort of Franken-rooster that gets my chicken to lay eggs, which will in turn produce less monstrous roosters.
Leaving aside tortured metaphors, it’s relatively easy to get an AI to a moderate level of play. All you have to do is apply some well-known algorithms: minimax tree search with alpha-beta pruning and a simple evaluation function. On the other hand, getting it past that moderate level of play does require knowledge, and the knowledge we have is not yet sufficient to do that. Hopefully my work on this topic will push us further down that road.
1. I’ve been writing OpenTafl with the intention that it be relatively easy to swap in different kinds of AI, both so that I can continue my research into tafl AI, and so that others can contribute. I suppose it may also find a place as a teaching tool, but that seems less likely.
2. Consider: at the start of the game, you can pick a common opening line, assume the other player will follow it to the end, and search from there, until the other player goes off-book. At the end of the game, you can search down to positions with a small number of pieces—chess programs usually store a database of all endgames with five pieces, these days3—from which point you’re assured of perfect play. In either case, your effective search depth is your actual search depth plus the depth of play already stored in the database.
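A rough Java sketch of the lookup-then-search flow described in this footnote; the OpeningBook, EndgameTable, and Searcher interfaces are hypothetical, not any particular engine’s API.

```java
import java.util.Optional;

// How a book and an endgame table extend effective depth: stored knowledge is
// consulted first, and the live search only runs when the position is out of
// book and still has too many pieces for the table. All interfaces here are
// hypothetical, for illustration only.
class BookAndTableEngine {
    interface OpeningBook  { Optional<String> lookup(String position); }
    interface EndgameTable { Optional<String> perfectMove(String position); }
    interface Searcher     { String bestMove(String position, int depth); }

    static final int TABLE_PIECE_LIMIT = 5;   // e.g. a five-piece tablebase

    private final OpeningBook book;
    private final EndgameTable table;
    private final Searcher searcher;

    BookAndTableEngine(OpeningBook book, EndgameTable table, Searcher searcher) {
        this.book = book;
        this.table = table;
        this.searcher = searcher;
    }

    String chooseMove(String position, int pieceCount, int searchDepth) {
        // Opening: as long as the opponent stays in a known line, the reply is
        // free, and the effective depth is the length of the stored line.
        Optional<String> fromBook = book.lookup(position);
        if (fromBook.isPresent()) {
            return fromBook.get();
        }
        // Endgame: few enough pieces means the table guarantees perfect play.
        if (pieceCount <= TABLE_PIECE_LIMIT) {
            Optional<String> fromTable = table.perfectMove(position);
            if (fromTable.isPresent()) {
                return fromTable.get();
            }
        }
        // Middle game: fall back to the ordinary fixed-depth search.
        return searcher.bestMove(position, searchDepth);
    }
}
```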
3. Storage is the limiting factor. A five-piece endgame database in the widely-accepted, heavily-compressed standard format is about 7 GB. There’s a chess program out there called Shredder which manages to pack it into 150 MB, about two percent of the standard format, by removing any information besides whether the position is eventually won or eventually lost, and playing along those lines4. A six-piece database takes 1.2 TB in the standard form, which would be about 24 GB in the Shredder format. The seven-piece tables come to between 100 TB and 150 TB, the range I’ve seen across a few sources, which would take between two and three terabytes. Not really the sort of thing you’ll find in Chessmaster 2016.
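As a quick sanity check on those figures, using only the numbers quoted above: the Shredder-style table works out to roughly two percent of the standard format, and applying that ratio to the larger tables reproduces the six- and seven-piece estimates to within rounding.

```java
// Back-of-the-envelope arithmetic for the sizes quoted in this footnote.
// The quoted figures (24 GB, "two to three terabytes") are rounded versions
// of what this ratio produces.
class TablebaseSizeCheck {
    public static void main(String[] args) {
        double ratio = 150.0 / (7.0 * 1024.0);   // 150 MB out of 7 GB
        System.out.printf("Shredder format is about %.1f%% of the standard format%n", ratio * 100);
        System.out.printf("six-piece:   1.2 TB -> about %.0f GB%n", 1.2 * 1024 * ratio);
        System.out.printf("seven-piece: 100 TB -> about %.1f TB%n", 100 * ratio);
        System.out.printf("seven-piece: 150 TB -> about %.1f TB%n", 150 * ratio);
    }
}
```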
4. It seems to me that without a native ordering, this method may take more moves to reach a winning position, but I suppose you could impose an ordering by storing a ‘moves until won or lost’ count alongside the won-or-lost evaluation, without taking up much more space.
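The idea in this footnote, sketched in Java: with only won/lost stored, every winning move looks the same and play can wander, but a small ‘moves until won’ field lets the engine always choose the fastest win. The TableEntry shape here is made up for illustration.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// With only a won/lost flag, any winning move is as good as another; a small
// "moves until won" count imposes the missing ordering. TableEntry is a
// hypothetical record, not a real tablebase format.
class DistanceOrderedTable {
    record TableEntry(String move, boolean won, int movesUntilWon) {}

    static Optional<String> pickMove(List<TableEntry> entries) {
        return entries.stream()
                .filter(TableEntry::won)
                .min(Comparator.comparingInt(TableEntry::movesUntilWon)) // fastest win first
                .map(TableEntry::move);
    }
}
```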
5. I’m aware of only one place with any real detail, Tim Millar’s site.
6. Which isn’t a value judgement, mind. Go knowledge is nonformal, and in many ways seems largely aesthetic, but does eventually lead to wins, which makes it valuable, if hard to use from a game theory perspective.
7. Calculating the chess endgames with 7 pieces on an 8×8 board took a significant amount of time on a supercomputer in Moscow in 2012. On a 9×9 or 11×11 board, a tafl endgame with 7 taflmen is trivial anyway—games are usually decided well before the amount of material on the board gets down to amounts where enumeration of endgames is feasible.
8. I’m reminded of my introduction of tafl to my residence hall my senior year of college. The initial feeling was that the besieging side had a massive advantage, but as we developed as players, we realized the advantage was much more on the king’s side.
9. Linnaeus, on his expedition to Lapland, observed the game being played and recorded the rules, thus saving tafl games for the ages.