Today I came across a thread in the board game design subreddit which posed an interesting question: “Is it possible to make a board game AI with a hidden goal?” I posted a brief answer there, but I’d like to dig in a little deeper.
The challenge is to develop a cardboard AI that is capable of goal-directed behaviour, but that a player can execute without knowing what that goal is.
This is meaningful in a game that contains a lot of counterplay or the opportunity to execute denial strategies. For instance, if I’m playing 7 Wonders I would like to construct my wonder using cards that other players need for the strategies they’re pursuing. If an opponent builds a Scriptorium, that could imply they’re going for a lot of science points and I should bury science cards – or it could mean they intend to play a low-resource game relying on the free build progressions, so I shouldn’t waste my time on trade cards.
If the AI’s strategy is known, it’s not hard to have it take actions in pursuit of it. “If the AI has the ‘science’ strategy, it drafts a tech card whenever it has the chance” is an implementable and executable rule. Writing a rule that lets a player decide which move the AI takes, without allowing the player to interrogate the AI’s strategy, sounds impossible.
Let’s talk about ways to do it.
One option is to allow the player to “see” the AI’s strategy decision, but to present it opaquely enough that the player isn’t able to interpret it.
Consider a code wheel, in which the player lines up two icons and then reads something from an underlying set of words through a window on the wheel. In the 7 Wonders example the player might match a player’s (different each turn) current highest-scoring category to the AI’s current highest-scoring category and then read, through the window, an order of priorities for drafting.
A different wheel would represent a different strategy: its underlying text is more likely to tell the AI to build particular things, while still retaining some degree of reactivity to the current game state. However, unless a person took the wheel apart to check it carefully, it’d be tough for a human to glance at it and go “Yup, that’s the tech strategy wheel”.
The player sees the strategy in that they can see which wheel they’re using for the AI this game. However, since the wheels are all decorated identically and on the surface are the same (two rows of icons to match and underlying text), the player doesn’t comprehend what they see – which is as good as keeping it hidden.
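To make the idea concrete, here’s a minimal sketch of two “wheels” as lookup functions. The category names and priority lists are invented for illustration; the point is that both wheels share an identical interface while encoding different strategies underneath.

```python
# A code wheel is essentially a lookup table: two visible inputs (the icons
# the player lines up) map to a hidden draft-priority list. All category
# names and priorities below are hypothetical, not from any real game.

def balanced_wheel(player_top, ai_top):
    # Reactive wheel: contest whichever category currently leads.
    if player_top == ai_top:
        return [player_top, "trade", "military"]
    return [ai_top, player_top, "civic"]

def tech_wheel(player_top, ai_top):
    # Same interface as above, but the underlying text is science-biased.
    if ai_top == "science":
        return ["science", "civic", "trade"]
    return ["science", ai_top, "military"]

# From the outside both wheels look the same: line up two icons, read a list.
for wheel in (balanced_wheel, tech_wheel):
    print(wheel.__name__, wheel("military", "science"))
```

The player operates either function identically, but only by probing many input combinations could they work out which strategy the wheel encodes.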
There are plenty of ways to achieve this other than code wheels, but if I must have the Monkey Island tune stuck in my head so do you.
Another option is to cheat. I don’t mean cheating at the game, or routing around the problem by using an app to have a real AI run the cardboard AI. This will be fairly transparent, benign cheating that the player will notice as soon as they start running the AI, but for some players that won’t be a big deal. AIs are expected to use shenanigans.
The reason to hide an AI’s strategy is so that it can make moves that are ambiguous between a few different strategies, keeping the player guessing and making games in which predicting your opponent and countering their actions is meaningful.
However, if a move supports two strategies (let’s call them A and B), why is it necessary to know which the AI had “in mind” at the time of that move?
Imagine a game where AI cards for a dozen different strategies are laid out at the start of the game. On the AI’s turn one of these is chosen at random, and it determines which move the AI makes. So far so good; here’s the twist: when the AI makes a move, you then flip all of the strategies that were not supported by that move face down.
The consequence of this is that at the end of the game the AI will have one strategy card showing and all of its moves will have been in support of that strategy – however, during the game the AI’s strategy is ambiguous, as several potential strategies (all supported by the AI’s moves so far) remain showing.
Of course the player will always know that the AI didn’t “really” choose that strategy all along – but then how many human players set their strategy in stone on turn one anyway?
A third option might be to use a probabilistic strategy. In this model the AI doesn’t commit to a particular strategy and use it continuously; instead it operates on a combination of strategies, favouring some over others. Obviously this is more suited to some games than others.
Suppose that the AI has a deck of behaviour cards shuffled together, and each turn one is drawn. The composition of the deck decides the AI’s overall strategy, and it would be eminently possible for a player to assemble that deck without having interrogated its entire contents.
It would even be possible for the AI to switch strategy mid-game by discarding half of the deck and replacing it with other cards, should a change in strategy be desirable.
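A sketch of such a behaviour deck, with made-up card names: the deck’s composition is the strategy, and the mid-game switch is just a discard-and-refill.

```python
import random

# Hypothetical behaviour deck. The mix of cards IS the strategy; a player
# can shuffle it together face down without learning its exact composition.
def build_deck(rng, aggression=6, economy=4):
    deck = ["attack"] * aggression + ["build"] * economy
    rng.shuffle(deck)
    return deck

rng = random.Random(1)
deck = build_deck(rng)

# Each turn the top card dictates the AI's move.
for turn in range(3):
    print("turn", turn, "->", deck.pop())

# Mid-game switch: discard half the deck and refill with different cards,
# shifting the AI's behaviour without revealing what it was or now is.
half = len(deck) // 2
deck = deck[:half] + ["defend"] * (len(deck) - half)
rng.shuffle(deck)
```

A player reading single draws sees only noise; the bias toward one behaviour emerges over many turns, which is what keeps the underlying strategy fuzzy.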
There are echoes of this in the Dark Souls boardgame, in which a boss that is wounded increases its difficulty by randomising the order of its moves. This randomisation essentially creates a change in strategy in which a natural language description of its behaviour might change from “Then I pounce on them and perform a double attack to make sure I kill my target” to “Then I pounce on them before spinning around to make sure they weren’t leading me into a trap.” Of course the change isn’t “intelligent” in any way and the boss may well be changing to a worse strategy – but from the point of view of the player it’s almost always harder if only because you don’t know what it is.
There are likely many other AI approaches to this problem, but the main thing is that with a little creativity the seemingly impossible is achievable. Cardboard AI has been getting better and better, but it’s still something that’s not had broad formal development and as such is an area where a designer has the opportunity to try something new. Good gaming and try not to lose too many games to inanimate tree pulp 😉