How should AI cheat?

Last week I was writing a cardboard AI for a game and reached the conclusion that, in order to make the game fun and challenging, it would have to be allowed to cheat – in this case, by starting with more resources than the human player(s). An algorithm that could equal human play would be too complicated to execute and would bog down the game, so a simpler one was required to keep the game fun – and it had to cheat to keep the game interesting.


The word cheating has very negative connotations – which are well earned. I use the word deliberately because it helps me to bear in mind how players might feel about playing against an AI that cheats. It’d be easy to generate feelings like “I played well, but only lost because of the cheating AI.” But on the other hand if the game has enough depth to be interesting then you probably can’t write a simple set of instructions that play as well as a human player can. So what’s the solution?

The goal is to make the game close and interesting without upsetting the player. Let’s look at those components separately, in terms of how a cheating AI undermines each and what it brings to the table.

A close game in which victory and defeat both feel possible is desirable. It makes the player’s actions feel meaningful. The closer the game, the more each decision feels like it could be the difference between victory and defeat. A cheating algorithm undermines this when it allows the AI to stomp the player despite roughly equal levels of play. It contributes positively where it brings the AI up to the player’s level. Generally this is the easiest bar for a prospective cheating cardboard AI to pass, as it’s the reason that it is being implemented in the first place. However, not all players are equal, so there’s some motivation to try to scale the shenanigans to the difference between the AI and this player, rather than the AI and players in general.


This isn’t really something that should be necessary, as runaway-leader effects exist with human players too, so a good game should already have some approach to dealing with them. In principle it’s just a matter of ensuring that, whatever the cardboard AI’s advantage is, it doesn’t suppress the game’s catch-up mechanic – or any other mechanic aimed at keeping the gameplay experience smooth.

Again, “interesting” is something that should be baked into the game itself, but meaningful decisions often exist because of a very fine balance that’s easily upset. Here it matters what sort of cheating the AI is doing and how that interacts with the game’s core mechanics. The trick is to ensure that the bonus the AI gets doesn’t undermine a core strategy. For instance, if your game encourages “Block resource-generating squares” as a potential strategy, then beefing up the AI with “Gain a resource every turn” undermines it. The player now has one fewer strategy they could pursue and consequently fewer meaningful decisions to take.

A good cheating AI will have an advantage that makes opposing it harder, but that doesn’t circumvent any particular approach. In general that will express itself as a bonus to something that a player in the AI’s position could already do, rather than the capacity to do something that it normally could not. This can be harder than it seems, as something as simple as tweaking a number can create an undesirable effect – for instance, “The AI gets 3 extra gold to start the game” sounds safe, but it might move “Dig pit on turn one” from impossible to possible, which in turn makes some approaches to the game inaccessible to the player.
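That kind of feasibility shift can be checked mechanically during design. Here’s a minimal sketch of the idea – the action names, costs and starting gold are all invented for illustration, not taken from any real game:

```python
# Sketch: does a flat resource bonus unlock turn-one actions the baseline
# AI could not afford? All names and numbers here are hypothetical.
BASE_GOLD = 5          # assumed starting gold
ACTION_COSTS = {       # hypothetical turn-one actions and their costs
    "build hut": 3,
    "buy scout": 5,
    "dig pit": 7,
}

def newly_affordable(bonus: int) -> list[str]:
    """Actions affordable with the bonus but not without it."""
    return [name for name, cost in ACTION_COSTS.items()
            if BASE_GOLD < cost <= BASE_GOLD + bonus]

# The "3 extra gold" tweak that sounded safe quietly enables "dig pit":
print(newly_affordable(3))  # -> ['dig pit']
print(newly_affordable(1))  # -> []
```

Running this over a game’s real action costs for each candidate bonus would flag exactly which previously impossible plays a start-state handicap opens up.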


Avoiding upsetting the player is probably something that could use a little more definition. A game cannot possibly hope to avoid anything that might upset any of the seven billion people out there. This point is more about avoiding having the AI’s advantages produce a negative emotional reaction in the player.

All of the usual caveats apply here. Some game effects are simply boring and wind up being punishing in an out-of-game sense. The big bad here is missing turns (and bear in mind that in a two-player game, one player getting an extra turn is functionally identical to the other player missing a turn). Others exist, but the rule of thumb is: if you wouldn’t be happy for a human player to do it to another human, it’ll be twice as bad when your AI is allowed to break the rules in order to do it.

Moving beyond that, people like to feel a sense of control over what happens to them. The feeling of losing because a random factor came up badly at the wrong moment is particularly sharp – it is preferable for the AI’s advantage to be predictable. There’s a difference between knowing the AI gets +1 to its combat checks and knowing it will win 1 in 5 fights regardless of the odds, for no reason – or even knowing that it’ll win every 5th fight for no reason. A predictable advantage is a challenge to the player, giving them something to strategise around and try to defeat. An unpredictable one is much more jarring.
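The contrast between those two advantages can be illustrated with a quick simulation. This is a sketch, not any real game’s combat system: assume contested d6 rolls, then bolt on either a flat +1 modifier or a 20% “wins this fight regardless” chance:

```python
import random

def fight(ai_mod: int, rng: random.Random) -> bool:
    """One combat check: the AI wins if its d6 plus modifier beats the player's d6."""
    return rng.randint(1, 6) + ai_mod > rng.randint(1, 6)

def ai_win_rate(n: int, ai_mod: int = 0, auto_win: float = 0.0,
                seed: int = 1) -> float:
    """Fraction of n fights the AI wins, given a flat modifier (visible and
    predictable) and/or a chance to win outright regardless of the dice
    (invisible and unpredictable)."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n)
               if rng.random() < auto_win or fight(ai_mod, rng))
    return wins / n

# Both handicaps push the AI's win rate up by a broadly similar amount...
print(ai_win_rate(100_000, ai_mod=1))      # roughly 0.58
print(ai_win_rate(100_000, auto_win=0.2))  # roughly 0.53
# ...but only the flat modifier lets the player compute the odds before
# committing; the auto-win can flip a fight the dice had already decided.
```

In aggregate the two look alike, which is exactly why the emotional difference matters: the handicaps are statistically interchangeable but feel completely different at the table.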

So what does it look like when we put all of this together?


Well, the cop-out answer is that it depends on the game. Looking for an open, predictable advantage that doesn’t undermine any catch-up mechanic and allows all existing strategies to remain viable will generate different results depending on the game. However, there are some commonalities in where to start looking.

Overwhelmingly, the start-game state is a great place to make changes. Simply having an AI with more resources is open, something the player can predict, and something that retains most of the available strategies in most games.

The end state also offers opportunities. If the AI simply requires fewer points to win (or is granted some bonus points in a point salad), the tenets of good play do not change throughout the game and the player’s choices are still just as valid as they’d be against another human; they’re simply required to do better. However, this can be a little weaker at avoiding negative emotional responses, as outscoring your opponent every turn, only to be told at the end that you’ve lost, isn’t a great experience.

There’s also a lot to be gained from allowing a player to select their own difficulty level. That allows the advantage to scale to the skill of the player (assuming that they have any level of self-insight, which may be a dangerous assumption). It also means that a significant advantage feels less like “The AI got a huge advantage for no reason” and more like “Well, I asked for this.”
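A self-selected difficulty can tie the start-state and end-state handicaps together in one open table. The sketch below assumes invented difficulty names, resource numbers and point thresholds – a real game would tune these to its own economy:

```python
# Sketch: derive the AI's open, predictable handicap from a player-chosen
# difficulty. All names and numbers are hypothetical placeholders.
DIFFICULTY = {
    # difficulty: (extra starting resources for the AI, points the AI needs)
    "easy":   (0, 12),
    "normal": (2, 10),
    "hard":   (4, 9),
}
PLAYER_TARGET = 10  # the human always needs 10 points, for comparison

def ai_setup(base_resources: int, difficulty: str) -> dict:
    """Start-state and end-state handicap for the chosen difficulty."""
    extra, target = DIFFICULTY[difficulty]
    return {"resources": base_resources + extra, "target": target}

print(ai_setup(5, "hard"))  # -> {'resources': 9, 'target': 9}
print(ai_setup(5, "easy"))  # -> {'resources': 5, 'target': 12}
```

Printing the whole table on the difficulty card keeps the advantage visible, which is what turns “the AI got a huge advantage for no reason” into “I asked for this.”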


While I’ve been talking about grand AI – an algorithm intended to replace a player wholesale – this sort of thinking has applications on a smaller scale too. The humans in 404: Law Not Found and the monsters in Wizard’s Academy both draw on it. The limitations on their actions are generally very similar to the limitations that apply to the players in the same situation, adjusted slightly for the capabilities of different sorts of creature. That helps to make the opposition feel “fairer”, even though they’re not strictly entities in direct competition.

How this thinking applies will vary between games and subsystems, but I find “How does this cheat, and how does that interact with closeness, interest and emotion?” a useful perspective to consider.
