# Math Makes Games


Occupy Math is going to look at how math helps us design and play games.
Occupy Math has been writing articles for a new journal, Game and Puzzle Design, which seeks to bridge the gap between professional game designers and academic game researchers. You can order issues here. This journal fills a real need in a unique niche. Professional game designers know how to make a game fun and interesting; academic game researchers can apply math and algorithms to the problems the professionals run into during the design process. The academics also study the general space of all games and try to figure out how to classify them. A key point for Occupy Math is this.

Making better games helps us discover new math. Games can build interest in math.

The first games Occupy Math did research on were those of game theory, which are not actually games in the usual sense. Rather, they are systems for understanding cooperation and conflict, applied mostly in economics and politics. At the same time, Occupy Math was an active developer of role-playing game rules as a hobby. The primary tool Occupy Math used to do something new in game theory was evolution, and evolution can do far more than train agents to play a game like Prisoner's Dilemma or simulate political strategy. In fact, evolution can optimize almost anything. One new thing Occupy Math did was get evolution to create a whole variety of dungeon maps for fantasy games. It also turns out that math lets you play traditional games incredibly well.

One branch of academic research on games is the creation of computer opponents, or game-playing algorithms. We already know that algorithms are better at chess than any human, and recently the more difficult game of Go got an algorithm as its best player. You may ask in what sense Go is harder; there is a simple definition, though it may not be entirely meaningful for humans. If we look at the space of all possible moves a computer must search to play the game, the space for Go is about 100,000,000,000,000,000,000 times larger than the corresponding space for chess. Since humans probably don't consider the whole space, Go may not be ten-to-the-twentieth-power times harder than chess for humans. This brings up an interesting question.

People don't play games the way chess or Go programs do, so how do they play?

A key fact is this. If a chess grand master played the best chess algorithm in the world at Texas hold'em, the chess master would do far better: the chess algorithm does not know, and cannot even learn, the rules of poker. On the other hand, there are world-class poker algorithms. Occupy Math, like many people, knows how to play dozens of games and can learn new ones quickly; he and his collaborators even make up new games (there is an example below). This suggests a course of action that is already well under way.

The general game playing community is studying how humans play games and developing artificial intelligence that can learn new games by example. The DeepMind technology, which was also behind the champion Go algorithm, has learned to play multiple Atari video games given only access to the controls and a view of the screen. There is also an effort to create general video game AIs, with a yearly contest where the AIs compete to play games they have never seen before. All of these efforts attempt to mimic the human ability to learn new games with general-purpose intelligence.

There is still a big gap: algorithms designed for one game become world champions, while general-purpose algorithms have real trouble matching human performance.

This makes a nice place to mention one of the funniest things about AI research: the researchers themselves keep declaring failure. What happens is this. Without any real understanding of how intelligence works, the researchers declare a task, like playing chess really well, to be something that absolutely requires intelligence. Then, after building algorithms that render the best human players toast, they declare failure because they understand the algorithm and, in their judgment, it is not using intelligence. The Computational Intelligence Society (Occupy Math is a member) avoids this mental trap by assessing performance without worrying a whole lot about the presence of “true intelligence” in the mixture.

There are two radically different issues here: understanding intelligence and duplicating the capabilities of intelligence. The first is a really hard problem, the second naturally breaks up into thousands of problems, some easy, some hard. Let’s look at some examples of applications of computational intelligence to games. One of Occupy Math’s articles from Game and Puzzle Design uses computational intelligence to design a new variation on Risk. The research question is this. Suppose that two giant monoliths are unearthed; they create a teleportation gate that connects two countries on the Risk game board that were not connected before. Which two countries would have the biggest impact on strategies if the monoliths were found in them? Depending on the exact methods (the paper explores several), the answers are Peru and New Guinea or Brazil and Eastern Australia. For experienced Risk players this isn’t a huge stretch — but it does show you some of the less earth-shaking things you can do with computational intelligence. This article also uses Occupy Math’s beloved tool, graph theory.
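The paper explores several methods for measuring a gate's impact. As a toy illustration of the graph-theory flavor of the question, here is a hedged sketch: a made-up six-territory ring map (not the real Risk board), where each candidate gate is ranked by how much it shrinks the average shortest-path distance between territories. The map and the impact measure are simplifying assumptions, not the paper's actual model.

```python
from itertools import combinations
from collections import deque

def avg_distance(adj):
    """Average shortest-path length over all ordered pairs, via BFS from each node."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

# Toy map: six territories in a ring (NOT the real Risk board).
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
base = avg_distance(adj)

# Try every possible gate and rank gates by how much they shrink average distance.
gates = []
for a, b in combinations(range(6), 2):
    if b in adj[a]:
        continue  # already connected, no gate needed
    adj[a].add(b); adj[b].add(a)
    gates.append((base - avg_distance(adj), a, b))
    adj[a].remove(b); adj[b].remove(a)
gates.sort(reverse=True)
print(gates[0])  # the gate with the biggest impact under this simple measure
```

On the real board, the same loop over all non-adjacent country pairs with a strategy-aware impact measure is what turns the monolith question into a computation.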

What about designing game boards with computational intelligence?

Occupy Math's former student Andrew McEachern invented the game CliqueR, and he and Occupy Math found they needed computational intelligence to help design boards for it. CliqueR is not too difficult to learn. You start with a set of dots on a piece of paper. Players take turns connecting dots with straight line segments, but a new line cannot cross a line that is already drawn. Each line you draw is worth one point. Each triangle you complete is worth three points. If you make all the connections between four dots, you get five points. If one line completes multiple objects, you get the points for all of them. Not too hard, eh? Here's where things get interesting:

If four points are arranged as they are on the left (tetrahedral), you can make all the connections. If four points are arranged as they are on the right (quadrilateral), you cannot: two of the lines would have to cross. This means there is an interesting design problem. Let's examine that problem in the context of a seven-point CliqueR board. What happens if we place seven points on a circle?

With the points forming a convex polygon like this, the high-scoring configuration that completely joins four points never happens. There are 35 different ways to choose a set of four points out of the seven, and all of them are examples of the quadrilateral configuration. One triangle is shown so you can see it has no point inside it to form a tetrahedral configuration. This board for CliqueR is easy and low scoring, with a fairly simple strategy. What if, instead, we created as many high-scoring configurations as possible? The next board starts with the triangle 1-2-3 and then adds points in alphabetical order, putting each new point inside as many triangles as it can. The lines are not CliqueR moves; they are there to let you see the triangles that are driving the design.
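The claim about the circle board can be checked directly. Here is a small sketch (the function name `inside` is made up for illustration): it places seven dots on a circle, enumerates all four-dot subsets, and tests whether any dot lies inside the triangle formed by the other three using the standard same-sign orientation test.

```python
from itertools import combinations
from math import cos, sin, pi

def inside(p, a, b, c):
    """True when point p is strictly inside triangle abc (same-sign orientation test)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 > 0) == (s2 > 0) == (s3 > 0)

# Seven dots evenly spaced on a circle: a convex polygon.
dots = [(cos(2 * pi * k / 7), sin(2 * pi * k / 7)) for k in range(7)]

quads = list(combinations(dots, 4))
tetrahedral = sum(
    any(inside(p, *[q for q in quad if q != p]) for p in quad)
    for quad in quads
)
print(len(quads), tetrahedral)  # 35 four-dot subsets, none of them tetrahedral
```

Every one of the 35 subsets comes out quadrilateral, confirming that the circle board has no five-point scoring opportunities at all.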

If you erase the lines and just save the dots, this board has a lot of scoring potential and quite a bit more strategy than the convex polygon board. That board — without the guide lines — looks like this:

The configurations for a CliqueR board range from no tetrahedral configurations to many, and the strategy for a board gets deeper as the number of these high-scoring tetrahedral configurations grows. What Occupy Math did was create an algorithm to evolve boards that have a specified fraction of the possible tetrahedral configurations. This means that computational intelligence not only designs boards, it designs them to have a specifiable level of difficulty. You can use computational intelligence to balance games, to control game difficulty, and to try to make the game more interesting, something we will look at in a future post.
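The idea of evolving a board toward a target fraction can be sketched in a few lines. This is a minimal (1+1) evolutionary algorithm, not Occupy Math's actual published algorithm: it jiggles one random dot at a time and keeps the child board whenever it is no further from the target fraction of tetrahedral configurations. All the names and parameters here are illustrative assumptions.

```python
import random
from itertools import combinations

def cross(o, u, v):
    return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

def tetra_fraction(dots):
    """Fraction of four-dot subsets with one dot inside the triangle of the others."""
    quads = list(combinations(range(len(dots)), 4))
    hits = 0
    for quad in quads:
        for p in quad:
            a, b, c = [dots[q] for q in quad if q != p]
            s = [cross(a, b, dots[p]), cross(b, c, dots[p]), cross(c, a, dots[p])]
            if (s[0] > 0) == (s[1] > 0) == (s[2] > 0):  # all same sign: p is inside
                hits += 1
                break
    return hits / len(quads)

def evolve_board(n_dots=7, target=0.5, steps=3000, seed=1):
    """(1+1) evolution: re-place one random dot, keep the child if it is no worse."""
    rng = random.Random(seed)
    board = [(rng.random(), rng.random()) for _ in range(n_dots)]
    err = abs(tetra_fraction(board) - target)
    for _ in range(steps):
        child = list(board)
        child[rng.randrange(n_dots)] = (rng.random(), rng.random())
        child_err = abs(tetra_fraction(child) - target)
        if child_err <= err:
            board, err = child, child_err
    return board, err

board, err = evolve_board()
print(err)  # distance from the target fraction after evolution
```

Changing `target` dials the board anywhere from the easy convex layout toward the dense, nested layout, which is exactly the "specifiable level of difficulty" idea.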

Below is a picture of a maze. Occupy Math created it when someone criticized one of his computational intelligence techniques by saying “I bet it doesn't scale.” In research, them's fightin' words. In plain English, the claim was that if you made the maze you were trying to design bigger, the computer would take too long designing it. In fact, Occupy Math can quickly design an appallingly large maze, probably a good deal too large to be fun. The next step is not a larger maze; it is making the maze more fun by putting things in it, another task that computational intelligence can do.

This week's post comes back to one of the points Occupy Math keeps trying to make: math applies to all sorts of interesting problems, most of which never show up in high school or even in required university math classes. The current educational establishment takes an interesting and empowering discipline and buries all the good stuff. This post shows you some of the other ways math, and its ally computer science, let us create and play games. Would you like to see a book of evolved puzzles and mazes (not like the one above)? Do you have a game idea that needs computational intelligence to finish? Comment or tweet!

I hope to see you here again,
Daniel Ashlock,
University of Guelph,
Department of Mathematics and Statistics