
Today's post is another exposition of research, this time on a game that my colleague Joseph Alexander Brown created for teaching artificial intelligence. The game is about the foraging behavior of moose, very much simplified, and is a good example for demonstrating how to make a strategically interesting game. The basic idea is simple. We have two moose that can choose between three different fields where they might forage. The fields have plants in them that grow back after being eaten, fast at first and then slower as the plants get back to full size. Each morning, the moose each choose a field. If they choose different fields, then each gets a score equal to the forage in its field. If they choose the same field, they trumpet and threaten and tear up the field a bit, but get no forage. Sounds simple, but there are some subtleties.
If there were only two fields to choose from, there would be an obvious optimal strategy: pick a field and stick with it. The third field, however, sets up a strategic dilemma. The field not chosen by either moose spends the day adding forage, becoming the best field. One of the simplifications of the model of nature used in this game is that the plants grow back implausibly fast, but accurately modelling moose and plants is not the goal. The moose are a story that lets students empathize with what might otherwise be a fairly dry mathematical model to be optimized.
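To make the rules concrete, here is a minimal sketch of one day of the game in Python. The growth function (recovery is fastest right after grazing and slows as the field refills), the carrying capacity of 100, the growth rate, and the penalty for a clash are illustrative assumptions on my part, not the plant model used in the papers.

```python
# A minimal sketch of one day of the moose game. The carrying capacity,
# growth rate, and clash penalty are illustrative assumptions, not the
# parameters from the papers.

MAX_FORAGE = 100.0   # assumed full size of a field
GROWTH_RATE = 0.3    # assumed regrowth rate

def regrow(forage):
    """Plants grow back quickly right after grazing and more slowly near full size."""
    return forage + GROWTH_RATE * (MAX_FORAGE - forage)

def play_day(fields, choice_a, choice_b):
    """Score one morning. `fields` maps field number (1-3) to its current
    forage; each choice is a field number. Returns both scores and the
    regrown fields for the next morning."""
    fields = dict(fields)            # work on a copy of the fields
    score_a = score_b = 0.0
    if choice_a != choice_b:
        score_a, fields[choice_a] = fields[choice_a], 0.0
        score_b, fields[choice_b] = fields[choice_b], 0.0
    else:
        fields[choice_a] *= 0.5      # a clash tears up the field, no forage
    # every field grows during the day, so the untouched field pulls ahead
    return score_a, score_b, {k: regrow(v) for k, v in fields.items()}
```

With these made-up numbers, a field grazed to zero gains 30 points of forage the first day, 21 the next, and so on, which is the "fast at first, then slower" growth in the story.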
The moose game contest
My colleague, Dr. Brown, has run a contest in his AI class twice now. The students submit moose, in the form of programs that choose which field to go to based on knowledge of which fields they and the other moose went to earlier. Occupy Math has used this sort of contest in the past, based on the game Prisoner's Dilemma, and it was a very popular activity. There was careful planning, treachery, and skullduggery.
Skullduggery? One year a student got a group of other students together to submit a strategy that could recognize and work only with itself. The idea was to force a tie for first place, and Occupy Math awarded points so that a first-place tie would give the largest number of points. The student who organized this conspiracy submitted a strategy that roasted the one he got the others to submit: he took first by a large margin and the conspiracy tied for second. Good times in the classroom.
Dr. Brown did a video presentation on this work as well, if you would like to take a look.
What does an AI for playing the moose game look like?
This research explores how to create small artificial intelligences that set the foraging strategy for a moose. Each AI is the “brains” of a moose that, knowing which field it foraged in and which field the other moose foraged in, decides which field to forage in on the next turn. There are actually many different ways we could code up these small AIs. This time around we used the structure shown below, called a finite state machine.

Looks complicated, doesn't it? The one arrow from nowhere, in the upper left, says the moose will go to field 2 on its first move and be in state A. There are three arrows coming out of state A, labelled 1/2, 2/3, and 3/2. They mean "if the other moose went to field 1, go to field 2", "if the other moose went to field 2, go to field 3", and "if the other moose went to field 3, go to field 2". Each arrow also points to the state, one of A, B, C, or D, that the moose AI will be in the next time it needs to play. A finite state machine can specify a pretty sophisticated strategy for playing the moose game. This machine has four states, but you can have as many states as you want.
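Here is one way to write a machine like that down as a lookup table in Python. Only state A's field choices follow the 1/2, 2/3, 3/2 labels described above; the destination states, and everything in states B, C, and D, are placeholders, since the full wiring of the pictured machine is not spelled out here.

```python
# A four-state machine as a lookup table. Each state maps the other moose's
# last field to a pair: (field to forage in next, state to move to next).
# State A's field choices follow the 1/2, 2/3, 3/2 labels; the rest of the
# wiring is placeholder, made up for illustration.

INITIAL = (2, "A")   # the arrow from nowhere: go to field 2, start in state A

TRANSITIONS = {
    "A": {1: (2, "B"), 2: (3, "A"), 3: (2, "C")},
    "B": {1: (1, "A"), 2: (2, "D"), 3: (3, "B")},
    "C": {1: (3, "C"), 2: (1, "A"), 3: (2, "D")},
    "D": {1: (2, "B"), 2: (3, "C"), 3: (1, "A")},
}

def next_move(state, other_field):
    """Given the current state and where the other moose went last time,
    return (my next field, my next state)."""
    return TRANSITIONS[state][other_field]
```

Storing a machine as a plain table like this makes the evolutionary operators in the next section easy to write: crossover can swap whole states and mutation can rewrite a single entry.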
Finite state machines look like an insane wiring diagram, but they are really just a list of how you respond to the opponent's move in each state, plus that one arrow that says what you do first. As such, they are simple structures to program with digital evolution. What we do is have a population of moose (finite state machines) all play the game with one another. The quality of a moose brain is the average amount of forage that the moose gathers against all the other moose. The two-thirds of the moose with the most forage are permitted to have children. The states of their finite state controllers are mixed and matched (simulating sex), and a transition arrow or two have their labels, or maybe their destinations, changed (simulating mutation in biology). This simulation of biological evolution produces highly effective moose game strategies fairly quickly. If you would like e-copies of the two papers on the moose game, send a request to dashlock@uoguelph.ca.
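Here is a sketch of the variation and selection steps of that loop, assuming machines are stored as lookup tables in the style of the previous sketch. The selection rule (the best two-thirds breed) and the two operators (mix and match whole states, change one arrow's label or destination) follow the description above; the way fitness scores are supplied and the specific rates are my own assumptions, not the setup from the papers.

```python
import random

FIELDS = (1, 2, 3)

def crossover(mom, dad):
    """Mix and match whole states from the two parent machines."""
    return {state: dict(random.choice((mom[state], dad[state]))) for state in mom}

def mutate(machine):
    """Change the label (field) or the destination (state) of one transition arrow."""
    child = {state: dict(rules) for state, rules in machine.items()}
    state = random.choice(list(child))
    trigger = random.choice(FIELDS)        # which incoming arrow to rewrite
    field, nxt = child[state][trigger]
    if random.random() < 0.5:
        field = random.choice(FIELDS)      # new field to forage in
    else:
        nxt = random.choice(list(child))   # new destination state
    child[state][trigger] = (field, nxt)
    return child

def next_generation(population, scores):
    """Keep the best two-thirds of the moose and fill the rest with children.
    `scores[i]` is moose i's average forage from a round robin against all
    the other moose (computed with the game sketch above)."""
    ranked = sorted(range(len(population)), key=lambda i: scores[i], reverse=True)
    parents = [population[i] for i in ranked[: 2 * len(population) // 3]]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    return parents + children
```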
Why bother evolving strategies for the moose game?
The first paper about the moose game was a report on the contests run in Dr. Brown’s AI class. Occupy Math chipped in by creating the moose AI evolution system and, by running evolution and then looking at the outcomes, demonstrated that the moose game has a large number of different strategies (at least thousands) which do variably well against one another. In other words, evolution demonstrated that the problem was a rich one and so an appropriate choice for contests.
The second paper characterized how the moose evolution system reacts to changes in the biological model of plant growth. We used fields with more available forage (not all the fields, just some of them), which changed the strategic equation. One of Occupy Math's students thought that faster plant growth would reduce conflict (moose going to the same field). He was mostly right. Increasing plant growth a little caused the amount of conflict to vary more between different runs of the moose evolver, while increasing the growth rate a bit more caused conflict to drop sharply. Having one richer field caused an increase in conflict, while having two rich fields and one normal one caused a large drop in conflict.
The second paper demonstrated that the moose game has many variations that favor different moose strategies. This opens the possibility of testing artificial intelligences for the ability to generalize: in this case, challenging them with a variety of different versions of the moose game and seeing if they can learn, just from their foraging results, which version of the game they are in and follow that up by playing well.
Of course there is a lot more we could do. Increase the number of moose or fields. Place the moose on a map with a network of fields, and on and on. The moose game, and its simple generalizations, form a large family of games with different strategies that can all be worked with inside a fairly simple computational framework. This sort of research is actually quite a lot of fun when you get into it. We conclude with a list of the previous articles in this series on what mathematicians get up to.
So remember to get your Covid vaccination!
I hope to see you here again,
Daniel Ashlock,
University of Guelph
Department of Mathematics and Statistics