This week we delve again into the evolution of virtual robots. Last time this topic came up, the robots were symbots with continuous moves. This week we have the amoeba-like morphbots, like the one shown above. These robots are controlled by evolved computer code, very different from the simple neural nets that control symbots. We are going to look at how to correctly set up digital evolution and also at how the skill of the robots progresses the longer they evolve.
How on earth do you evolve computer code? When we use evolution, members of the population can have “sex” with one another. In biology, this means you get genes from your mother and genes from your father. The digital version, used to make morphbot programs, takes some program statements from one parent and some from the other. The program statements in a control program are like the DNA in a living creature. The children are also mutated, which means instructions in the code are changed randomly. Mutations can help or hurt, but since we are using a selection filter to keep the better morphbots, the helpful mutations are the ones that get retained. If we did this in a standard programming language, like Python or Java, the results would be bad. Even if the parent programs were functional, the children would not even run. At all.
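A minimal sketch of the digital crossover and mutation just described, assuming a program is a plain Python list of instruction pairs (all names here are illustrative, not the actual Occupy Math code):

```python
import random

def crossover(mom, dad):
    """One-point crossover: the child takes a prefix of instructions
    from one parent and the rest from the other."""
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def mutate(program, n_mutations, random_instruction):
    """Replace n_mutations randomly chosen instructions with fresh
    random ones; a mutation may help or hurt the program."""
    child = list(program)
    for _ in range(n_mutations):
        child[random.randrange(len(child))] = random_instruction()
    return child
```

Because every instruction slot holds a valid instruction, the child is always a runnable program — which is exactly the property a standard programming language lacks.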
It turns out that what you need is a computer language in which all possible programs run. For the morphbots we use a list of instructions of the form
If(test) then attempt action
The supervisor program supplies the information to make the tests and check if the actions are possible before letting them happen. We also make the program circular — if you execute the last test/action pair then you go back to the first one. This type of evolvable computer language is called a linear genetic program.
“Linear” means the commands come one after the other in a line, and “genetic” means we intend to write the program with evolution. The actions are either one of the twelve morphbot actions (listed below) or a jump to a different part of the program. The jump instruction makes the structure into a real program, because it can change the order in which it executes its instructions, based on information from its inputs. This structure was invented by Mark Joenks and is called an If Skip Action (ISAc) list.
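As a rough sketch, an ISAc-list interpreter fits in a few lines; the sensor and supervisor callbacks here are stand-ins for the real morphbot machinery, and the exact instruction encoding is an assumption for illustration:

```python
def run_isac(program, sensors, try_action, max_steps):
    """Minimal ISAc-list interpreter sketch.
    program: list of (test, action) pairs; an action is either a
    morphbot move name or ('jump', target_index).
    sensors: callback mapping a test name to True/False.
    try_action: supervisor callback that checks whether a move is
    possible before letting it happen.
    Execution is circular: after the last pair we wrap to the first."""
    pc = 0  # program counter
    for _ in range(max_steps):
        test, action = program[pc % len(program)]
        if sensors(test):
            if isinstance(action, tuple) and action[0] == 'jump':
                pc = action[1]      # jump changes the control flow
                continue
            try_action(action)      # supervisor attempts the move
        pc += 1
```

Note that any list of such pairs runs without crashing — the interpreter never hits a syntax error, no matter how the list was crossed over or mutated.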
What do morphbots do?
Take a quick look at the animation at the top of the post. Morphbots start as a 1×1 square. They can try to move up, down, left, or right; grow one square in any of those directions (up to 4); or contract one square in any of those directions. This gives them twelve moves. For each of these moves they have the test “can I make that move?” and also the test “is every square in (direction) white?” To understand the significance of “white”, we need to explain the score that we give morphbots.
A morphbot is supposed to visit every white square. When they visit a white square, they change it from white to gray and their current area is added to their score. This means they want to be as big as possible — they can be as big as 4×4 — but they need to squeeze through openings to get to white squares they have not yet visited.
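The scoring rule can be sketched like this (the grid representation and function name are assumptions for illustration):

```python
def visit(board, row, col, area):
    """Score one square for a morphbot sketch: visiting a white
    square turns it gray and adds the robot's current area to the
    score.  board is a grid of 'white'/'gray' cells; area is the
    robot's current size in squares (1 up to 16 for a 4x4 robot)."""
    if board[row][col] == 'white':
        board[row][col] = 'gray'
        return area   # bigger robots earn more per new square
    return 0          # already-visited squares score nothing
```

This is where the conflict comes from: the payoff per new square scales with the robot's area, but a big robot cannot fit through narrow openings.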
This conflict between getting better scores if you are big and needing to be small to navigate makes the problem of finding the best sequence of moves for a morphbot pretty hard. For the board shown in the animation above, the morphbot shown is allowed 120 moves and gets a score of 714, which is close to the best possible.
What factors control evolution?
It turns out that the number of individuals in the evolving population is important, the number of mutations you apply to each new morphbot is pretty important, the length of the programs is important, and the amount of time you let the morphbots evolve is critical. Don’t let them evolve long enough and they don’t really solve the problem. Let them evolve too long and you’re wasting computer power. Once you know what “long enough” is, you use any time you have to run more experiments — evolution is a filtered random process and so running it again will often give different (sometimes better) results.
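Putting those factors together, a self-contained sketch of the evolutionary loop might look like the following. The keep-the-top-half selection filter, the one-point crossover, and all the names are illustrative choices, not necessarily the exact algorithm behind the experiments in this post:

```python
import random

def evolve(random_program, random_instruction, fitness,
           pop_size=60, generations=1000, n_mutations=3):
    """Generational evolution sketch.  The factors from the text show
    up as parameters: population size, mutations per child, program
    length (fixed by random_program), and number of generations."""
    pop = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        survivors = scored[:pop_size // 2]     # selection filter
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, len(mom))
            child = mom[:cut] + dad[cut:]      # crossover
            for _ in range(n_mutations):       # mutation
                child[random.randrange(len(child))] = random_instruction()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

Because the best survivors are carried over unchanged, the best score never goes down from one generation to the next; running the whole loop again with a different random seed is what gives the "sometimes better" results from independent runs.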
The images below show what happens when you allow different amounts of time for evolution. The animations shown are the fitness tests of the best morphbots, from thirty independent runs of the evolutionary training algorithm. From left to right evolution is run for 10 generations, 100 generations, 1000 generations, and 10,000 generations. As you can see, even though we are evolving computer programs — something that sounded crazy at first — the robots are improving as they evolve.
The last example is the one from the top of the post — it looks like 10,000 generations (about three minutes on Occupy Math’s research computer) is enough time, at least for this board. The program had 90 instructions in it, so even with the simple if(test) then action format, it’s potentially a pretty complicated program.
How much does the length of the program matter?
Occupy Math tested 15, 30, 45, 60, 75, and 90 instructions; more instructions were better, except that 75 and 90 were pretty similar. The animation below is a 15-instruction program, the best from a population evolved for 10,000 generations. This one gets a score of 213 — pretty sad — but it did expand to full size in the center of the board, which is a good move. This morphbot has a weird, run-in-a-circle sort of motion. It keeps trying moves that would fill a large open area, over and over. Not enough brains available!
Working with other boards
One of the nice things about the morphbots is that each different board is a different problem, so if we are using the morphbots as a way of testing how we should evolve programs, we can experiment with problems of different sizes and, based on where we put the barriers, different levels of difficulty. Here is a slightly easier problem than the one we used above, and an animation of a good solution.
Notice that the solution above gets close to the best score — it is smaller than it needs to be only a couple of times and is good at getting maximum fitness on the 4×4 open regions. It is not a “rational” solution, though; it wastes some time and is clearly reusing moves that worked in one place but really don’t work in another.
Next we give the morphbots a board with more open space, but a more complicated topology for planning their moves. The left two animations are controllers that tied for the best score of 810, while the one on the right was the worst, scoring 401. These three morphbot control programs were the best found in three of the thirty independent evolutionary runs performed, each lasting 10,000 generations.
The animations in this post are a simple, visual technique for reporting what behavior evolved in each of the morphbot training runs. It is interesting that people, seeing these animations, assign motives to the virtual robots. Occupy Math’s editor called the first two robots, above, greedy and the third one Neville Longbottom.
Similarly, decades ago, virtual robots in a virtual reality simulation, controlled by just five numerical parameters, nevertheless got nicknames from most people who viewed the simulation. “See-ya bots” ran off suddenly. The “Taz bot” spun like the Tasmanian devil in the cartoons. “Scaredy bot” (evolved in a world with an electric fence at the edge) was very slow and cautious.
If you see intent or feel emotion when looking at the animations, feel free to tell Occupy Math about it at firstname.lastname@example.org. The only uses to which such comments will be put are (i) anecdotes in a future Occupy Math and (ii) designing future experiments with the morphbots. The most extreme reaction Occupy Math has gotten is “you can’t shut down the computer! They are alive!” This was from a freshman who had arrived a bit early for office hours.
The “genetic” in genetic programming is a bit inappropriate.
A problem in genetic programming, and its parent field evolutionary computation, is that a lot of terms were borrowed from biology in a pretty haphazard fashion. Genetic programming, for example, means programming via techniques derived from evolution, not reprogramming the genes of some living creature. Occupy Math and his wife have developed a very useful technique called single parent genetic programming. The initial idea arose during dinner at a pub and some of the other diners cast very concerned looks at us while we were working out the idea. In general, a biologist who blunders into a conversation among evolutionary computation researchers ends up in a state of cognitive dissonance.
Since the idea of evolving code was first thought up by John Koza and John Rice at Stanford in the early 1990s, the research community has come up with a bunch of different ways to evolve code. The key step is creating a language in which all programs run — which itself can be done in a lot of ways. Of course, a lot of random programs run but still do nothing useful — so a great deal of care is needed in designing the way the evolutionary algorithm works. Since even blundering around does okay on the morphbot problem, it is an easy one.
Occupy Math uses digital evolution for a whole lot of things, from making maps for fantasy games to making art. Other people use it for topics as diverse as designing analog electronics, figuring out the layout of rooms in a building, or figuring out where the transistors go on a microchip. Do you have any questions about how to set up digital evolution or use it? Comment or tweet!
I hope to see you here again,
University of Guelph,
Department of Mathematics and Statistics