Last week’s blog looked at the presentations that Occupy Math made at the World Congress on Evolutionary Computation – brief summaries that tried to give a sense of what the papers were about. This week Occupy Math takes an in-depth look at the techniques inspired by Darwin’s theory of evolution that are the core of his research. The key point? Evolution can solve math problems in ways humans cannot. Sometimes this means evolution is just a faster technique, but other times evolution pulls off stunts that human effort cannot match. Some people might find this concerning, but
Occupy Math studies evolution as a tool for amplifying human capabilities.
This week we look at cellular automata. These are simple mathematical rule-sets that say how to update a collection of cells. The image below uses the following two rules:
- If exactly one of the three cells above-left, above, and above-right of a cell is blue, the cell becomes blue.
- Under any other pattern, the cell becomes white.
If we start with exactly one centered blue cell and update the cells seven times, the automaton does this:
This picture is a time-history of the automaton. The initial state is the beginning of time and each line of the automaton is the next time-step. If there were enough space, this automaton would go on forever. Occupy Math has worked on finding automata that grow as large as possible, but then turn themselves off. These are called apoptotic cellular automata because they mimic the ability of biological cells to know when they have finished building a structure. The following automaton, turned sideways to fit, starts [blue][green][blue] and generates a pretty complicated pattern. Time for this automaton starts at the left and runs right.
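For readers who want to experiment, here is a minimal Python sketch of the two-rule automaton from the first picture. The row width, the number of updates, and the use of 1 for blue and 0 for white are illustrative choices, not details of Occupy Math’s actual software.

```python
# A minimal sketch of the two-rule automaton from the first picture.
# 1 = blue, 0 = white; the row width and update count are illustrative choices.

WIDTH = 17          # enough room for seven updates from one centered cell
STEPS = 7

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # exactly one centered blue cell

history = [row]
for _ in range(STEPS):
    padded = [0] + row + [0]             # cells off the edge count as white
    # a cell becomes blue only if exactly one of the three cells
    # above-left, above, and above-right is blue
    row = [1 if padded[i] + padded[i + 1] + padded[i + 2] == 1 else 0
           for i in range(WIDTH)]
    history.append(row)

for r in history:                        # print the time history, oldest row first
    print("".join("#" if c else "." for c in r))
```

Running it prints the growing triangle of blue cells as # characters, one row of the time-history per line.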
One reason cellular automata are interesting is that they can be configured to do any computation at all – a fact that was proved by creating a general-purpose computer using an automaton rule called the Game of Life. While this was a huge theoretical breakthrough, it didn’t have direct applications to programming. Why? Even if we build hardware that directly runs Life, or some other cellular automaton rule, it is very hard to write even a simple program as an initial set of cell states. The initial cell state in the first example above is “middle cell blue”, but doing something like simulating the stock market or playing Battleship is just too hard to figure out.
Programming automata is really hard – but evolution can do it!
An evolutionary algorithm is any algorithm that uses some version – usually very much simplified – of the theory of evolution as its core principle. The huge automaton pictured above has a rule located by directed evolution. The directing principle is that you get more reproductive rights the larger you are, except that growing too long or too wide is instantly lethal. The algorithm starts with a population of 1000 randomly generated rules – most of which died – but after many generations of blending copies of pairs of rules, as well as changing the rules a bit, the rule that makes the gigantic, complex picture arose. Cool, no? The blending of rules mimics sexual reproduction; the small changes are inspired by biological mutation. Both of these analogies are quite weak – and much simpler than what happens in actual biology.
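In rough outline, the kind of evolutionary search described above might look like the sketch below. Only the broad shape comes from the description: a population of 1000 rules, blending copies of pairs of rules, small mutations, and a fitness that rewards growth while treating growing too long or too wide as instantly lethal. The rule encoding (a lookup table over three-cell neighborhoods with three states), the grid width, the time limit, the truncation selection, and the exact fitness measure are assumptions made for illustration, not Occupy Math’s published algorithm.

```python
import random

STATES = 3                    # white plus two colors: an illustrative choice
RULE_LEN = STATES ** 3        # one lookup-table entry per three-cell neighborhood
WIDTH, MAX_STEPS = 51, 100    # growing past these bounds counts as "too big"
POP_SIZE, GENERATIONS = 1000, 20

def random_rule():
    # a rule is a lookup table: neighborhood index -> next cell state
    rule = [random.randrange(STATES) for _ in range(RULE_LEN)]
    rule[0] = 0               # an all-white neighborhood stays white
    return rule

def fitness(rule):
    """Reward rules that grow large and then turn themselves off.

    Runs the automaton from a single seeded cell and sums the live cells it
    produces; reaching the edge of the grid or failing to die in time is lethal.
    """
    row = [0] * WIDTH
    row[WIDTH // 2] = 1
    grown = 0
    for _ in range(MAX_STEPS):
        padded = [0] + row + [0]
        row = [rule[padded[i] * STATES**2 + padded[i + 1] * STATES + padded[i + 2]]
               for i in range(WIDTH)]
        if row[0] != 0 or row[-1] != 0:          # grew too wide: instantly lethal
            return 0
        live = sum(1 for c in row if c != 0)
        if live == 0:                            # apoptosis: the automaton shut off
            return grown
        grown += live
    return 0                                     # never turned itself off

def crossover(a, b):
    i, j = sorted(random.sample(range(RULE_LEN), 2))
    return a[:i] + b[i:j] + a[j:]                # blend a pair of parent rules

def mutate(rule, prob=0.02):
    return [random.randrange(STATES) if random.random() < prob else s for s in rule]

population = [random_rule() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POP_SIZE // 2]             # bigger growers earn reproduction
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(ranked) - len(parents))]
    population = parents + children

print("best fitness found:", max(fitness(r) for r in population))
```

A run like this mostly produces dead rules early on; the interesting part is watching the surviving rules grow larger, generation by generation, without crossing the lethal bounds.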
Another advantage of using evolution is that it does not need to find just one answer. The picture below is a contact sheet of the best results of thirty runs of evolution, each driven by a different set of random numbers but otherwise identical. There are a huge number of self-stopping automaton rules, and Occupy Math (well, really his software servants) has found thousands of them. It’s a bit like being able to say “Igor! Fetch me a brain!”, except not in a horror movie.
It gets better. Once you use biologically inspired programming techniques, the rest of biology becomes available as a resource.
Usually, when you use digital versions of evolution, you just start over with new random numbers each time you want to look for something new. You don’t have to, though. Occupy Math has tried selective breeding techniques on his cellular automata. Once an intriguing rule is located, it is not difficult to breed more like it. The pictures below come from rules with a common ancestor that was bred back into the population over and over. Compare the four rules below with the thirty above. This represents a pretty high level of artistic control! Notice that the system found one rule twice; in fact, the breeding runs found 24 different rules in 30 tries, but showing them all would take a lot of space.
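As a rough illustration of the breeding idea, and assuming the rule representation and the `mutate` and `crossover` operators from the earlier sketch, one way to bias a new run toward a favorite ancestor is to seed the starting population with mutated copies of it and to cross the ancestor back into the population from time to time. The helper names and proportions below are invented for illustration.

```python
import random

def seeded_population(ancestor, size, mutate, prob=0.05):
    """Start a fresh evolutionary run from mutated copies of a favorite rule."""
    return [mutate(list(ancestor), prob) for _ in range(size)]

def breed_back(population, ancestor, crossover, fraction=0.25):
    """Cross a favored ancestor back into a fraction of the current population."""
    return [crossover(ancestor, rule) if random.random() < fraction else rule
            for rule in population]
```

Starting from `seeded_population(ancestor, 1000, mutate)` and applying `breed_back` every few generations keeps the search close to the ancestor while still letting new variation accumulate.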
This also demonstrates how you can mine all of biological science for ideas about building better software.
Occupy Math gets a lot of publications out – and this ability to leverage the work of a million smart biologists is a big part of the reason. There is a price – biology is written very differently from mathematics. It’s a bit like learning another language. It’s also worth mentioning that this is how interdisciplinary research is done. There are multiple gold-mines in the interdisciplinary hills, but the time and effort needed to reach out are seldom expended. Occupy Math’s father was a biology professor at the University of Kansas, something that helped him a lot.
This post claims that evolution can program cellular automata by finding good rules – which is actually really weird. Instead of finding a program that does what we want, we start with the program (the initial state) and write a programming language (the automaton’s rule) that does what we want. Other researchers have used evolution to find starting states – but evolving the rule gives you a lot more leverage.
The larger concern is that the “programming” Occupy Math is doing does not yield a precise input-output structure specified in a requirements document.
Instead, the system devised provides a large number of solutions to a somewhat vague set of requirements: “get big but not too big”. It is possible to tighten the requirements – but in general evolution gives us a whole new type of programming where the requirements are aspirational rather than prescriptive. This is not a good thing or a bad thing – it’s a thing and, for some classes of problems, a useful thing. It also requires a bit of a shift in attitudes about what a computer program is.
Occupy Math hopes you have enjoyed this journey into the world of digital evolution. Creating the pictures in this post by traditional mathematical techniques would have been far more difficult – maybe impossible. This leads to some philosophical issues – who gets credit for the evolved programs? The team that designed the evolution software? The inestimable Charles Darwin and his colleagues who explained evolution to us? Are the evolved programs an invention or a discovery? If you have an opinion or an insight, please comment or tweet!
I hope to see you here again,
Daniel Ashlock,
University of Guelph,
Department of Mathematics and Statistics
Are you familiar with Stephen Wolfram’s work with automata? He describes it in his book, “A New Kind of Science.” He finds automata that very closely model a number of natural phenomena. (Most intriguing to me, one that shows turbulence arising as an epiphenomenon of an automaton, and exact matching of patterns generated by automata with coloration patterns of molluscs.)
Intriguing post, but it’s not clear how traditional CA rules can determine when a structure is “too big.” Also, you don’t include your evolutionary software code, so students cannot reproduce what you describe. More details would be welcome. Thanks.