Worried about the AI Apocalypse? Collaborate with your Computer Instead!

In a number of Occupy Math posts, we’ve looked at fractals. A long time ago, in the Goldilocks information post, we looked at the problem of having too much or too little information. Today’s post reveals one of Occupy Math’s secrets: how to let the computer look for interesting fractals on its own. The word “interesting” is chosen carefully, because the fractals located this way are not beautiful or elegant (yet); they are just interesting in a very specific way. The point is to turn a berjillion fractals, most of which are not that good, into a short list that a human can select from or even brush up a bit. This is an example of code in which the computer acts as a computational collaborator.

In the Goldilocks information post, Occupy Math alluded to the fact that Claude Shannon had devised a method of computing the amount of information contained in something and that the unit of information was bits (the same sort a computer uses). How on earth do you get the number of bits from something?

Finding the information content of a fractal

Each fractal is defined by a function that you apply over and over to points to see how they will behave. There are two possibilities: the point sticks around, moving a little bit, or it moves a long way away. If the latter happens, we save the number of steps it took to move far away. This number of steps (for all the points we check) is what we use later to compute if the fractal is interesting.
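
To make the bookkeeping concrete, here is a minimal sketch in Python. It assumes the familiar quadratic Julia-style map z → z² + c; the bailout radius, step cap, and function names are illustrative choices for this sketch, not Occupy Math’s actual code.

```python
# Minimal escape-time sketch: how many steps does each point take to "move a long way away"?
# Assumes the quadratic Julia map z -> z^2 + c; bailout radius and step cap are illustrative.

def escape_steps(z, c, max_steps=256, bailout=2.0):
    """Number of steps until |z| exceeds the bailout, or max_steps if the point sticks around."""
    for step in range(max_steps):
        if abs(z) > bailout:          # the point has moved a long way away
            return step
        z = z * z + c                 # apply the fractal's function again
    return max_steps                  # the point never escaped

def escape_grid(c, width=200, height=200, x_range=(-1.5, 1.5), y_range=(-1.5, 1.5)):
    """Escape counts for a grid of starting points: the raw data for the interest measure."""
    grid = []
    for j in range(height):
        y = y_range[0] + (y_range[1] - y_range[0]) * j / (height - 1)
        row = []
        for i in range(width):
            x = x_range[0] + (x_range[1] - x_range[0]) * i / (width - 1)
            row.append(escape_steps(complex(x, y), c))
        grid.append(row)
    return grid
```

Each entry of the grid is the step count for one pixel, with the cap standing in for “never escaped.”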

You then compute the chance, across the collection of points tested, that a point took a particular number of steps to escape. Not escaping also has a chance of happening. The information content of the average pixel in the picture is the average, over all pixels, of the negative log (base 2) of the chance of that pixel’s escape count. Why this is the information content of a pixel is something we might cover in a senior-level math or a junior-level computer science course. For now Occupy Math asks you to trust that he has conveyed Shannon’s idea correctly.
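
Continuing the toy code above, here is a short sketch of Shannon’s recipe applied to those escape counts. The function name is an invention for this sketch, not something from the post.

```python
from collections import Counter
from math import log2

def information_per_pixel(grid):
    """Average over pixels of -log2(p), where p is the chance of that pixel's escape count."""
    counts = Counter(steps for row in grid for steps in row)   # "never escaped" is counted too
    total = sum(counts.values())
    # Summing p * -log2(p) over the distinct escape counts is the same as averaging
    # -log2(p of its own count) over all the pixels: Shannon's entropy, in bits per pixel.
    return sum(-(n / total) * log2(n / total) for n in counts.values())
```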

Why crunch all these numbers? The information content is largest when the number of points taking each possible number of steps to escape (or not escaping at all) is more-or-less even. If this per-pixel information content is large, then points in the fractal were doing a lot of different things. This is the sense of “interesting” that Claude Shannon’s measure captures; a quick numerical check of this claim appears just after the figure. Now for the “a picture is worth a thousand words” moment of this post. Here are nine fractals, each with its per-pixel information content printed near the center.

[Image: nine fractals, each labeled with its per-pixel information content]

The most complex image has the highest entropy!
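
To see why an even spread of escape behaviors maximizes the measure, here is a tiny numerical check of Shannon’s formula; the probabilities are made up purely for illustration.

```python
from math import log2

def entropy(probs):
    """Shannon entropy, in bits, of a probability distribution."""
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy([0.25, 0.25, 0.25, 0.25]))   # even spread over four behaviors: 2.0 bits
print(entropy([0.97, 0.01, 0.01, 0.01]))   # nearly every point does the same thing: about 0.24 bits
```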

Once a numerical measure of the information content of a fractal is available, we can set the computer looking for interesting fractals by generating lots of examples and asking it to save the ones with the highest information content. There are more clever ways of searching the space of fractals, though. Occupy Math does research on evolutionary computation, which uses Darwin’s theory of evolution as a search tool. In particular, if we grant breeding rights among fractals based on their per-pixel complexity, we have a really interesting tool for searching a space of fractals; a stripped-down sketch of such a search appears after the gallery below. Here are some examples of well-bred fractals.

[Image: a gallery of evolved fractals]
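
The post does not spell out the evolutionary algorithm, so what follows is only a stripped-down sketch of what granting breeding rights by per-pixel complexity might look like. It reuses escape_grid and information_per_pixel from the sketches above and evolves the single complex parameter c of the toy Julia map, which stands in for whatever richer representation of fractals the real system uses.

```python
import random

def fitness(c):
    """Per-pixel information content of the toy fractal with parameter c (small grid for speed)."""
    return information_per_pixel(escape_grid(c, width=32, height=32))

def evolve(pop_size=12, generations=10, mutation=0.1):
    """Toy evolution: the high-entropy half of the population earns breeding rights."""
    population = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                    # the most interesting fractals breed
        children = [p + complex(random.gauss(0, mutation), random.gauss(0, mutation))
                    for p in random.choices(parents, k=pop_size - len(parents))]
        population = parents + children
    return sorted(population, key=fitness, reverse=True)[:5]  # a short list for a human to inspect
```

Running evolve() hands back a handful of high-entropy parameter values: exactly the kind of short list a human can then select from, or brush up a bit.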

Computational Collaboration

The fractals above were all located by autonomous evolutionary code; Occupy Math then searched for and rendered interesting features inside the original fractals supplied by evolution. Computers are diligent and tireless. Humans are intuitive and clever. As we move farther into the realms of artificial intelligence and deep learning, both humans and computers are smart, at least potentially. Digital evolution is a kind of computational intelligence, and Occupy Math uses it as a tool to augment his own abilities.

This is one reason, among many, why becoming computer literate is a good idea. Some crazy people think computers are labor-saving devices. In the hands of a sane species, they might be, but humans use them as capability enhancers. Rather than doing the same amount of work with less effort, humans use computers to do vastly more work (at least, the humans who can work with computers do). These are early days, and there is plenty of misuse and abuse of the power computers grant. We are still trying to get a handle on the addictive nature of social media, for example.

This post introduces both the idea of information content as an interest measure (there are a lot of other places to apply it!) and the idea of the computer as a useful collaborator. Good software tools make you more effective at getting things done; bad ones may make you want to throw the computer out the window. As we get better at building these tools, we will get better at working with our machines. Do you have examples of computational collaboration to share? Please comment or tweet!

I hope to see you here again,
Daniel Ashlock,
University of Guelph,
Department of Mathematics and Statistics
