A charge that math teachers have to answer all the time is that their discipline is not useful for anything in the real world. In this post, Occupy Math will use mathematics to show that capricious or chaotic government help gets you a worse economy than no help at all. As Occupy Math has noted before, it is untrue that math does not apply to the real world. In fact, convincing people that math is useless is a tool of oppression. Sun Tzu observed that “To subdue the enemy without fighting is the acme of skill.” If you are sapped of your will to even look at or understand quantitative evidence, then you are half-way to helpless in this complex modern world. In this post, Occupy Math will report on one of the papers he submitted to the 2017 IEEE Congress on Evolutionary Computation that shows how math can be used to make simple, informative models of economic policy.

**One of the big results is that unreliable government subsidies are worse than no subsidies.**

The tool Occupy Math is using is a game called *divide-the-dollar*, invented by John Nash, that models making a deal (or failing to make one). Two players bid and, if their bids total no more than a dollar, they get their bids; otherwise they get nothing. Occupy Math and his collaborator Garrison Greenwood specify a family of games that includes divide the dollar. Our first paper is visible here. In that first paper we showed that, if the government offers a subsidy to players that make a close-to-fair deal, then the fraction of deals that actually happen goes up. We establish this by evolving agents to play the game well, rewarding those that make the most money with the right to reproduce.
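To make the rules concrete, here is a minimal Python sketch of the payoff rule for two-player divide-the-dollar with a fairness subsidy. The fairness window and subsidy size shown here are illustrative values, not the parameters used in the paper:

```python
def payoff(bid1, bid2, subsidy=0.0, fairness_window=0.1):
    """Two-player divide-the-dollar payoffs.

    If the bids total at most a dollar, each player receives their bid;
    otherwise both get nothing. When a subsidy is in effect, any player
    whose bid is within `fairness_window` of an even split gets a bonus.
    The window and subsidy amounts here are illustrative only.
    """
    if bid1 + bid2 > 1.0:
        return 0.0, 0.0  # no deal: both bids are lost
    bonus1 = subsidy if abs(bid1 - 0.5) <= fairness_window else 0.0
    bonus2 = subsidy if abs(bid2 - 0.5) <= fairness_window else 0.0
    return bid1 + bonus1, bid2 + bonus2
```

Note that the subsidy only rewards deals that are close to fair, which is why it pulls the players toward bids that coordinate.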

You may ask *evolving agents?* Occupy Math is using one of his main tools here, evolutionary computation, which is any computer algorithm based on Darwin’s theory of evolution. In an earlier blog we used this tool to find beautiful cellular automata. The reason for using this tool is that it lets us create agents — in this case, players for divide the dollar — with nothing beyond the ability to tell how well they play the game. This lets the agent-training software function without a lot of built-in bias and find all sorts of different ways to play, as we will see later in the post.

In the new paper we ran experiments using three players whose bids still need to total at most a dollar. Experiments were done with no subsidy, reliable subsidies, a subsidy that is abruptly cancelled 80% of the way through evolution, and subsidies that are funded year-by-year and only happen three-quarters of the time. The results are interesting. The *coordination fraction* is the fraction of the attempts to make a deal in which a deal was actually made — in other words, in which the bids totaled no more than a dollar.
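The two quantities just described can be pinned down in code. The function names and the way the four subsidy regimes are encoded below are Occupy Math's illustrative choices, but the 80% cutoff and the 75% funding chance come straight from the experimental setup described above:

```python
def coordination_fraction(bid_triples):
    """Fraction of three-player deal attempts whose bids total at most a dollar."""
    made = sum(1 for bids in bid_triples if sum(bids) <= 1.0)
    return made / len(bid_triples)

def subsidy_active(generation, mode, rng, total=250):
    """Is the subsidy paid in this generation, under each experimental regime?"""
    if mode == "none":
        return False
    if mode == "reliable":
        return True
    if mode == "cancelled":
        # abruptly cancelled 80% of the way through evolution
        return generation < 0.8 * total
    if mode == "random":
        # funded year-by-year, happening about three-quarters of the time
        return rng.random() < 0.75
```

A coordination fraction near one means almost every attempted deal actually happens; the box plots below compare this quantity across the four regimes.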

The picture below shows the way the coordination fraction is distributed when:

- there is no subsidy (first three box plots),
- there are reliable subsidies of increasing size (second through fourth groups of three box plots), and
- the subsidy abruptly stops (last three groups of three box plots).

The first (blue) box in each group of three is the rate of making deals for untrained agents, the second (green) is the average rate of deals before the subsidy is withdrawn, and the last (red) is the rate of deal making in the time after the subsidy is withdrawn. Remember that in the first four experiments the subsidy is never withdrawn.

**Withdrawing the subsidy does a lot of damage — look at the last three red boxes! Stable government policy is important.**

The picture above uses box plots to summarize thirty independent runs of each experiment. It is also interesting to look at pictures from individual experiments. The following pictures show, averaged across 36 participating deal makers over 250 generations of training, the amount of money made per deal and the rate at which deals were made. The money should average about 0.33 (one third of a dollar) and the coordination, or rate of deal making, is near one when things are working right. Here is a good outcome for a no-subsidy run. The agents learn how to make deals. The red-and-green plot is money; the blue one tracks the fraction of deals that work.

Here is the same type of picture where the subsidy is reliable for 200 training generations and then is abruptly removed. The agents whose behavior appears in the first picture never really recover; those in the second picture climb out of the hole, but it takes a while.

The simulations where the subsidy is funded randomly each generation — about 75% of the time — are even more interesting. The agents pictured on the left learned to pretend the subsidy would always be funded. When it isn’t, that generation goes very badly, but when the subsidy shows up they are fine. The agents represented by the picture on the right stay in chaos for the entire training period. They *never* learn to exploit the subsidy — but they keep getting in trouble by trying to exploit it.

**The big news for the randomly funded subsidy is that the total amount of money made is lower even though the subsidy is pumping in money! When people don’t make a deal, the money for the deal and the subsidy are both lost.**

The material in this blog is from an unpublished (but submitted) scientific paper that Occupy Math recently finished. It speaks to current events and, generally, to government policy. Let’s put the conclusions that this study supports together in one place.

- An unreliable subsidy is worse, on average, than no subsidy.
- Abrupt cancellation of a subsidy does huge amounts of damage.
- Companies become addicted to subsidies that are always there; they make them part of their base budget.
- Since they encourage deals, reliable subsidies of the sort simulated do increase the total income — beyond the tax dollars needed to pay for the subsidy.

Is there meaning for this research in current events? The wild, capricious behavior of the current American government — threatening long standing treaties, trying to impose “better” deals without considering the consequences, and generally acting in a completely unreliable fashion — *cannot* create a benefit, not even for one side. The losses due to the chaos — like the money Occupy Math’s agents lost when they failed to make deals — will far exceed any possible gains.

Another important take-home point concerns the capabilities of agent-based models. Occupy Math’s model, using evolving agents, exhibited a broad variety of different outcomes. A few agent populations adapted well even to the unreliable subsidies — most did not. A few agent populations never used the subsidy money and were not affected when it was abruptly withdrawn. Recovery after subsidies were cancelled followed many different paths, from rapid recovery to no recovery. Standard economic models, built around differential equations, find it very hard to capture this remarkable variability of outcomes.

Occupy Math hopes both that you have found this trip through an agent-based model interesting and that you find the subject topical. There are a few things missing — Occupy Math avoided explaining how the agents are encoded because it’s long, complicated, and pretty dry. If you are interested, an e-mail to *dashlock@uoguelph.ca* requesting a copy of the draft paper will be honored. Are there other natural targets for this kind of modeling? Are you interested in the difference between agent-based models and other types of models? Please comment or tweet! Occupy Math likes taking his directions from readers.

I hope to see you here again,

Daniel Ashlock,

University of Guelph,

Department of Mathematics and Statistics