This post was inspired by Occupy Math's grading of a recent midterm. Consider the following matching test. It has an odd property: you can complete it with very little actual knowledge about Sally, Thomas, cars, or cooking. If you are not from an English-speaking background, the useful tidbit that Thomas is a male name and Sally a female one may evade you, but mostly everyday knowledge carries the day on this test.

**What kind of thinking is this and is it useful in math?**

A physics teacher might say this type of thinking is very close to *unit analysis* or *dimensional analysis* in which you can get a lot of information about a problem by looking at what the units or dimensions of the answer are supposed to be. Imagine, for example, that a story problem ends “… then what is the speed of the car?” If your answer is 312.2 kilograms, then you have a problem — no car built by humankind has ever had its speed measured in kilograms.
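The kilogram-speed check can be sketched in code. The toy `Quantity` class below is a hypothetical illustration, not any real units library: it carries exponents of the SI base units (metre, kilogram, second) alongside a value, so a dimension mismatch can be caught mechanically.

```python
# A minimal sketch of dimensional analysis (hypothetical Quantity class,
# not a real library): each value carries exponents of metre, kg, second.

class Quantity:
    def __init__(self, value, m=0, kg=0, s=0):
        self.value = value
        self.dims = (m, kg, s)  # exponents of (metre, kilogram, second)

    def __truediv__(self, other):
        # Dividing quantities subtracts unit exponents: m/s has dims (1, 0, -1).
        return Quantity(self.value / other.value,
                        *(a - b for a, b in zip(self.dims, other.dims)))

distance = Quantity(100.0, m=1)  # 100 metres
elapsed = Quantity(8.0, s=1)     # 8 seconds
speed = distance / elapsed

# A speed must have dimensions length/time, i.e. exponents (1, 0, -1):
assert speed.dims == (1, 0, -1)

mass = Quantity(312.2, kg=1)     # 312.2 kilograms
# Reporting this as the car's speed fails the dimension check:
assert mass.dims != (1, 0, -1)
```

The point is not the class itself but the habit it encodes: compare the dimensions of your answer against the dimensions the question demands before trusting the number.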

The thinking used to deduce, at least in the opinion of the person writing the matching problem, that Sally is the smartest person in the class is called *type analysis* (not typological thinking, which is totally different). It rests on the fairly obvious principle that a necessary condition for an answer to be correct is that it must *at least* be the right type of object.
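That necessary condition is easy to mechanize. Here is a small sketch, with a hypothetical `right_type` checker and an invented expected-type table, of what it means to ask whether an answer is even the right kind of object before asking whether it is correct.

```python
# A sketch of "type analysis": before judging whether an answer is correct,
# check whether it is the right *kind* of object. The expected-type names
# and checker below are hypothetical, purely for illustration.

def right_type(answer, expected):
    """Return True only if `answer` is the kind of object `expected` names."""
    checks = {
        "number": lambda a: isinstance(a, (int, float)),
        "vector": lambda a: isinstance(a, (list, tuple))
                            and all(isinstance(x, (int, float)) for x in a),
    }
    return checks[expected](answer)

assert right_type(11.66, "number")           # a number where a number belongs
assert not right_type((3.0, 4.0), "number")  # a 2-D vector cannot be the answer
assert not right_type("x + y", "number")     # nor can a formula in x and y
assert right_type((3.0, 4.0), "vector")
```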

**Why is Occupy Math droning on about this obvious commonplace?**

Because people make this “type” of error all the time (joke!). Occupy Math is in the midst of grading a midterm examination and has already taken off over a thousand points (from several different papers) because people:

- reported a two-dimensional vector when the answer was a number,
- used a point in space as if it were a direction vector,
- reported a formula with “x” and “y” in it when the answer was a number (11.66),
- reported 31.764 as the answer when the answer was a collection of three different equations,
- and forgot to put their name on their paper.

Okay, that last one is not a type error, it is just an error, but we still have the problem that many students turned in answers that absolutely had to be wrong because they were not even the correct type of answer.

There are several causes for this storm of errors. The first is that Occupy Math’s course has moved on to multivariate calculus, so there are far more types of objects. Up until this semester we’ve had numbers, formulas, and points. We just added vectors, direction vectors, gradients, systems of formulas, dot products, and some new operators like the gradient operator ∇ (“nabla”), which has a pretty funny-sounding name.

Another, more troubling, cause of these errors is that a lot of the problems that arise naturally in multivariate calculus have two, three, or even four steps. Before starting this, their first year of university, many of Occupy Math’s students had done almost exclusively one-step problems. Many students did the first step of a problem correctly and then stopped. You can pass the course on partial credit, but not with grades that will get you a scholarship or make it possible to get into medical school, law school, or graduate school. It also means you will miss out on having an enhanced level of clue, a precious commodity in any case.

A last and most troubling cause of these type errors is that the students **have never been asked to look critically at their own work**. It’s not hard to realize that a car does not get a mileage of 312.2 kilograms (not even “per gallon”, just kilograms) but students routinely plunge in and do computations without stopping to consider the meaning of a question.

**This is substantially our fault as mathematics educators for not teaching math as a tool for critical thinking.**

What can be done? The first step is to start asking students to consider the question “is that a reasonable answer?” or even “is a car’s speed really measured in kilograms?” Remember that a question is always better than a correction and that mocking a student is usually a very bad idea. A lot of this can be done by us as individual math educators. Changes to the curriculum, making self-criticism an explicit goal, would also help. Occupy Math would be over the moon if the students walking in the door of his class were all immunized against the kilogram car velocity effect. He also works on this himself, but at the university level it is often too late to completely cure people of kilogram car velocity syndrome.

When Occupy Math was in graduate school one of his friends was working on a Ph.D. in physics and, in the course of that, was assigned to teach a course in estimation. He would sometimes be grading papers while having lunch and would ask Occupy Math his opinion of a given piece of work. Among the questions and answers were the following gems.

- Asked to estimate how long it would take the eraser of a pencil balanced on its point to hit the table, answers varied from a time so small that the eraser was moving at an average of 110% of the speed of light to an eraser that took several millennia to reach the tabletop.
- When asked to estimate the time for half the air in an empty coffee cup to exchange with air outside of the cup, one student got a very, very large answer. Occupy Math was asked to comment and noted “in that intermediate step the air is denser than plutonium”.
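A back-of-the-envelope check shows how far off those pencil answers were. The figures below are assumptions for illustration (a roughly 15 cm pencil, treated as if the eraser simply fell through the pencil's length; the real tipping motion differs, but not by orders of magnitude):

```python
import math

# Order-of-magnitude sanity check for the falling-pencil question.
# Assumed figures, not from the course: pencil length ~0.15 m, g = 9.8 m/s^2.

g = 9.8        # m/s^2, gravitational acceleration
length = 0.15  # m, a typical pencil

t = math.sqrt(2 * length / g)  # free-fall time through the pencil's length
avg_speed = length / t         # average speed of the eraser on the way down

# The fall takes a noticeable fraction of a second, and the eraser moves at
# well under one metre per second: nowhere near the speed of light, and
# nowhere near millennia.
assert 0.01 < t < 1.0
assert avg_speed < 3.0e8
```

Any estimation formula whose output lands outside bounds this generous has gone wrong somewhere, which is exactly the check the students were not making.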

For the most part the students were using formulas that, when correctly applied, could yield a reasonable answer — but they completely lacked the ability to ask “is this a reasonable answer?”

Occupy Math thinks type analysis — the simple act of asking “is my answer of the correct type?” — should be a routine part of math classes. It is also worth mentioning that unit analysis saved Occupy Math’s bacon on several undergraduate tests. If you know the answer has to be in “kilogram meters per second squared” and it currently is not, this can lead you right to your error. Both of these formal methods of considering whether your answer has any hope of being correct are valuable. There is a name for problems that help with this sort of thinking and the associated training: Fermi questions. There is even an online Fermi question generator that you (or your students) can play with. If you have other similar tips and tricks for keeping students from turning in absolute rot for answers by accident, do not be shy: please comment or tweet!

I hope to see you here again,

Daniel Ashlock,

University of Guelph,

Department of Mathematics and Statistics