Tag Archives: Math

Math: Useful & Over-Used

Paul Krugman:

Noah Smith … on the role of math in economics … suggests that it’s mainly about doing hard stuff to prove that you’re smart. I share much of his cynicism about the profession, but I think he’s missing the main way (in my experience) that mathematical models are useful in economics: used properly, they help you think clearly, in a way that unaided words can’t. Take the centerpiece of my early career, the work on increasing returns and trade. The models … involved a fair bit of work to arrive at what sounds in retrospect like a fairly obvious point. … But this point was only obvious in retrospect. … I … went through a number of seminar experiences in which I had to bring an uncomprehending audience through until they saw the light.

Bryan Caplan:

I am convinced that most economath badly fails the cost-benefit test. … Out of the people interested in economics, 95% clearly have a comparative advantage in economic intuition, because they can’t understand mathematical economics at all. … Even the 5% gain most of their economic understanding via intuition. … Show a typical economist a theory article, and watch how he “reads” it: … If math is so enlightening, why do even the mathematically able routinely skip the math? … When mathematical economics contradicts common sense, there’s almost always mathematical sleight of hand at work – a sneaky assumption, a stilted formalization, or bad back-translation from economath to English. … Paul[‘s] … seminar audiences needed the economath because their economic intuition was atrophied from disuse. I can explain Paul’s models to intelligent laymen in a matter of minutes.

Krugman replies:

Yes, there’s a lot of excessive and/or misused math in economics; plus the habit of thinking only in terms of what you can model creates blind spots. … So yes, let’s critique the excessive math, and fight the tendency to equate hard math with quality. But in the course of various projects, I’ve seen quite a lot of what economics without math and models looks like — and it’s not good.

For most questions, the right answer has a simple intuitive explanation. The problem is: so do many wrong answers. Yes, we also have intuitions for resolving conflicting intuitions, but we find it relatively easy to self-deceive about such things. Intuitions help people who do not think or argue in good faith to hold to conclusions that fit their ideology, and to not admit they were wrong.

People who instead argue using math are more often forced to admit when they were wrong, or that the best arguments they can muster only support weaker claims than those they made. Similarly, students who enter a field with mistaken intuitions often just do not learn better intuitions unless they are forced to learn to express related views in math. Yes, this typically comes at a huge cost, but it does often work.

We wouldn’t need to pay this cost as much if we were part of communities that argued in good faith. And students (like maybe Bryan) who enter a field with good intuitions may not need as much math to learn more good intuitions from teachers who have them. So for the purpose of drawing accurate and useful conclusions on economics, we could use less math if academics had better incentives for accuracy, such as via prediction markets. Similarly, we could use less math in teaching economics if we better selected students and teachers for good intuitions.

But in fact academic research and teaching put a low priority on accurate, useful conclusions, relative to showing off, and math is very helpful for that purpose. So the math stays. In fact, I find it plausible, though hardly obvious, that moving to less math would increase useful accuracy even without better academic incentives or student selection. But groups who do this are likely to lose out in the contest to seem impressive.

A corollary is that if you personally just want to better understand some particular area of economics where you think your intuitions are roughly trustworthy, you are probably better off mostly skipping the math and instead reasoning intuitively. And that is exactly what I’ve found myself doing in my latest project to foresee the rough outlines of the social implications of brain emulations. But once you find your conclusions, then if you want to seem impressive, or to convince those with poor intuitions to accept your conclusions, you may need to put in more math.


Inequality Math

Here is a distribution of aeolian sand grain sizes:

Here is a distribution of diamond sizes:

On a log-log scale like these, a power law is a straight line, while a lognormal distribution is a downward facing parabola. These distributions look like a lognormal in the middle with power law tails on either side.
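To make the log-log picture concrete (a standard derivation, not from the post): taking logs of each density shows why one plots as a line and the other as a parabola.

```latex
% Power law: a straight line on log-log axes.
f(x) = C x^{-\alpha}
\quad\Longrightarrow\quad
\log f(x) = \log C - \alpha \log x .

% Lognormal: a downward-facing parabola in \log x.
f(x) = \frac{1}{x \sigma \sqrt{2\pi}}
       \exp\!\left( -\frac{(\ln x - \mu)^2}{2\sigma^2} \right)
\quad\Longrightarrow\quad
\log f(x) = -\frac{(\ln x - \mu)^2}{2\sigma^2} - \ln x + \text{const}.
```

The quadratic term in the lognormal eventually dominates, so its tails fall faster than any straight line; that is why the straight power-law tails in the plots stand out against the lognormal middle.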

Important social variables are distributed similarly, including the (people) size of firms:

and of cities:

In these two cases the upper tail follows Zipf’s law, with a slope very close to one, implying that each factor of two in size contains the same number of people. That is, there are just as many people in all the cities with 100,000 to 200,000 people as there are in all the cities with one million to two million people. (Since there are an infinite number of such ranges, this adds up to an infinite expected number of people in huge cities, but actual samples are finite.)
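The equal-people-per-octave claim follows from a size density proportional to x⁻² (a Zipf counter-cumulative slope of one). A quick numerical check, with an arbitrary normalizing constant:

```python
import math

def people_in_range(a, b, c=1.0, steps=100_000):
    """Total population in cities of size [a, b], for a city-size
    density f(x) = c * x**-2 (Zipf tail). Population mass at size x
    is x * f(x); integrate it by the midpoint rule."""
    h = (b - a) / steps
    return sum((a + (i + 0.5) * h) * c * (a + (i + 0.5) * h) ** -2 * h
               for i in range(steps))

low = people_in_range(100_000, 200_000)        # cities of 100k-200k people
high = people_in_range(1_000_000, 2_000_000)   # cities of 1M-2M people
# Analytically both integrals equal c * ln 2, independent of the range.
print(low, high, math.log(2))
```

Any factor-of-two range gives the same total, which is why the expected population of an unbounded Zipf tail is infinite.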

The double Pareto lognormal distribution models this via an exponential distribution over lognormal lifetimes. In a simple diffusion process, positions that start out concentrated at a point spread out into a normal distribution whose variance increases steadily with time. With a normal distribution over the point where this process started, and a constant chance in time of ending it, the distribution over ending positions is normal in the middle, but has fat exponential tails. And via a log transform, this becomes a lognormal with power-law tails.
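The mixture mechanism is easy to see in simulation. Below is a minimal sketch (my illustration, not the full double Pareto lognormal): a zero-drift diffusion started at 0 and stopped after an exponential lifetime T ends at position √T·Z, and mixing normals over exponential lifetimes produces fat exponential tails, visible as excess kurtosis.

```python
import math
import random
import statistics

random.seed(0)

# Each item diffuses for an exponentially distributed lifetime T,
# ending at sqrt(T) * Z with Z standard normal.
n = 200_000
positions = [math.sqrt(random.expovariate(1.0)) * random.gauss(0.0, 1.0)
             for _ in range(n)]

def excess_kurtosis(xs):
    m = statistics.fmean(xs)
    m2 = statistics.fmean((x - m) ** 2 for x in xs)
    m4 = statistics.fmean((x - m) ** 4 for x in xs)
    return m4 / m2 ** 2 - 3.0

k = excess_kurtosis(positions)
print(k)  # near 3 for this mixture (a Laplace distribution); 0 for a normal
```

Exponentiating such positions turns the fat exponential tails into the power-law tails seen in the plots above.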

This makes sense as a model of sizes for particles, firms, and cities when such things have widely (e.g., exponentially) varying lifetimes. Random collisions between grains chip off pieces, giving both a fluctuating drift in particle size and an exponential distribution of grain ages (since starting as a chip). Firms and cities also tend to start and die at somewhat constant rates, and to drift randomly in size.

In the math, a Zipf upper tail, with a power of near one, implies little local net growth of each item, so that size drift nearly counters birth and death rates. For example, if a typical thousand-person firm grows by 1% per year (with half growing slower and half growing faster than 1%), but has a 1% chance each year of dying (assuming no firms start at that size), it will keep the same expected number of employees. Such a firm has no local net growth.
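The firm example is just arithmetic, sketched here for concreteness:

```python
# A 1000-person firm grows 1% per year if it survives, but has a
# 1% annual chance of dying. Its expected size next year is
# (survival probability) * (grown size), which is almost exactly unchanged.
size = 1000
growth = 1.01      # +1% conditional on survival
survival = 0.99    # 1% chance of death
expected_next = survival * size * growth
print(expected_next)  # approx 999.9: no local net growth
```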

Interestingly, individual wealth is distributed similarly. More on that in my next post.


Fixing Election Markets

One year from now the US will elect a new president, almost surely either a Republican R or a Democrat D. If there are US voters for whom politics is about policy, such voters should want to estimate post-election outcomes y like GDP, unemployment, or war deaths, conditional on the winning party w = R or D. With reliable conditional estimates E[y|w] in hand, such voters could then support the party expected to produce the best outcomes.

Sufficiently active conditional prediction markets can produce conditional estimates E[y|w] that are well-informed and resistant to biases and manipulation. One option is to make bets on y that are called off if w is not true. Another is to trade assets like “Pays $y if w” for assets like “Pays $1 if w.” A basic problem with this whole approach, however, is that simple estimates E[y|w] may reflect correlation instead of causation.
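Why the asset pair recovers E[y|w]: under risk-neutral pricing, “Pays $y if w” is worth E[y·1{w}] and “Pays $1 if w” is worth P(w), so their price ratio equals E[y|w]. A toy Monte Carlo check, with made-up numbers of my own:

```python
import random

random.seed(1)

# Hypothetical world: party R wins with prob 0.4 (w = True), and
# conditional on an R win the outcome y averages 2.0.
n = 500_000
pay_y_if_w = 0.0   # running value of the asset "Pays $y if w"
pay_1_if_w = 0.0   # running value of the asset "Pays $1 if w"
for _ in range(n):
    w = random.random() < 0.4
    y = random.gauss(2.0 if w else 3.0, 1.0)
    pay_y_if_w += y if w else 0.0
    pay_1_if_w += 1.0 if w else 0.0

price_ratio = pay_y_if_w / pay_1_if_w  # E[y * 1{w}] / P(w) = E[y | w]
print(price_ratio)  # close to the true conditional mean, 2.0
```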

For example, imagine that voters prefer to elect Republicans when they see a war looming. In this case if y = war deaths then E[y|R] might be greater than E[y|D], even if Republicans actually cause fewer war deaths when they run a war. Wolfers and Zitzewitz discuss a similar problem in markets on which party nominees would win the election:

It is tempting to draw a causal interpretation from these results: that nominating John Edwards would have produced the highest Democratic vote share. …The decision market tells us that in the state of the world in which Edwards wins the nomination, he will also probably do well in the general election. This is not the same as saying that he will do well if, based on the decision market, Democrats nominate Edwards. (more)

However, this problem has a solution: conditional close-election markets — markets that estimate post-election outcomes conditional not only on which party wins, but also on the election being close. This variation not only allows a closer comparison between candidates’ causal effects on outcomes, but it is also more relevant to an outcome-oriented voter’s decision. After all, an election must be close in order for your vote to influence the election winner.
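The correlation-versus-causation problem, and how close-election conditioning fixes it, can be shown in a toy simulation (my numbers, chosen only for illustration): a latent war threat raises both the Republican vote share and war deaths, yet the R party causes fewer deaths in any given world.

```python
import random

random.seed(2)

# Latent threat t drives both who wins and how many die, but by
# construction R *causes* 10 fewer deaths than D would in the same world.
uncond = {"R": [], "D": []}
close = {"R": [], "D": []}
for _ in range(400_000):
    t = random.random()                          # looming-war threat
    vote_r = 0.35 + 0.3 * t + random.gauss(0, 0.05)
    w = "R" if vote_r > 0.5 else "D"
    deaths = 100 * t - (10 if w == "R" else 0)   # R causally reduces deaths
    uncond[w].append(deaths)
    if abs(vote_r - 0.5) < 0.01:                 # a close election
        close[w].append(deaths)

def mean(xs):
    return sum(xs) / len(xs)

print(mean(uncond["R"]) - mean(uncond["D"]))  # positive: R merely *looks* worse
print(mean(close["R"]) - mean(close["D"]))    # negative: R is causally better
```

Conditioning on a close election makes the latent threat nearly identical across the two winners, so the remaining gap in deaths reflects the parties’ causal effects rather than which worlds elect them.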

To show that conditional close markets estimate causality well, I’ll need to get technical. And use probability math. Which I do now; beware.

Continue reading "Fixing Election Markets" »


A Model of Extraordinary Claims

Last week I claimed that the saying “extraordinary claims require extraordinary evidence” is appropriate anytime people too easily make more extreme claims than their evidence can justify. Eliezer, however, whom I respect, thought the saying appropriate anytime people make claims with a very low prior probability. So I have worked out a concrete math model to explore our dispute. I suggest that if you are math averse you stop reading this post now.

Continue reading "A Model of Extraordinary Claims" »
