Heads In The Sand

The end of a Boston Globe article on “The future of prediction”:

But the real question, when it comes to predicting the future of forecasting, may not be whether we can or can’t forecast accurately — it’s whether we want to. Robin Hanson, an economist at George Mason University and a pioneer of prediction market design, thinks that what’s holding back our ability to predict is not technology or a lack of ingenuity. He believes companies and governments already have much of what they need to be a lot better at predicting the future, and that the reason they’re not taking more advantage of it is that in many cases, having accurate predictions in hand makes managers, CEOs, and government officials accountable in a way that lots of them don’t want to be.

That’s because knowing the future can be a scary thing: It means genuinely answering for the costs of our decisions, confronting the likelihood of failure, seeing that arrows point down as often as they point up. When we’re offered a look into the crystal ball, it may in fact be human nature to turn away.

“We’re two-faced,” Hanson said. “We like to talk as though we wanted better forecasts, but often we have other agendas. When the opportunity to know the future presents itself — as, increasingly, it will — we may end up discovering that we’d rather stay in the dark.”

When projects fail, project managers like to say, “No one could have foreseen that. We did the best we could.” This strategy doesn’t work so well when prediction markets or other credible methods create clear public track records showing consensus estimates of a high chance of failure, and perhaps also of what could have been done to reduce that chance.
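For readers who want the mechanics made concrete: the usual way a prediction market produces such a public consensus estimate is an automated market maker, such as Hanson’s logarithmic market scoring rule (LMSR). The Python below is only a minimal toy sketch of that rule; the class, liquidity parameter, and example trade are illustrative assumptions, not anything taken from the Globe article.

```python
import math

class LMSRMarket:
    """Minimal LMSR market maker for a binary question,
    e.g. outcome 0 = "project fails", outcome 1 = "project succeeds".
    b is the liquidity parameter: larger b means prices move less per trade."""

    def __init__(self, b=100.0):
        self.b = b
        self.shares = [0.0, 0.0]  # outstanding shares per outcome

    def _cost(self, shares):
        # LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(q / self.b) for q in shares))

    def price(self, outcome):
        # Instantaneous price of an outcome = the market's current consensus probability.
        exps = [math.exp(q / self.b) for q in self.shares]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, amount):
        # A trade costs the difference in the cost function before and after it.
        before = self._cost(self.shares)
        self.shares[outcome] += amount
        return self._cost(self.shares) - before

market = LMSRMarket(b=100.0)
print(f"P(fail) before any trades: {market.price(0):.2f}")          # 0.50
paid = market.buy(0, 80.0)  # a trader who expects failure buys 80 "fail" shares
print(f"P(fail) after that trade:  {market.price(0):.2f}  (cost {paid:.2f})")
```

Because each quoted probability is backed by trades someone paid for, the market leaves a dated, public record of what the consensus expected, exactly the kind of record that makes “no one could have foreseen that” hard to claim afterward.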

  • Nice mention.

    So how do we prevent people from making policy decisions that were obviously disastrous at the time, when they don’t suffer adverse consequences proportionate to the harm their decisions cause?

    • Faul_Sname

      Alternatively, how do we reformulate the system so that good but unimpressive policy decisions are rewarded in a manner that correlates with how good they actually were, rather than with how good they appeared outwardly? I suspect that it is easier to reward than to punish in this case.

      • Seasteading 🙂

        And on a more serious note, for Jeffrey Soreff: possibly. Hanson thinks that we limit the ability of corporate raiders to remove bad management. That would leave open the possibility of starting a new firm with a more accurate decision-making mechanism and sweeping away the competition, but perhaps barriers to entry stop that. And there’s also the possibility that all attempts at internal corporate decision markets would be futile because the staff would rebel, à la the morale-preserving effect of downwardly sticky “efficiency wages”.

      • A problem is that policies and events with long lead times take a long time to mature. Until that time has passed, how successful (or not) the policy was remains in dispute, and it often remains in dispute long afterward because people are unwilling to admit they made bad policy.

        Often it is the people who made and supported the policy who evaluate its success and simply lie about it. For example, “Mission Accomplished”. There are many partisans who still can’t admit (even to themselves) that the original action was not a good idea, even though it has cost 50x what it was estimated at.

        AGW is also a good example. There is no scientific dispute about CO2 in the atmosphere causing warming. Pretending CO2 does not cause global warming can’t be good policy, yet that is what many are doing.

        Pretending that greenhouse gases don’t affect the climate is bad policy. Yet it is that bad policy that is rewarded by those who benefit from selling fossil fuels now. One might disagree about the economics of switching to alternatives to fossil fuels. But pretending that there is no need to even consider it is clearly bad policy.

  • KPres

    At the same time, being able to predict the future presumably means we can calculate the likelihood of failure, giving administrators a demonstrable excuse; i.e., they can say, “well, everybody knew there was a 40% chance of failure when we started.”

    Usually it’s the critics screaming loudest from the peanut gallery, though, who bear no responsibility: they sit on the sidelines, say things were obvious after the fact, and overlook the countless times their criticisms were way off the mark.

  • Preferred Anonymous

    I had written a very long involved response here…until I realized that this is a bunch of opinion, speculation, and hogwash based upon someone’s personal belief and a distorted view of project management and governmental competence.

    I’d sooner suspect any lack of foresight had to do with elected stupidity rather than any purposeful luddite behavior towards foresight.

    We cannot see the future, only predict it (if you can’t understand the distinction, please deign not to reply). Invoking psychological arguments doesn’t change basic logical principles over cause and effect.

    • Faul_Sname

      “I’d sooner suspect any lack of foresight had to do with elected stupidity rather than any purposeful luddite behavior towards foresight.”

      There is a limit to the stupidity I am willing to assume in our elected leaders. If the goal of our leaders was to make the best decision using all the information, I would expect any reasonably intelligent leader to use prediction markets and more experimental policies. There are three ways I can see that I could be wrong. First, I might be wrong about the effectiveness of prediction markets. In that case, leaders simply decided they were not effective enough to be worthwhile. Second, I could be mistaken about the intelligence of our leaders: it could be that what seems obvious to me does not even cross their minds. This strikes me as unlikely, as politicians tend to be quite intelligent and perceptive. The third alternative, of course, is that politicians are more interested in reelection than in providing reliable metrics of their performance. This is not at all outlandish, as we use a system that selects for electability rather than performance. The two are correlated, but not perfectly. A high-performing politician who used prediction markets and had a documented record of failure would be less likely to be elected. Unless I am mistaken, that is the main point professor Hanson was trying to make.

      “We cannot see the future, only predict it… Invoking psychological arguments doesn’t change basic logical principles over cause and effect.”

      Assuming you mean that we cannot predict the future with as much clarity as we can see the past, you are entirely correct. We (probably) can, however, predict it with higher accuracy than our current politicians do.

      I am honestly not seeing what you are trying to say on the psychological aspect of your point. Perhaps you should post the long, involved response you came up with before you decided that this was “a bunch of opinion, speculation, and hogwash based upon someone’s personal belief and a distorted view of project management and governmental competence.”

  • Jeffrey Soreff

    That’s odd. Are you saying that marketplace competition between firms fails to replace firms which use a poor technology for prediction with firms that use a better one?

    • Jeffrey Soreff raises a vital point. Hanson is talking about an instance of market failure. The assumption that humans are (or can be turned into) rational economic agents is deeply flawed. Who would want to trust anything really important, like major decisions, to the irrationalities of human market behavior?

      The relevant question is whether there’s a way to organize production to increase predictive accuracy. This, it seems to me, might involve *lowering* the incentives to predict correctly, since the incentives create self-serving biases. (This solution would not appear possible under capitalism.)

  • steve

    I used to work for an electronics company doing research for the DOD and DARPA. Certain projects (not all, but enough to constitute a trend) were foregone failures at the start; we already knew the answer. Yet that didn’t matter at all. What mattered was billing those hours. Any mention of the obvious was met with near-universal peer pressure, actively instigated by management, to modify one’s attitude. They definitely were not interested in accurate predictions.

  • PG

    So the answer could be making the prediction tools somehow anonymous? Surely then market competition will make them spread like wildfire.