12 Comments

So the answer could be making the prediction tools somehow anonymous? Surely then market competition will make them spread like wildfire...


Jeffrey Soreff raises a vital point. Hanson is talking about an instance of market failure. The assumption that humans are (or can be turned into) rational economic agents is deeply flawed. Who would want to trust anything really important, like major decisions, to the irrationalities of human market behavior?

The relevant question is whether there's a way to organize production to increase predictive accuracy. This, it seems to me, might involve *lowering* the incentives to predict correctly, since the incentives create self-serving biases. (This solution would not appear possible under capitalism.)


I used to work for an electronics company doing research for the DOD and DARPA. Certain projects (not all, but enough to constitute a trend) were foregone failures from the start; we already knew the answer. Yet that didn't matter at all. What mattered was billing those hours. Any mention of the obvious was met with near-universal peer pressure to modify one's attitude, actively instigated by management. They definitely were not interested in accurate predictions.


Robin, when are you going to join PredictionBook?


A problem is that policies and events with a long lead time take a long time to mature. Until that time has passed, how successful (or not) the policy was is still in dispute. It often remains in dispute long after that time because people are unwilling to admit they made bad policy.

Often it is the very people who made and supported the policy who evaluate its success, and they simply lie about it. For example, “Mission Accomplished.” There are many partisans who still can't admit (even to themselves) that the original action was not a good idea, even though it has cost 50x the original estimate.

AGW is also a good example. There is no scientific dispute about CO2 in the atmosphere causing warming. Pretending CO2 does not cause global warming can't be good policy, yet that is what many are doing.

Pretending that greenhouse gases don't affect the climate is bad policy. Yet it is that bad policy that is rewarded by those who benefit from selling fossil fuels now. One might disagree about the economics of switching to alternatives to fossil fuels. But pretending that there is no need to even consider it is clearly bad policy.


Seasteading :)

And on a more serious note for Jeffrey Soreff: possibly. Hanson thinks that we limit the ability of corporate raiders to remove bad management. That would leave open the possibility of starting a new firm with a more accurate decision-making mechanism and sweeping away the competition, but perhaps barriers to entry stop that. And there's also the possibility that all attempts at internal corporate decision markets would be futile because the staff would rebel, à la the morale-preserving effect of downwardly-sticky "efficiency wages".


That's odd. Are you saying that marketplace competition between firms fails to replace firms which use a poor technology for prediction with firms that use a better one?


Alternatively, how do we reformulate the system so that good but unimpressive policy decisions are rewarded in a manner that correlates with how good they were, rather than how good they appeared outwardly? I suspect that it is easier to reward than to punish in this case.
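
One way to make "how good they were" concrete might be a proper scoring rule applied to probability forecasts published alongside each decision. A minimal sketch in Python (purely illustrative; the officials and numbers below are hypothetical, and it assumes decisions come with explicit probability forecasts):

```python
# Illustrative sketch: reward forecast accuracy rather than outward
# confidence, using the Brier score (a proper scoring rule).

def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.
    Lower is better; always guessing 50/50 scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records: the probability each official assigned to
# "this policy will succeed", followed by what actually happened (1 = success).
cautious_official  = [0.7, 0.4, 0.8, 0.3]
confident_official = [0.95, 0.9, 0.99, 0.9]
outcomes = [1, 0, 1, 0]

print(brier_score(cautious_official, outcomes))   # ~0.095 -> rewarded
print(brier_score(confident_official, outcomes))  # ~0.406 -> penalized
```

A rule like this pays only for accuracy, so a record of honest, well-calibrated forecasts beats a record of confident-sounding ones, which is one sense in which rewarding may be easier than punishing.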


"I’d sooner suspect any lack of foresight had to do with elected stupidity rather than any purposeful luddite behavior towards foresight."

There is a limit to the stupidity I am willing to assume in our elected leaders. If the goal of our leaders was to make the best decision using all the information, I would expect any reasonably intelligent leader to use prediction markets and more experimental policies. There are three ways I can see that I could be wrong. First, I might be wrong about the effectiveness of prediction markets. In that case, leaders simply decided they were not effective enough to be worthwhile. Second, I could be mistaken about the intelligence of our leaders: it could be that what seems obvious to me does not even cross their minds. This strikes me as unlikely, as politicians tend to be quite intelligent and perceptive. The third alternative, of course, is that politicians are more interested in reelection than in providing reliable metrics of their performance. This is not at all outlandish, as we use a system that selects for electability rather than performance. The two are correlated, but not perfectly. A high-performing politician who used prediction markets and had a documented record of failure would be less likely to be elected. Unless I am mistaken, that is the main point professor Hanson was trying to make.

"We cannot see the future, only predict it... Invoking psychological arguments doesn’t change basic logical principles over cause and effect."

Assuming you mean that we cannot predict the future with as much clarity as we can see the past, you are entirely correct. We (probably) can, however, predict it with higher accuracy than our current politicians do.

I am honestly not seeing what you are trying to say on the psychological aspect of your point. Perhaps you should post the long, involved response you came up with before you decided that this was "a bunch of opinion, speculation, and hogwash based upon someone’s personal belief and a distorted view of project management and governmental competence."


I had written a very long involved response here...until I realized that this is a bunch of opinion, speculation, and hogwash based upon someone's personal belief and a distorted view of project management and governmental competence.

I'd sooner suspect any lack of foresight had to do with elected stupidity rather than any purposeful luddite behavior towards foresight.

We cannot see the future, only predict it (if you can't understand the distinction, please deign not to reply). Invoking psychological arguments doesn't change basic logical principles over cause and effect.


At the same time, being able to predict the future presumably means we can calculate the likelihood of failure, giving administrators a demonstrable excuse; i.e., they can say "well, everybody knew there was a 40% chance of failure when we started."

Usually it's the critics in the peanut gallery who scream the loudest: they bear no responsibility, sit on the sidelines, and say things were obvious after the fact, yet they overlook the countless times their own criticisms were way off the mark.


Nice mention.

So how do we prevent people from making policy decisions that were obviously disastrous at the time, when they suffer no adverse consequences proportionate to the harm their decisions cause?
