31 Comments

Not sure this sheds light, but I'm reminded of the anecdote from "Surely You're Joking, Mr. Feynman!" in which, IIRC, Feynman ends up seated next to a member of the royal family at the Nobel dinner. She asks him what he does, Feynman says physics, and she says "oh, physics, no one knows anything about *that*" by way of dismissing the topic. Feynman responds with something like "on the contrary, people *do* know something about physics, which is precisely why people find it so difficult to discuss," earning himself a memorably icy royal glare.

For example, the most dramatic successes of prediction markets, i.e., where correct market forecasts most differ from official forecasts, are for project deadlines.

Seems you'd have to give one hell of a subsidy to get people to bet on project deadlines. It's bad news for prediction markets if the most useful applications involve the most boring topics.

Good point. I have considered that possibility. My clients obviously self-select, so this is not a random subset of the population of all decisions. Perhaps it is only a problem for decisions managers already recognize as problematic. Of course, even if the Measurement Inversion were limited only to complex, difficult decisions, that would be bad enough. But I wonder how much bigger the impact could be if it were also an aspect of mundane, routine, and extremely frequent decisions. That would be even scarier, I think.

Doug Hubbard

Generally, yes. Of course, if they hire us they feel a bit of a commitment to follow through with our recommendations. But don't take for granted how "obvious" this seems to them. The high-information-value measurements are routinely a surprise, and sometimes obvious only in retrospect.

Of course, the high-uncertainty variables are more likely to have high information value, depending also on how sensitive the model is to that variable. Part of the problem - which I think I might add to the next edition - is that there appears to be an assumption that if there is a lot of uncertainty, you need a lot of data to measure it. Mathematically speaking, just the opposite is true. They may presume that if something is highly uncertain, they need even more effort to measure it when, in fact, it is in the case of high uncertainty that just a few observations offer the biggest uncertainty reduction. If you know almost nothing, almost anything will tell you something.

But I don't claim to know all the reasons why it occurs. These are just my hypotheses. What I have observed is that 1) the measurement inversion exists and 2) when you compute information values there tends to be a high level of acceptance of the need for the measurement (at least among my clients). Further social science research on this would be interesting.

Doug
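The mathematical point in the middle paragraph can be illustrated with a simple conjugate-normal update - a rough sketch with made-up priors and noise levels, not Hubbard's actual procedure. The same three observations shrink a wide prior dramatically but barely move a prior that is already narrow:

```python
# Sketch: posterior uncertainty about a normal mean after n noisy observations.
# All numbers below are illustrative assumptions, not figures from the book.
import math

def posterior_sd(prior_sd, measurement_sd, n_obs):
    """Posterior std. dev. of a normal mean after n_obs independent readings."""
    prior_prec = 1.0 / prior_sd**2          # precision of the prior
    data_prec = n_obs / measurement_sd**2   # precision contributed by the data
    return math.sqrt(1.0 / (prior_prec + data_prec))

measurement_sd = 10.0            # assumed noise in a single observation
for prior_sd in (100.0, 2.0):    # very uncertain vs. already well-measured
    for n in (0, 1, 3, 10):
        print(f"prior sd {prior_sd:6.1f}, {n:2d} obs -> "
              f"posterior sd {posterior_sd(prior_sd, measurement_sd, n):7.2f}")
```

With the wide prior (sd 100), three observations cut the uncertainty to roughly 5.8; with the narrow prior (sd 2), the same three observations leave it at about 1.9.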

If you explain this seemingly obvious truth to them and they understand your explanation, do they stop resisting novel measurements and seek guidance on how to make them? Is it that easy? If not, social science needs a further explanation of their resistance, although the explanation may be irrelevant in your role as methodologist. (But it might be relevant in your role as advisor.)

You have an initial state of uncertainty prior to a measurement. Measurement further reduces this state of uncertainty. You compute the value of the additional uncertainty reduction while still in the prior state of uncertainty. That actually is the standard decision-theory method for computing EVPI and EVI.

Doug
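For readers who want that standard calculation spelled out, here is a minimal sketch on an invented two-action, two-scenario decision (the probabilities and payoffs are purely illustrative, not from any client example):

```python
# Sketch of EVPI: value of the best action with perfect information, minus the
# value of the best action chosen while still in the prior state of uncertainty.
scenarios = {"high_demand": 0.40, "low_demand": 0.60}   # prior probabilities
payoff = {                                               # payoff of each action ($M)
    "invest":      {"high_demand": 5.0, "low_demand": -2.0},
    "dont_invest": {"high_demand": 0.0, "low_demand":  0.0},
}

# Value of the best action chosen under the prior (still uncertain).
expected = {a: sum(p * payoff[a][s] for s, p in scenarios.items()) for a in payoff}
value_prior = max(expected.values())

# Value if the uncertainty were resolved first, choosing the best action per scenario.
value_perfect = sum(p * max(payoff[a][s] for a in payoff) for s, p in scenarios.items())

evpi = value_perfect - value_prior
print(f"best action under prior: {value_prior:.2f}M; "
      f"with perfect info: {value_perfect:.2f}M; EVPI = {evpi:.2f}M")
```

In this made-up case the best prior action is worth $0.8M, perfect information is worth $2.0M in expectation, so the EVPI - the most you should pay for any measurement of this variable - is $1.2M.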

Yes, that is one of the explanations I offer. They don't measure the most uncertain variables because they don't know how to measure them - and they are uncertain because they haven't been measuring them.

Doug Hubbard

But they keep measuring what they already know more about - because they know how to measure it. So it's not obvious to them. That's the whole point.

Doug Hubbard

The hypothesis on offer for why people resist measuring the most informative variables is that they don't like making unfamiliar measurements, even though the familiar ones add little value precisely because they have already been measured. This doesn't completely explain the hostility to informative measurements, but it suggests an explanation: human discomfort with the unfamiliar and attraction to the familiar. ( http://en.wikipedia.org/wik... )

I believe I grasp the concept of information value. What I didn't get is how you could determine what a variable is worth if you had no measurement of it. The answer, it seems, is that the variable could be measured after the fact. This does raise the question of whether there might be other unmeasured variables that would have been predictive but whose magnitudes are unavailable later (though I realize the latter was not what you were studying).

So when you talk about the info value of a parameter, you are talking about the value of knowing that parameter 10% more precisely? In that case I guess it seems obvious that once you know a lot about a variable there is little to be learned by knowing it a little better.

Chip,

The value of additional information is zero once the quantity is known exactly. Information has value only while you still have uncertainty. How much would you pay me to know the orange harvest if you already knew it exactly? Nothing. You would only pay me to measure it if you still had uncertainty that would affect some relevant decision for you. The value of information is a standard and well understood calculation in decision theory.

What I said in the book is that the items with the highest information value tended to be those items that they would not have measured otherwise. I ask them, before I compute the information values, what they would have measured, or I look at evidence of what they had been measuring for earlier decisions. The high information values are often a surprise.

Doug Hubbard
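To make the "orange harvest" point concrete, here is a small sketch (with an invented bet, not Hubbard's example) showing that the expected value of perfect information falls to zero as prior uncertainty disappears:

```python
# Sketch: EVPI for a two-scenario bet - act (uncertain payoff) vs. do nothing (0).
# Payoffs and probabilities are illustrative assumptions only.
def evpi(p_good, payoff_good=5.0, payoff_bad=-2.0):
    """EVPI when acting pays payoff_good with prob p_good, else payoff_bad."""
    expected_act = p_good * payoff_good + (1 - p_good) * payoff_bad
    value_prior = max(expected_act, 0.0)   # best action chosen under the prior
    value_perfect = (p_good * max(payoff_good, 0.0)
                     + (1 - p_good) * max(payoff_bad, 0.0))
    return value_perfect - value_prior

for p in (0.5, 0.8, 0.95, 1.0):   # increasing certainty about the outcome
    print(f"P(good) = {p:.2f} -> EVPI = {evpi(p):.2f}")
```

As the prior probability approaches certainty, the EVPI drops from 1.0 to 0.4 to 0.1 and finally to exactly zero: once you already know the answer, a measurement is worth nothing.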

Robin,

First, thanks for the review. I believe that the first and third items on this list can also be a systematic error. At no point do I or would I suggest that such a persistent pattern is a result of non-biased, random accident. Now that I've seen this same pattern on over 80 projects to date, I can confidently say the chance of this being the result of random error is virtually zero.

In regards to the first item on the list, what they have historically been measuring tends to be those things that (no surprise) have less uncertainty at this point. If you've been measuring system downtime for years, you probably have a better benchmark for it when asked for an estimate than something you've never measured at all - say, the adoption rate of a new technology by users or the additional revenue from some new product feature. Familiarity with lots of historical examples of a measurement no doubt reduces uncertainty compared to measurements where even a sense of scale is lacking.

The third item depends on the first. Given that X is something they haven't been measuring at all, they may have a lot of uncertainty about X, and it may have a high information value for a given decision. But they may go on presuming that X is "too difficult" to measure because they have no sense of the relative value of measuring it compared to the potentially increased cost of measuring something unfamiliar. If we knew that the information value was $2 million, that might be enough to indicate to a manager that perhaps they should consider moving out of their comfort zone and learning how to measure something different.

Doug Hubbard

Must there be one for cd to form the hypothesis?

Cross-posting from an answer I gave to essentially the same question on Luke's post:

One possibility is that there are a very large number of things they could measure, most of which have low information value. If they chose randomly we might expect to see an effect like this, and never notice all the low information possibilities they chose not to measure.

I'm not suggesting that they actually do choose randomly, but it might be that they choose, say, the easiest things to measure, and that these are neither systematically good nor bad, so it looks similar to random in terms of the useful information.
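A quick Monte Carlo sketch of that selection-effect argument (all distributions are arbitrary and purely for illustration): if "ease of measurement" is unrelated to information value, measuring the easiest variables captures about as much value as measuring random ones, and far less than measuring the highest-value ones.

```python
# Sketch: compare three measurement-selection strategies over many simulated
# decision problems. Value and ease distributions are arbitrary assumptions.
import random

random.seed(0)
trials, n_vars, k = 2000, 30, 5
captured = {"easiest": 0.0, "random": 0.0, "highest value": 0.0}

for _ in range(trials):
    # (information value, ease) pairs; skewed values, ease independent of value
    variables = [(random.expovariate(1.0), random.random()) for _ in range(n_vars)]
    by_ease   = sorted(variables, key=lambda v: v[1], reverse=True)[:k]
    by_value  = sorted(variables, key=lambda v: v[0], reverse=True)[:k]
    at_random = random.sample(variables, k)
    captured["easiest"]       += sum(v for v, _ in by_ease)
    captured["random"]        += sum(v for v, _ in at_random)
    captured["highest value"] += sum(v for v, _ in by_value)

for strategy, total in captured.items():
    print(f"{strategy:>13}: avg info value captured = {total / trials:.2f}")
```

Under these assumptions the "easiest" and "random" strategies capture roughly the same total information value, which is consistent with the point that picking by ease would look no better than picking at random.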

I guess this is "Non-Solution Bias". People tend to measure what is easy to see and understand. Hubbard is not consulted if this works.

He sees the cases where this didn't work. Clients were measuring the wrong things and didn't understand their real problems.
