14 Comments

There's another side of this that isn't about the information itself, but about who the students are getting the information from. As in the Milgram experiment, the subjects in all of these experiments were *told* this information; they did not appear to learn it of their own volition. We learn information all the time, but maybe there's something special about information signalled *from those with power* that predisposes us to make a call?

The way to test this would be similar to how the Milgram experiment was replicated with different levels of authority: everything from a real doctor (who has cultivated a persona of authority over a lifetime of practice) wearing a real lab coat telling you #misleading_G0348, to fellow students, to people who are not good-looking, Western, young, educated, industrialized, rich, from democratic countries, etc. If this is a signalling thing, we should predict that the bias correlates with the level of authority, or at least does not correlate inversely.

Similarly, moderating whether or not any information was given at all (and whether the subjects were conscious of that fact) could be done, too. Is "THE EXPERIMENT MUST GO ON" enough?


Eliezer, whatever our cognitive process for deciding how much to pay to gather how much info, it will have some knobs associated with how eager it is to gather in various situations. If on average people seem to gather too much info, then we could suspect those knobs to have biased settings, and seek an explanation for that. My impression is that people tend to gather too much, and I offered an explanation, but of course I'll defer to a more careful data analysis.


Robin, my inspiration here is coming from an AI view of "relevance" - algorithms such as D-separation in causal graphs, or Explanation-Based Learning. Rather than conspicuous consumption of information, I would suspect a cognitive mechanism which says something along the lines of: "If this information appears to play a causal role in my thought processes, and I can afford to gather it, then gather it before making my decision." As opposed to, "Imagine the decision I would make with (many possible values of?) the information, weighted by my expectations of finding that particular information; and the decision I would make without the information; and their probable consequences conditional on that information; and gather the information if the cost is outweighed by the expected differential."
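As a rough sketch of the contrast - with made-up names and utilities, and assuming the gathered information would perfectly reveal a single binary variable - the two rules might look like this:

```python
# Illustrative sketch only: hypothetical utilities, and "information"
# modeled as perfectly revealing one binary variable before the decision.

def expected_value_of_information(prior, utilities, cost):
    """Bayesian rule: gather the info only if deciding *after* seeing it
    is expected to beat deciding now by more than the cost.

    prior     -- P(variable is True)
    utilities -- {action: {True: utility, False: utility}}
    cost      -- cost of gathering the information
    """
    # Best we can do deciding now, without the information.
    eu_without = max(prior * u[True] + (1 - prior) * u[False]
                     for u in utilities.values())
    # Best we can do for each possible observation, weighted by how
    # likely we expect each observation to be.
    eu_with = (prior * max(u[True] for u in utilities.values())
               + (1 - prior) * max(u[False] for u in utilities.values()))
    return (eu_with - eu_without) > cost

def relevance_heuristic(appears_causally_involved, can_afford):
    """Heuristic rule: gather the info if it plays a causal role in the
    thought process and gathering it is affordable."""
    return appears_causally_involved and can_afford
```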

Without instruction in probability theory, people don't realize that their expectation of a posterior probability should always equal their prior probability - so properly calculating the probability and weights of evidence we expect to see is not instinctive. On the other hand, people can readily answer the question "Did you think about X while making a decision?" They can't correctly assess the effect that knowing X had on their decision, relative to the decision they would have made if they didn't know X (e.g. hindsight bias). But people do know which thoughts play a causal role in their overall decision - the qualitative relevance, as it might be read directly off the links in a Bayesian network, rather than the counterfactual influence.
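For what it's worth, the point about posteriors follows in one line - this is standard probability, nothing specific to these experiments:

```latex
\mathbb{E}\!\left[P(H \mid E)\right]
  = P(E)\,P(H \mid E) + P(\lnot E)\,P(H \mid \lnot E)
  = P(H \wedge E) + P(H \wedge \lnot E)
  = P(H)
```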

My suggestion is that people are assessing the need to gather information by reading, off an internal causal graph, the qualitative fact of whether the information appears "involved" in the decision; not assessing the Bayesian expected utility of the information even by counterfactual imagination. In other words, people will gather cheap information that makes no difference, so long as it impinges on their thought processes in making the decision - for example, a binary variable that affects the visualized consequence of an action, even though, on *either* value of the variable, and *either* resulting consequence, the actually chosen action is the same.
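Continuing the sketch above, here is a hypothetical variable of exactly that kind - it appears in the causal graph of the decision, but the same action wins under either value, so its Bayesian value of information is zero:

```python
# Hypothetical numbers: the variable changes the visualized consequence,
# but "take_the_job" is the better action under either value.
utilities = {
    "take_the_job": {True: 10, False: 8},
    "decline":      {True: 3,  False: 5},
}
print(expected_value_of_information(prior=0.5, utilities=utilities, cost=1))
# False - the information cannot change the choice, so it isn't worth
# even a small cost.
print(relevance_heuristic(appears_causally_involved=True, can_afford=True))
# True - the heuristic gathers it anyway, because it impinges on the
# thought process.
```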

Or people might try to visualize their choices and calculate the expected utility of information, yet be systematically wrong in their self-assessment of what they will decide - but do people even ask that question? Or to put it another way, would someone who *didn't* know Bayesian decision theory, and who was not mathematically minded - like a hunter-gatherer - even dream of following the Bayesian procedure for deciding what information to gather? Try asking a student who's never taken a course on economics how they think one should decide which information to gather.

And again, remember that "Conspicuous consumption" is a rather complex hypothesis, which involves people, not only attending to social feedback, but anticipating that feedback in advance - thinking for some fleeting moment of how informed they might look in front of others. That's complexity that needs to be justified.


Eliezer, the examples in the post were categories of situations where one can reliably see people spending too much on info, relative to its decision value given what they knew. Iraqi WMD is a single case, and even there it is not clear that too little was spent on info collection given what they knew. The Wason selection test might show people picking the wrong info, but that is different from getting too little info overall.


Decisions based on too little info? How 'bout Iraqi WMDs? It shouldn't be hard to find decisions based on too little info - and in any case, the postulated adaptation is one that says, "If it feels important and you can gather the info, then decide after you've gathered it," not an adaptation that decides how much info needs to be gathered. In other words, the bias is in what "feels important" not being calculated on the basis of the expected utility of the information, but instead on the basis of some other heuristic of relevance. If so, we might find that people also fail to gather relevant information - such as, for example, people who fail to turn over an important card in the Wason Selection Test.

In other words, I'm not clear on how you decide: (1) that we don't have clear examples of "Too little info"; (2) that this is a failed prediction of the "Decide after gathering" hypothesis, which necessitates a rather more complex "Conspicuous consumption" hypothesis (in a case where I see no obvious social benefits for appearing well-informed).


i.e. people pretend to want to become foxes while hiding their inner-hedgehog...

There could also be a "nuclear deterrence" effect, which ends up eliminating conflict (debate).


Eliezer, yes of course we could explain a few examples of too much info as due to heuristic errors. The question is whether one can as easily find examples of too little info as of too much info. I sure can't.


Reading the article shows that, of the nurses told positive results, more than two-thirds agreed to donate the kidney. Also, bear in mind that the students wanted to wait for their test results, too.

I would tend to interpret the above results as showing that, just as humans are not Bayesian truthseekers, humans are not Bayesian expected utility maximizers who calculate the expected utility of information. Instead we have a curiosity instinct, and an instinct to wait on planning until we have the information we're curious about. Three cheers for natural selection.

I think that postulating "conspicuous consumption", a desire to appear informed in front of others, is over-explaining. Especially when you consider the students waiting to purchase plane tickets on their test results, or the nurses wanting to take the compatibility test. It's easier to see it as a simple adaptation that occasionally goes wrong: gather information that comes up as a query when you're wondering what to decide, before you make the decision.


The nurse experiment can be interpreted very differently. It seems that saying you would take the test is a way of signaling some level of altruism (I might donate if the circumstances were right and the need was great) without committing oneself to any course of action. Saying that you would donate a kidney signals even more altruism, but at a higher cost of commitment.


Aaron,

You don't have to be required to share the results of the test, precisely because incompatibility provides an excuse. Anyone interested in helping will take the test, and provide a donation or share the negative test result. A selfish impression-manager would share a negative result, but would not reveal that he had taken the test in the event of a positive result. The structure of incentives means that failing to share your result effectively communicates it.
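A toy enumeration of that incentive structure - the labels and behaviors are my own illustration, assuming observers know a free compatibility test was available:

```python
# Hypothetical labels; assumes observers know a free compatibility test
# was available to everyone.
visible_action = {
    # (disposition, test result): what the person does publicly
    ("altruist", "incompatible"):           "shares negative result",
    ("altruist", "compatible"):             "donates",
    ("impression-manager", "incompatible"): "shares negative result",
    ("impression-manager", "compatible"):   "says nothing",
}

# The only profile that ends up silent is a compatible person unwilling
# to donate - so, as argued above, failing to share your result
# effectively communicates it.
silent = [profile for profile, action in visible_action.items()
          if action == "says nothing"]
print(silent)  # [('impression-manager', 'compatible')]
```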


Getting a compatibility test, I don't believe, requires you to share your result. Also, being incompatible gives the would-be donor a noble excuse. At least they tried.


Information asymmetries confer advantage on the person with the privileged information. In most walks of life, that kind of information can be used as leverage - whether in straight economic negotiations or in more complex political games. I don't know if it's provable empirically, but prima facie it is plausible that being an information junkie would be evolutionarily adaptive in an increasingly socialised human community.


Carl, yes, another reason managers prefer other mechanisms is that they are more fudgeable.


I wouldn't characterize the nurses' decisions as conspicuous information consumption: refusing to take the test clearly signals the boundaries of your altruism (indicating that you would not donate in the event of a positive result), but choosing to undertake the test will allow one to conceal this in the event of a negative result.

On the prediction market front: I suspect that one perceived disadvantage is that their independence, in addition to preventing conspicuous analytical displays, hinders data-fudging. One 'expert substitute' for corporate management is the use of external consultants, which seem to actually serve a CYA function, e.g. "I won't get fired for implementing the McKinsey recommendations, which happen to match up with my initial guidance to the Engagement Manager." The inanity of investment banking 'fairness opinions' further illustrates the point. A non-robust financial model that can be tweaked to give any desired result with minor assumption shifts is far more useful to management seeking to empire-build at the expense of shareholders than a technically superior one.

The ideal targets for marketing effective expert substitutes will be those whose positions involve little politicking and whose compensation is more outcome-dependent. Prediction markets might be an easier sell for companies controlled by private equity funds, where the pitch can be made directly to activist directors, rather than to the management they would render partially superfluous.

However, even in the absence of principal-agent problems, juror biases and the threat of lawsuits might create resistance to expert substitutes: do juries treat errors caused by deferring to expert substitutes more harshly than human errors of judgment, or the reverse? Looking at the malpractice records of doctors going against expert system recommendations, and vice versa, would be valuable here.
