17 Comments

The neutral observers are not neutral. They are trying to manipulate the questioner into behaving "fairly".


Perhaps at least some of the people being experimented on were not initially thinking through whether they were being fair or unfair, rational or irrational, but rather had a thought uppermost in their minds when deciding which task to do: "why didn't the experimenter make the choice for me?" Irritation with the experimenter may make people feel defensive and selfish. Even if they thought it unfair that they had to make the choice of what task to do, they can still evaluate whether their choice was fairest (at least in retrospect) and sometimes tell the truth about that self-evaluation.

Also, do people, at the time they are making a choice, know that if they choose the easy task they are acting unfairly towards later arrivals, or is this a judgement they can make only in retrospect? In the real world, people learn from their moral choices, and may consciously act differently when a similar situation presents itself.


Anyone looked at the actual study more closely? More to the point, anyone have any idea whether you can use this effect on yourself to help debias?

I.e., if I suspect I may be rationalizing something, and I'm having trouble directly driving right through my biases, would trying to distract myself be a useful technique for improving my judgement?


There is an advantage in appearing better than you are, and even in believing that you are better than you are. In some cases it translates as overconfidence, in others as subconscious hypocrisy. The trick is that "better" in appearance and belief refers not to what is good for you, but to what is good for those who observe your behavior. Thus, you deceive yourself into thinking that you are more helpful to others (or your "tribe") than you really are. If you are generally helping people, you think that you are helping them more than you do, and if you secretly steal their resources, you think that you hurt them less than you do. Biases are divergences of reasoning from facts, but they are usually expressed in behavior, which is not just about facts but also about goals. Loading the reasoning process with another task makes this boasting adaptation less able to influence people's decisions.

I expect that if the people who receive a harder task as a result of our participants' actions were presented as being from "another tribe", or as "enemies", the bias would disappear. In fact, a slightly opposite bias might appear.


But did they remember to invert the results for mathematicians?

:-)


My thought is that traditional morality always declares an action forbidden in the hypothetical, yet people frequently engage in whatever serves their personal interests, justifying it in situational terms.


What is the moral weight of taking the activity you prefer vs. taking the easy one? Personally, I think I'd vastly prefer a mental geometry exercise to a hunt through pictures, so would it be fair of me to claim my preferred activity for myself? Perhaps it is moral to sacrifice my preferred activity, but without knowing the other party's preference this is likely to be useless self-flagellation. What about the fairness of the "first come, first served" rule of thumb? Is it immoral to punish latecomers?

There is probably a lot more entering the subject's judgement, on various levels, than simply making themselves appear fair.


It seems to me that the natural follow-up question is: can you extract both System 1 and System 2 answers, or do you memoize whichever comes out first? System 1 probably relies on memoizing more than System 2, so going to System 2 first probably won't work, but if they asked the subjects again, after letting them flush the list, would they stick with the claim of unfairness?
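To make the "memoize" analogy concrete, here is a minimal Python sketch (purely illustrative; the function names and verdicts are hypothetical stand-ins of my own, not anything from the study): once either evaluation has produced an answer, the cached verdict is what gets reported on every later query, so asking again only helps if the first answer was never stored.

# A minimal sketch of the "memoize whichever comes out first" analogy.
# The function names and verdicts are hypothetical stand-ins for fast
# intuition vs. slower deliberation; nothing here comes from the study.
_cache = {}

def fast_intuition(situation: str) -> str:
    # Quick, self-serving first impression.
    return "fair"

def slow_deliberation(situation: str) -> str:
    # Reflective verdict that only becomes available later.
    return "unfair"

def judge(situation: str, deliberate: bool = False) -> str:
    # Whichever evaluation runs first gets cached; every later query
    # returns the stored verdict instead of re-deliberating.
    if situation not in _cache:
        _cache[situation] = slow_deliberation(situation) if deliberate else fast_intuition(situation)
    return _cache[situation]

print(judge("assign myself the easy task"))                   # "fair" -- intuition answered first
print(judge("assign myself the easy task", deliberate=True))  # still "fair" -- the cached answer wins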


BTW, this is easily the most fascinating bias I've heard about this whole month.

Er, make that, "over the last 30 days".


Jadagul and Michael Vassar raise a very interesting point. Our intuitions about fairness as a virtue are typically justified from a consequentialist perspective, but this seems to be a case where the "unfair", self-interested action was not inherently unethical. However, this is a very contrived argument, and disregarding intuitive moral rules is nevertheless problematic.

My hunch is that Eliezer is right and this result will hold up even if replicated in a less ambiguous situation.


I think I like Michael Vassar's claim. I can see how you would claim that the participant is acting 'unfairly,' but only insofar as I don't think 'unfair' is actually a bad thing. I certainly don't think that assigning yourself the easy task is doing anything wrong--I do things like that all the time.


If System 2 intelligence is being used to defeat itself, as in rationalization, and System 1 tends to produce good answers on its own, then distracting deliberation will yield better results. If intelligence is being used productively, trying to memorize a string of numbers at the same time will make you do worse. That's the first general rule that comes to mind.


Oops, I hadn't seen the article where the interpretation I gave is given as their explanation too. It is a completely expected and unremarkable result to me, but still worthwhile to have done, of course. Is anybody *really* surprised by this result?

As for a rule of thumb for when to distract yourself, I don't think that is necessary or beneficial if rationalization is a fairly conscious activity. In that case, you can choose to do it or not, and if you can choose to distract yourself to prevent yourself rationalizing something that you have a vested interest in, it would be even easier just not to rationalize in the first place. For complex judgments, some serious thought might be required, and so distraction wouldn't work since it would prevent coming to a judgment. If it turns out not to be so conscious a process, one could try the heuristic that a simple moral judgment that one has any personal interest in at all calls for some distraction (assuming one prefers not to let the rationalization happen).


My interpretation of the results is that when individuals have the mental ability to self-rationalize, they probably will, and they are probably somewhat aware that they are doing it as well. The key point about the intervention, in my mind, is that they were asked to *retain* the list of numbers, implying that they were constantly preoccupied with keeping the list in working memory in order not to forget it. Under these circumstances, most people are simply not capable of coming up with an explanation for why their choice was not unfair while simultaneously holding a list of numbers in working memory. They realize that they will forget the numbers if they stop thinking about them for more than a split second, and they are not comfortable saying "it was fair" without having a rationalization in mind.

I'd predict that if they actually had a real stake in whether they answered "it was fair" or "it was unfair", they would consciously choose to dump the memory task and focus on rationalizing, and they would be somewhat aware that they were doing that. I'd also predict that any intervention that does not tax the executive functions would not show the intervention effect.


My sense on this is that in this case people are favoring themselves over others, but not in a negative-sum manner, so they are acting ethically (by practically any standard I know of, though least clearly that of Smith's neutral observer) but not fairly. Most people lack the mental vocabulary to make such distinctions, however, and those who do make such distinctions rationally don't expect the experimenters to do so. But when not distracted they are vaguely aware of something like this distinction; at least, they are so aware when their self-interest motivates them to put more than typical effort into the evaluation. They were legitimately unwilling to judge their behavior, or that of their teammates, as unethical, but were not inclined/able to conveniently express that it was unfair but not wrong, so they glossed over the details and claimed fairness. Basically, this sort of problem will come up whenever affect-heavy words that have meanings distinct from the word's affect are used.


Could it be that the bias is actually on the part of those who were judging the experiment but not participating in it?

My first thought was that it's not morally wrong, since you're participating in an experiment where you do not know how long it will take or how difficult it will be. Therefore choosing the easy option doesn't cause the latecomer any distress and is not immoral. Now, when you describe the experiment to people and ask them if it's immoral, their perspective might bias them to consider it immoral, because they might attribute the global knowledge they have to the latecomer who they think is being harmed. When you're participating in the experiment, however, the correct rationalization might come easier to hand (unless you're distracted).
