More Moral Wiggle Room

A new lab experiment confirms results reported a year ago: people prefer not to know how their actions affect others, when such knowledge would induce them to sacrifice to benefit others.

In the baseline version, each subject chose among five pairs of numbers (x, y), where x is how much money he gets and y is how much money some other subject gets. In each pair (x, y), each number was drawn randomly from the set {1,1,4,4,7}. Here 40 of 63 subjects appeared to put heavy weight on benefits to the other person in making their choices.

In the other treatment, each subject was shown only the x value for each of his five pairs, but could at no cost choose to see the y values. Of the 40 subjects who heavily weighted benefits to others in the baseline version, only 10 chose to see the y values. The others just picked the option that was best for them.
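For concreteness, the setup can be sketched in code. This is my own rough reconstruction, not the authors' protocol: the `draw_pair` and `choose` functions, and the use of x + y as the pro-social criterion (the paper says only that these subjects put "heavy weight" on the other's payoff), are simplifying assumptions.

```python
import random

def draw_pair():
    # Each payoff is drawn randomly from the multiset {1, 1, 4, 4, 7},
    # so 1 and 4 are each twice as likely as 7.
    bag = [1, 1, 4, 4, 7]
    return random.choice(bag), random.choice(bag)

def choose(pairs, see_y, prosocial):
    """Pick one (x, y) pair from those offered.

    see_y:     whether the subject looks at the other person's payoff
    prosocial: if True (and y is visible), maximize x + y; else maximize x
    """
    if see_y and prosocial:
        return max(pairs, key=lambda p: p[0] + p[1])
    return max(pairs, key=lambda p: p[0])

pairs = [draw_pair() for _ in range(5)]
baseline = choose(pairs, see_y=True, prosocial=True)   # y values shown
hidden = choose(pairs, see_y=False, prosocial=True)    # subject declines to look
```

Note that a subject who declines to see the y values ends up choosing exactly as a purely self-interested subject would, which is the point of the treatment.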

"If only people knew how bad things are here in Z-land, they’d do something."  Yes, and maybe that is why they do not know. 

  • http://profile.typekey.com/halfinney/ Hal Finney

    Clicking on the “Digitalisat ansehen (pdf)” link at the right of the linked page leads to the paper itself. It is an interesting experiment, although it seems to have been complicated by some extraneous elements beyond what is described here; for example, subjects may be informed of how much they were given by some third party X who supposedly made a similar decision where the subject was the “other” party. This element seems to be completely ignored in the analysis. Also, the order in which the options were presented is a little more complicated than what might have been expected: phase 1 gave participants the option of uncovering other payoffs, then in phase 2 the other payouts were revealed automatically, and then phase 3 was a repetition of phase 1, where the revelation choice was available. The paper does not discuss whether there was any difference between the results from phases 1 and 3; they seem to just be batched together.

    So there are 3 classes of people: the “pro-selfs”, 23 of the original 63 who chose selfishly even when they saw the payoffs for the other; the “genuine pro-socials”, 10 of the 63 who chose to see the other payoffs when they were optional; and the “ignoring pro-socials”, 30 of 63 who chose to keep the other payoffs hidden. (“Ignoring pro-socials” are defined as people who “did not uncover in either period 1 or 3”; does that mean they never uncovered in any response in those periods, or that they failed to uncover at least once in those periods? The phrasing is ambiguous, but it might make quite a difference in the results.) The paper shows some statistics about results from the groups’ responses on a psychological questionnaire to elicit their emotional reactions to the experiments.

    Both pro-selfs and genuine pro-socials were alike in not feeling conflicted in making their choices, unlike the ignoring pro-socials who did feel conflict. This is the advantage of avoiding hypocrisy and self-deception: both true altruists and true selfists can make choices from consistent premises. It might be said to be the motivation for those who are interested in the program of this weblog.

    On another dimension, feelings of guilt were distributed differently and rather oddly. Genuine pro-socials reported feeling guilty, while pro-selfs and ignoring pro-socials, both of whom chose to benefit themselves, felt relatively free of guilt. Perhaps this points to the motivations of the genuine pro-socials: it is their feelings of guilt over their good fortune in being the one to decide the distributions that impel them to ensure that their partner is taken care of.

    On yet a third dimension, pride, the distribution took the other alternative. Pro-selfs reported feelings of pride, while both ignoring pro-socials and genuine pro-socials did not. This also seems a bit confusing: why would the selfish choice be associated with feelings of pride? Perhaps it too points more to the nature of people who make selfish choices, that they have inherent psychological feelings of self-pride that give them the strength to make socially disapproved choices.

    It’s possible that I am misinterpreting their data, since they don’t reveal the specific questions and I can come up with an equally compelling story to explain results that were the opposite of what is reported above, at least for pride and guilt! Also the “Change” row of their table seems inconsistent; it records whether subjects changed their choices between periods 1 and 2 (when the option was and wasn’t present to keep the other payoff hidden), but it seems to show both ignoring and genuine pro-socials as positive on that measure, while genuine pro-socials should not be recorded as having changed. Either I am totally misunderstanding them or this is an error in the table.

  • Ryan J. McCall

    I see a contradiction: “but could at no cost choose to see the y values” and “only 10 of them chose to see the y values” both about the second treatment. What am I missing?

  • anonymous

    Sorry to quibble, but I think you mean “affect” in the first sentence where it says “effect”

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    This explains a lot.

  • John

    I just realized that ignoring how much the other person would get would be perfectly rational. I’d rather donate money to a university psychology experiment than some random stranger.

  • Pseudonymous

    Possibly the world’s least surprising result…

  • Michael Sullivan

    While I think most people will treat this experiment as though the money is being created or is in some sense manna from heaven, I’m not absolutely certain that’s the case.

    It’s possible that we’d see a different result if the experiment were done in a way that made explicit that money not given to one of the participants would be *destroyed* in the name of science. Of course, in practice, if they burn the money, that doesn’t actually destroy any wealth; it merely transfers it by raising the value of all dollar-denominated assets by some tiny amount to exactly balance. But most people, upon hearing that money will be burned, will probably think of it as something of intrinsic value being destroyed. (E.g., “the researchers have budgeted a certain amount for each trial; whatever doesn’t get paid out to you or your counterparty in any given trial will be burned.” For effect, they could burn some actual dollars after each trial so people know they are serious.) If that doesn’t change the result, that would be a valuable additional piece of information.

  • http://hanson.gmu.edu Robin Hanson

    Hal, thanks for looking at the paper in more detail than I did.

    Ryan, you are missing the obvious.

    John and Michael, yes most subjects seem to ignore where the money goes if not to one of the experimental subjects.

  • http://dl4.jottit.com/contact Richard Hollerith

    Nit: it is conventional in computer science to refer to e.g. {1,1,4,4,7} as a bag rather than a set.

  • http://dl4.jottit.com/contact Richard Hollerith

    Well now I am in an awkward place because I want to retract what I just published, but what I just published was a nit and by retracting it I distract even more from the very worthwhile and important original post. So here goes as concisely as I can: replacing “the set {1,1,4,4,7}” with “the bag {1,1,4,4,7}” makes the sentence mathematically rigorous but loses because most readers do not know what a bag is, do not have a ready way to find out and do not need to know. Replacing with “the list [1,1,4,4,7]” however wins (and makes the sentence rigorous because if the only operation the experimenter performs on the object is picking an element at random then it does not matter if the object is a bag or a list).
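    Richard's point that bag and list coincide under uniform random draws is easy to check empirically; the snippet below is an illustration (my own, not from the post), with `draws` and `counts` being names I chose.

    ```python
    from collections import Counter
    import random

    random.seed(0)
    # Drawing uniformly from the list [1, 1, 4, 4, 7] gives the same
    # distribution as drawing from the bag {1, 1, 4, 4, 7}:
    # P(1) = P(4) = 2/5, P(7) = 1/5.
    draws = [random.choice([1, 1, 4, 4, 7]) for _ in range(100_000)]
    counts = Counter(draws)
    ```

    Whether the duplicates live in a list or a bag, `random.choice` only ever picks one position uniformly, so the frequencies come out the same.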

  • David J. Balan

    That is a pretty powerful result.