Is Loss-Aversion Far?

Survey data show that subjects positively discount both gains and losses but discount gains more heavily than losses. This holds for monetary and non-monetary outcomes. These results do not confirm the findings of two earlier studies [L87,LP91] about negative time preferences for non-monetary outcomes. … We increased the sample size to 190 and also looked at median instead of mean answers. L87 involved 30 US undergraduates and LP91 involved 95 Harvard undergraduates. L87 and LP91 conducted their studies in the US, while this study was conducted in Italy. In the roughly twenty-year period between the original experiments and our study, we have found no published replication. (more)
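To make "discounting gains more heavily than losses" concrete, here is a minimal sketch using standard exponential discounting with two separate rates. The rates are made up for illustration; the study does not report these numbers. The point is just that under asymmetric discounting, a distant loss keeps more of its present value than an equal distant gain:

```python
def present_value(amount, annual_rate, years):
    """Exponential discounting: value today of an outcome `years` away."""
    return amount / (1 + annual_rate) ** years

# Hypothetical rates, chosen only to illustrate the asymmetry:
gain_rate = 0.20   # gains discounted heavily
loss_rate = 0.05   # losses discounted lightly

pv_gain = present_value(100, gain_rate, 5)   # ~40.2: $100 gain in 5 years feels small now
pv_loss = present_value(100, loss_rate, 5)   # ~78.4: $100 loss in 5 years still looms large
```

On these toy numbers, a future loss retains almost twice the present weight of an equal future gain, which is one way such a pattern could feed outsized worry about far-off downsides.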

Perhaps we are more loss-averse in far mode.  Could this help explain why folks focus so much on the possible negative consequences of  future technologies?

  • Construal-level theory predicts negativity toward future technologies if (and only if?) risks are construed at a high conceptual level. This implies that risk aversion should be replaced by risk preference when the benefits are typically characterized at high level, risks at low. This prediction is interesting in its counter-intuitiveness.

  • Losses are easier to understand; gains are harder. Imagine telling someone in the ’60s about all the downsides of the Internet, like everyone having trivial access to porn, and then trying to convince them it was a net benefit to humanity by pointing to things like Wikipedia.

  • Ray

    What immediately comes to mind is my father. He had the proverbial Dickensian poor childhood and rose to some minor prominence in our hometown, but he still seems fixated on lesser opportunities that he missed along the way.

    Is that the same thing? He’s discounting what he gained more than what he missed out on, even though the missed things are easily seen to have been of less value?

  • One way to reassure people about risk is to demonstrate safety. However, it is hard to demonstrate safety convincingly when the risk is in the far future. Only a few worry much about the far future, though.

  • Nikki

    You need to discriminate here in what you mean by ‘risks of future technologies’. Some risks are very obviously risks. Runaway nanotechnology, democratized genetic engineering, conscious, self-replicating AI. No matter how you cut it, these technologies represent real danger, where insight into how we assess risk does not change the reality of the risk. Focusing on them seems to come more from a survival instinct than a mishap in our perceptual lens. We actually need to be more worried about future nanotech than perhaps we are.

    However, for those more subtle, more subjective ‘risks’ that future technologies present (the risk of losing some important ‘human’ connection with one another that sustains community, because in the future we will meet less and less in the flesh, for instance), I would agree with what you have written. This is a nice way to explain the persistence of pessimism about future tech in the face of evidence that the future will be better than the past, as discussed by Matt Ridley in ‘The Rational Optimist’. Here one might argue that we are motivated by ‘preservation’, in some ways, too, where the issue is also in part an instinctual reaction to change.

  • kebko

    I agree with gwern. I think one problem with global warming conversations is that if we imagine the world 100 years from now, it’s easy to imagine droughts, floods, and displacements. We have experience with those things. But, it’s impossible to imagine a world where the median income is $250,000. The bad stuff has been around for centuries, but the good stuff isn’t even in our imagination.