Rank-Linear Utility

Just out in Management Science, a very simple, general, and provocative empirical theory of real human decisions: in terms of time or money or any other quantity, utility is linear in rank among recently remembered similar items. There is otherwise no risk-aversion or time-discounting, etc. This makes sense of a lot of data. Details:

We present a theoretical account of the origin of the shapes of utility, probability weighting, and temporal discounting functions. In an experimental test of the theory, we systematically change the shape of revealed utility, weighting, and discounting functions by manipulating the distribution of monies, probabilities, and delays in the choices used to elicit them. The data demonstrate that there is no stable mapping between attribute values and their subjective equivalents. Expected and discounted utility theories, and also their descendants such as prospect theory and hyperbolic discounting theory, simply assert stable mappings to describe choice data and offer no account of the instability we find. We explain where the shape of the mapping comes from and, in describing the mechanism by which people choose, explain why the shape depends on the distribution of gains, losses, risks, and delays in the environment. …

People behave as if the subjective value of an amount, risk, or delay is given by its rank position in the context created by other recently experienced amounts, risks, and delays. … To summarize the above studies, people behave as if the subjective value of an amount (or probability or delay) is determined, at least in part, by its rank position in the set of values currently in a person’s head. So, for example, $10 has a higher subjective value in the set $2, $5, $8, and $15 because it ranks 2nd, but has a lower subjective value in the set $2, $15, $19, and $25 because it ranks 4th. …

Rather than supporting a change in the shape of a utility, weighting, or discounting function, or a change in the primitives which people process, our data suggest that the whole enterprise of using stable functions to translate between objective and subjective values should be abandoned. … There is no method which gives, even with careful counterbalancing, the true level of risk aversion or the true shape of a utility function. In any given situation, one can observe choices and infer a shape or level of risk aversion. But as soon as the context changes—that is, as soon as the decision maker experiences any new amount—the measured shape or level of risk aversion will no longer apply. (more; ungated; also)
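The paper's rank idea can be sketched as a toy model (my own formalization for illustration, not the authors' fitted model): an item's subjective value is linear in its rank among recently remembered comparators, here the fraction of remembered items it beats.

```python
# Toy sketch of rank-linear subjective value (illustrative only,
# not the authors' estimated model): value is linear in an item's
# rank within the set of recently remembered comparison items.

def subjective_value(amount, context):
    """Return a 0-1 subjective value: the fraction of remembered
    items in `context` that `amount` beats (linear in rank)."""
    ranked = sorted(context + [amount])
    rank = ranked.index(amount)   # 0 = worst, len(context) = best
    return rank / len(context)

# The paper's example: $10 feels more valuable when it ranks 2nd...
print(subjective_value(10, [2, 5, 8, 15]))    # → 0.75
# ...than when the same $10 ranks 4th in a richer context.
print(subjective_value(10, [2, 15, 19, 25]))  # → 0.25
```

The same objective amount gets a different subjective value purely because the remembered context changed, which is the instability the authors report.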

  • stevesailer

    “Rather than supporting a change in the shape of a utility, weighting, or discounting function, or a change in the pirates which people process,”

    Insert pirate joke here.

    • Should be “primates”: so, instead insert monkey joke.

      • stevesailer

Contemporary auto-correct spellcheckers seem to generate a lot of these kinds of mistakes. Rather than simply flagging a misspelling, the program takes a SWAG at what it thinks you meant and often changes your easy-to-decipher misspelling into something wildly wrong.

      • Handle

        Insert AI joke here. Spell-check stagnation is reality, but then again, is it worth the money to make it better?

  • Ilya Shpitser

    Very interesting paper, thanks Robin!

  • Tobias

    Note that the ungated version they submitted to Econometrica was a very different paper from the one that was just published in ManSci. Even the various curves they get out of their econometric models seem to look very different between the two versions.

  • Jason Young

    So, is the secret to happiness adopting a meta-attentional rule that biases your attention away from unattainable options that make your attainable options look bad by comparison? Convenient!

    Interesting paper. Thanks for the link.

    Off-topic, but when can we expect a review of Transcendence? I know it’s getting poor reviews, but it’s a movie about uploading consciousness starring movie star Johnny Depp. Seems worthy of comment.

    • After seeing it, and seeing other comments, it doesn’t seem worthy of further comment.

    • Jayson Virissimo

      “So, is the secret to happiness adopting a meta-attentional rule that
      biases your attention away from unattainable options that make your
      attainable options look bad by comparison?”

      At least two of the Stoics’ psychological techniques (Negative Visualization and Dichotomy of Control) aim to do precisely this.

  • Noumenon72

    But where do these contexts come from? When I think about how much to save for retirement, do I pull up an image of me at 45, 65, 85, and then spend some time thinking about living 200 years to motivate myself to save more?

    I prefer the nice, math-friendly hyperbolic discounting model Tomasz Wegrzanowski wrote up here.
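For reference, a minimal sketch of the standard hyperbolic discounting form, V = A / (1 + kD) (the payoffs and discount rate below are illustrative, and this is the textbook parameterization, not necessarily Wegrzanowski's exact write-up):

```python
# Standard hyperbolic discounting: V = A / (1 + k*D).
# k and the payoffs below are illustrative, not from any source.

def hyperbolic_value(amount, delay, k=1.0):
    return amount / (1 + k * delay)

# Preference reversal: with both options near-immediate, the
# smaller-sooner reward wins...
print(hyperbolic_value(100, 0) > hyperbolic_value(110, 1))    # → True
# ...but pushing both options 10 periods out flips the choice,
# which a constant exponential discount rate can never do.
print(hyperbolic_value(100, 10) < hyperbolic_value(110, 11))  # → True
```

The appeal of this model is exactly that math-friendliness: a single stable function reproduces preference reversals, whereas the rank account denies that any such stable function exists.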

    • brendan_r

      Their paper isn’t relevant to your framing. What is to be chosen between in your hypothetical?

      Choice depends on subjective value rankings. Their idea is that subjective value assignment of some thing doesn’t depend on that thing’s amount, risk and delay in an *absolute* sense, but rather, a) compared to easily remembered comparisons, and b) with weightings determined by *relative* rank. People frame choices myopically, and scale insensitively.

Context is easy to see when you set it up right. A choice between mutual funds A and B will be influenced by their respective promised returns and volatilities, compared rank-wise to some mental baseline which may be influenced by choices presented 5 minutes prior.

      I think the math of their idea is unimportant (unless you’re a modeler in search of stable utility functions; in which case their lesson is “give it up”). It’s unimportant because formalizing the “context” that creates the subjective value rankings is impossible.

But the ideas are useful. The fact that choices and expectations are formed myopically is obvious. Investor expectations of future stock returns soared in the late 1990s even as earnings and dividend yields plunged to all-time lows. Expectations were driven by what had happened recently. 12% sustained annual returns would've disappointed the typical investor in 1999. This is the sort of real-world context that influences subjective value assignment.

  • “People frame choices myopically, and scale insensitively.”

        I haven’t yet read the whole article, but it seems the phenomenon discussed is related to the availability heuristic, although the authors don’t seem to make the connection. I wonder why (if my cursory observation is correct).

        One thing that troubles me about the claims is that they seem hard to explain in construal-level terms (of course, that’s my bias). The myopic aspect suggests near-mode, but rank ordering suggests far-mode (near-mode is capable of ratio scaling). This suggests the results may be confounding near- and far-mode judgments (depending, again, on how much credence you give to CLT).

        Another question about the paper: why did psychologists writing about a theoretical subject publish in Management Science, rather than Psychological Review or at least something like JPSP?

      • brendan_r

        OK, so their paper is an application of the availability heuristic to utility functions.

        The availability heuristic is usually framed in terms of biasing predictions.

        But they show it also biases the value people assign to *objectively defined* events, i.e. people choosing payoffs with fixed amounts and probability distributions.

It’s not surprising that judgments of “will X happen?” (prediction) and “if X happens, will it make me happy?” (subjective value) are biased by the same factor: stuff related to X that I’ve seen recently.

        Maybe we’re misunderstanding their paper.

        Or maybe, for originality’s sake, they didn’t want to call their paper: Scale Insensitivity and Recency Bias applied to Utility Functions.

        *CLT hasn’t clicked for me- no clue.
        ** Where is Eric Falkenstein? This is right up his alley.

  • Joshua Brulé

    I’m not entirely convinced, but this model sounds very plausible. It seems to imply that human beings intuitively understand order, but do not intuitively understand magnitudes.

    Assuming it’s correct, what do you think is the better way of ‘hacking’ happiness? Trying to bias yourself into ignoring impractical options that make your current options look bad, trying to ‘invent’ worse options that make your current options look better, and/or trying to convince yourself to be happy with subjective utility magnitude instead of mere rank?

    • The problem with using this information directly to “hack happiness” is that happiness isn’t all that we care about. We also care about meaningfulness. The narrowing of horizon that might make one happier might also make life seem meaningless.

  • arch1

    How can behavior such as this be expected to persist in the face of simple modifications which could objectively do much better over the long haul?

    • IMASBA

Apparently it was good enough for our ancestors to survive and become the dominant species on the planet. In any case, there are older psychological experiments that seem to support this theory of utility (it probably isn’t the end-all, but it does highlight important aspects of the human mind). It was already known that a person could prefer oranges to pears and pears to apples, yet also prefer apples to oranges, and that bombarding people with small or large numbers would change how they think about spending certain monetary amounts. These experiments show it gets even worse than rank-linear utility: the ranks of different orders of magnitude aren’t even part of a consistent rank ladder, and we seem to have difficulty connecting abstract numbers to real-world quantities (when we have $1000, our brain seems to intuitively treat “1000 units of $1” differently from “all of our money”), although going into far mode does alleviate these problems.

    • Curt Adams

      What would be objectively better? A consistent utility function is an enormous computational task – totally beyond calculation for the theoretical version allowing for risks, and pretty massive even for relatively simple functions. It’s not like it’s a surprise – we’ve known for decades that people have highly inconsistent utility functions.

That said, I don’t think their model is really all of the story, or even most of it; I wouldn’t think it accounts for the differing risk preferences for losses over gains.

  • VV

All their experiments use short timespans, consistent with the subjects retaining all the presented cases in working memory. Thus this seems to be a type of short-term priming effect.

    These results are consistent with humans using approximate expected discounted utility maximization, and this rank effect being just a short-term bias.

