21 Comments

All their experiments used short timespans, consistent with the subjects retaining all the presented cases in working memory. Thus this seems to be a type of short-term priming effect.

These results are consistent with humans using approximate expected discounted utility maximization, and this rank effect being just a short-term bias.
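For reference, a minimal sketch of what "expected discounted utility maximization" computes for a risky delayed payoff — the discount factor, utility function, and all numbers here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch of expected discounted utility:
# value a delayed gamble as (discount ** delay) * sum_i p_i * u(x_i).

def discounted_eu(outcomes, probs, delay, discount=0.95,
                  utility=lambda x: x ** 0.5):
    """Expected utility of a gamble, exponentially discounted by delay."""
    return (discount ** delay) * sum(p * utility(x)
                                     for x, p in zip(outcomes, probs))

# $100 with probability 0.5 (else $0), delivered after 2 periods:
value = discounted_eu([100.0, 0.0], [0.5, 0.5], delay=2)
print(value)
```

The point of contrast with the paper's model is that this calculation is sensitive to the absolute magnitudes of the outcomes, not just their rank among recently seen alternatives.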

What would be objectively better? A consistent utility function is an enormous computational task - totally beyond calculation for the theoretical version allowing for risks, and pretty massive even for relatively simple functions. It's not like it's a surprise - we've known for decades that people have highly inconsistent utility functions.

That said, I don't think their model is really the whole story, or even most of it; I wouldn't expect it to account for the differing risk preferences for losses versus gains.

OK, so their paper is an application of the availability heuristic to utility functions.

The availability heuristic is usually framed in terms of biasing predictions.

But they show it also biases the value people assign to *objectively defined* events, i.e. people choosing payoffs with fixed amounts and probability distributions.

It's not surprising that judgments of the questions "will X happen?" (prediction) and "if X happens, will it make me happy?" (subjective value) are biased by the same factor: stuff related to X that I've seen recently.

Maybe we're misunderstanding their paper.

Or maybe, for originality's sake, they didn't want to call their paper: Scale Insensitivity and Recency Bias applied to Utility Functions.

*CLT hasn't clicked for me - no clue.* Where is Eric Falkenstein? This is right up his alley.

"So, is the secret to happiness adopting a meta-attentional rule that biases your attention away from unattainable options that make your attainable options look bad by comparison?"

At least two of the Stoics' psychological techniques (Negative Visualization and Dichotomy of Control) aim to do precisely this.

The problem with using this information directly to "hack happiness" is that happiness isn't all that we care about. We also care about meaningfulness. The narrowing of horizon that might make one happier might also make life seem meaningless.

People frame choices myopically and scale-insensitively.

I haven't yet read the whole article, but it seems the phenomenon discussed is related to the availability heuristic, although the authors don't seem to make the connection. I wonder why (if my cursory observation is correct).

One thing that troubles me about the claims is that they seem hard to explain in construal-level terms (of course, that's my bias). The myopic aspect suggests near-mode, but rank ordering suggests far-mode (near-mode is capable of ratio scaling). This suggests the results may be confounding near- and far-mode judgments (depending, again, on how much credence you give to CLT).

Another question about the paper: why did psychologists writing about a theoretical subject publish in Management Science, rather than Psychological Review or at least something like JPSP?

Apparently it was good enough for our ancestors to survive and become the dominant species on the planet. In any case, there are older psychological experiments that seem to support this theory of utility (it probably isn't the be-all and end-all, but it does highlight important aspects of the human mind). It was already known that a person could prefer oranges to pears and pears to apples, yet also prefer apples to oranges, and that bombarding people with small or large numbers would make them think differently about spending certain monetary amounts. These experiments show it gets even worse than that rank-linear utility: the ranks of different orders of magnitude aren't even part of a consistent rank ladder, and we seem to have difficulty connecting abstract numbers to real-world quantities (when we have $1000, our brain seems to intuitively treat "1000 units of $1" differently from "all of our money"), although going into far mode does alleviate these problems.
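The intransitivity described above (oranges over pears, pears over apples, yet apples over oranges) can be checked mechanically — a sketch, with the pairwise choices themselves hypothetical:

```python
from itertools import permutations

# Hypothetical pairwise choices of the kind described above:
# oranges beat pears, pears beat apples, yet apples beat oranges.
prefers = {("oranges", "pears"), ("pears", "apples"), ("apples", "oranges")}

def is_transitive(prefers, items):
    """True iff no cycle a > b > c > a exists among the stated preferences."""
    return not any(
        (a, b) in prefers and (b, c) in prefers and (c, a) in prefers
        for a, b, c in permutations(items, 3)
    )

print(is_transitive(prefers, ["oranges", "pears", "apples"]))  # False: a cycle
```

No consistent ranking (and hence no consistent utility function) can reproduce a cyclic preference set like this one.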

Their paper isn't relevant to your framing. What is to be chosen between in your hypothetical?

Choice depends on subjective value rankings. Their idea is that subjective value assignment of some thing doesn't depend on that thing's amount, risk, and delay in an *absolute* sense, but rather, a) compared to easily remembered comparisons, and b) with weightings determined by *relative* rank. People frame choices myopically and scale-insensitively.

Context is easy to see when you set it up right. Choice between mutual funds A and B will be influenced by their respective promised returns and volatilities compared rankwise to some mental baseline, which may be influenced by choices presented 5 minutes prior. (Surely recent market experience forms part of that context, too.)
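The mutual-fund example can be sketched as a rank-based valuation: a return's subjective value is just its rank within a mental sample of recently encountered returns. All numbers here are hypothetical:

```python
# Sketch of rank-based (context-dependent) valuation: the same 7% return
# is valued differently depending on the recently seen comparison set.

def rank_among(value, mental_sample):
    """Fraction of remembered comparison values that this value beats."""
    return sum(1 for s in mental_sample if s < value) / len(mental_sample)

bear_context = [-0.10, -0.03, 0.01, 0.04]   # recent returns were poor
bull_context = [0.08, 0.12, 0.15, 0.20]     # recent returns were high

print(rank_among(0.07, bear_context))  # beats every remembered return
print(rank_among(0.07, bull_context))  # beats none of them
```

Same objective return, opposite subjective rankings — which is the myopic, scale-insensitive framing described above.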

I think the math of their idea is unimportant (unless you're a modeler in search of stable utility functions; in which case their lesson is "give it up"). It's unimportant because formalizing the "context" that creates the subjective value rankings is impossible.

But the ideas are useful. The fact that choices and expectations are formed myopically is obvious. Investor expectations of future stock returns soared in the late 1990s even as earnings and dividend yields plunged to all-time lows. Expectations were driven by what had happened recently. 12% sustained annual returns would've disappointed the typical investor in 1999. This is the sort of real-world context that influences subjective value assignment.

How can behavior such as this be expected to persist in the face of simple modifications which could objectively do much better over the long haul?

I'm not entirely convinced, but this model sounds very plausible. It seems to imply that human beings intuitively understand order, but do not intuitively understand magnitudes.

Assuming it's correct, what do you think is the better way of 'hacking' happiness? Trying to bias yourself into ignoring impractical options that make your current options look bad, trying to 'invent' worse options that make your current options look better, and/or trying to convince yourself to be happy with subjective utility magnitude instead of mere rank?

But where do these contexts come from? When I think about how much to save for retirement, do I pull up an image of me at 45, 65, 85, and then spend some time thinking about living 200 years to motivate myself to save more?

I prefer the nice, math-friendly hyperbolic discounting model Tomasz Wegrzanowski wrote up here.
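For contrast, a minimal sketch of the standard hyperbolic form V = A / (1 + kD) — the rate k = 0.2 is an arbitrary illustrative choice, not from the linked write-up:

```python
# Sketch of hyperbolic discounting: V = A / (1 + k * D),
# where A is the amount, D the delay, and k an assumed discount rate.

def hyperbolic_value(amount, delay, k=0.2):
    return amount / (1 + k * delay)

# The model's signature preference reversal: $110 at day 31 beats
# $100 at day 30, yet $100 now beats $110 tomorrow.
assert hyperbolic_value(110, 31) > hyperbolic_value(100, 30)
assert hyperbolic_value(100, 0) > hyperbolic_value(110, 1)
```

Unlike the rank-based account, this model needs no context sample: the valuation depends only on amount and delay, which is what makes it math-friendly.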

After seeing it, and seeing other comments, it doesn't seem worthy of further comment.

So, is the secret to happiness adopting a meta-attentional rule that biases your attention away from unattainable options that make your attainable options look bad by comparison? Convenient for me. Thanks for the link.

Off-topic, but when can we expect a review of Transcendence? I know it's getting poor reviews, but it's a movie about uploading consciousness starring super famous persons Johnny Depp and Morgan Freeman. Seems worthy of comment.

Insert AI joke here. Spell-check stagnation is reality, but then again, is it worth the money to make it better?

This is a better ungated link: http://www.ucl.ac.uk/lagnad...
