Beware Consistency

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. (Emerson)

O.M.G.:

Experimental choice data from 881 subjects based on 40 time-tradeoff items and 32 risky choice items reveal that most subjects are time-inconsistent and most violate the axioms of expected utility theory. These inconsistencies cannot be explained by well-known theories of behavioral inconsistency, such as hyperbolic discounting and cumulative prospect theory. … Time-inconsistent subjects and those who violate expected utility theory both earn substantially higher expected payoffs, and these positive associations survive largely undiminished when included together in total payoff regressions. Consistent subjects earn lower than average payoffs because most of them are consistently impatient or consistently risk averse. … Controlling for the total risk of each subject’s risk choices as well as for socio-economic differences among subjects, time inconsistent subjects earn significantly more money, in statistical and economic terms. So do expected utility violators. Positive returns to inconsistency extend outside the domain in which inconsistencies occurs, with time-inconsistent subjects earning more on risky choice items, and expected utility violators earning more on time-tradeoff items. The results seem to call into question whether axioms of internal consistency—and violations of these axioms that behavioral economists frequently focus on—are economically relevant criteria for evaluating the quality of decision making in human populations. (more; HT Dan Houser)

If your jaw isn’t in your lap yet, you aren’t paying attention:

Ask a behavioral economist what we learn from behavioral economics in applied work aimed at educating the public or designing institutions, and you will likely hear calls to help error-prone, biased, or irrational humans overcome the systematic pathologies built into their brains. And yet, very little evidence exists linking violations of axiomatic rationality to high-stakes differences in real people’s lives. … Calls to use behavioral economics as a prescriptive basis for institutional design, such as … to tax potato chips and subsidize carrots, or … changing defaults in savings plans, organ donation rules, and the positioning of dessert on the buffet line, naturally raise controversy. What seems clear, however, is the need for … investigating whether the normative measures we use are relevant to the economic problems we face.

These results seriously question the relation between winning and easy-to-observe local measures of rationality. In at least two important contexts, people whose actions seem more locally consistent consistently lose.

Such results also suggest we face a “dark decisions” problem, analogous to the dark matter and energy problems in physics, or the dark brain in neuroscience. Clearly the processes behind our inconsistencies aren’t just random errors, and aren’t very close to expected utility; simple-minded attempts to make them more consistent seem to make them worse.

Added 27Nov: On reflection, I wasn’t thinking straight; this is just the sort of result one should expect from simple random error.  When “rationality” makes you avoid big risks and future payouts, random error can indeed get you paid more on average, in the future.
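
To see how that can happen, here is a minimal sketch with made-up numbers (not the study's actual items or payoffs): a "consistently impatient" subject always takes a smaller-sooner amount, while an otherwise identical subject whose choices are jittered by random error sometimes ends up with the larger-later amount instead. If both options are actually paid, the noise alone raises the second subject's average earnings.

import random

# Hypothetical smaller-sooner vs. larger-later item ("$65 today or $80 in a year");
# the numbers are illustrative only, not the study's actual items or payoffs.
SOONER, LATER = 65.0, 80.0

def average_payoff(error_rate, trials=100_000):
    # error_rate = 0 is the consistently impatient subject (always $65);
    # a positive error_rate randomly flips some choices to the $80 option.
    total = 0.0
    for _ in range(trials):
        chose_later = random.random() < error_rate
        total += LATER if chose_later else SOONER
    return total / trials

for p in (0.0, 0.1, 0.3, 0.5):
    print(f"error rate {p:.1f}: average payoff ≈ ${average_payoff(p):.2f}")

Under these assumptions the average climbs from $65 toward $72.50 as the error rate approaches one half, which is the sense in which noise around a consistently impatient (or consistently risk-averse) baseline "earns more" without any hidden sophistication.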

  • Phil

    I don’t get it. Aren’t they just saying that people who are *sometimes* risk-averse do better than people who are *always* risk-averse?

    Or, in general terms: it’s better to be inconsistently dumb than consistently dumb.

    • http://juridicalcoherence.blogspot.com Stephen R. Diamond

      They’re concluding that the search for consistency can undermine accuracy. A fair conclusion, although there will of course be challenges to methodology.

      The comments and Robin’s initial jaw-dropping characterization indicate it’s hard for “rationalists” to see how these results could conceivably be true.

      How can striving for consistency undermine truth-finding? I think it can most simply be put this way: a given person may be incapable of finding a “good” basis for consistency (one not inherently risk averse, for example) in some realms. Now generalize: we all may be incapable of finding a “good” basis for consistency in some realms.

  • Robert Koslover

    I’m unfamiliar with most of the social-science terminology here, so please forgive me if the answer to my question should be obvious: Is the above essay more or less the same thing as saying that there is solid evidence to support the notion that it is often generally advantageous to change your mind, your strategy, or your approach (i.e., to be “time-inconsistent”) when seeking to advance in your position and/or acquire wealth, etc? If so, then how is this so amazing? For example, it has long been a common recommendation in the business world that company-hopping and career-changing can often be (though not always) the single-most effective way to get ahead in your career. Similarly, one should occasionally consider moving to a new place or trying entirely new hobbies. Or… did I perhaps miss the main point entirely (yes, it wouldn’t be the first time)?

  • http://www.hopeanon.typepad.com Hopefully Anonymous

    I get it, and it seems to have a mother-of-all-counterintuitive-discoveries feel to it.

    I’m curious about a few factors that may make this less “shocking”.
    (1) Is there a rentier element to this? Do systems themselves actually perform better with inconsistently axiomatic rational agents than with consistently axiomatic rational agents (and then also factoring in role optimization: a system would do better for its soldiers and cops to approach personal safety differently than its empirical macroeconomists)?
    (2) Similarly, how much is this capturing the difference between popularity entrepreneurs vs. efficiency entrepreneurs, to take a different rentier angle?

    I think if rentier advantages are being exposed here and non-axiomatic rational behavior is just signalling, it’s not exactly an earthquake to the social sciences.

    BUT if “dark rationality” is being exposed here, then we live in interesting times.

    I think it’s also an argument for diversified experimentation and having a Bayesian respect for the outcomes of those experiments in the absence of strong theoretical underpinnings.


  • Tim Tyler

    I didn’t like this one very much. The abstract reads like a teaser, the paper is lengthy and not very well arranged – and it didn’t succeed in teaching me anything.

    The authors put me off early with their discussion of “Vasectomy Reversal”. Vasectomy just isn’t irreversible – and it is OK to not want babies at one point – and then to want them later – just as it is OK to not want an apple at one point – but to want one later. Such preferences are not much of a mystery.

    The study is based on questionnaires of the form: “Do you prefer $65 today or $80 one year from today?” Humans need to estimate the chances of that $80 actually arriving in order to choose sensibly – and that information is not included in the question. So the answers may be distorted – and depend on things like the reputation of the test administrator.
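
    To put a rough number on that point (my own illustrative figures, nothing from the study): if the delayed $80 arrives with probability p and a year is discounted at rate d, then taking the $65 today has the higher expected value whenever p falls below 65 × (1 + d) / 80.

# Illustrative break-even calculation for "$65 today vs. $80 in a year",
# assuming the later payment arrives with probability p and a year is
# discounted at rate d; both numbers are made up for the example.
def breakeven_p(sooner=65.0, later=80.0, d=0.05):
    # Prefer the sooner amount whenever sooner > p * later / (1 + d),
    # i.e. whenever p is below this threshold.
    return sooner * (1 + d) / later

print(breakeven_p())  # ≈ 0.853

    So with a 5% annual discount, even a modest doubt about actually being paid in a year is enough to make the “impatient” answer the higher-expected-value one, which is why the missing information matters.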

  • http://www.uncrediblehallq.net/ Chris Hallquist

    If your goal is to believe as many true propositions as possible, and as few false ones as possible, being consistently right is the best you can do. But being often right with some inconsistencies is better than being consistently wrong.

    Could something similar be going on here? Was the nature of the questions such that the subjects who chose consistently ended up consistently wrong? If so, I’d rate these results fairly unsurprising.

  • Curt Adams

    What’s jaw-dropping about this? A casual attempt at actually grinding through the Bayesian calculations required for consistency immediately reveals that they’re very complex, tedious, and prone to NP-completeness problems with multiple items. Of course a consistent actor in a vaguely real-world situation would be at a disadvantage, because they’d waste too much time and effort on being consistent.

    My a priori guess for this experiment would be that nobody was consistent. The actual result is that only people with extremely simplified (and thus dysfunctional) preferences could manage to be consistent. I’m mildly surprised that even people with such oversimplified preferences can manage to be consistent, but it’s not jaw-dropping. And I don’t find it jaw-dropping that, in order to be consistent with your preferences, you have to have such simplified preferences that you can’t function effectively in a real-world situation.

  • Anonymous from UK

    To rephrase: your goal is to maximize utility to the best of your ability, not to be consistent to the best of your ability. Duh.

  • Zvi Mowshowitz

    Phil seems to have this one right: The people who were *consistent* in the context of this study were people who were consistently wrong. It’s intuitive that being right more often, and wrong less often, is more important than being consistent. That doesn’t mean that consistency is bad, or that inconsistency isn’t a clear sign that something could be improved.

    I also agree with Tim that these questions are not as easy as they look; given all the logistical issues involved, I’d rather have $60 now than $85 a year from now if it means having to actually get and cash a check in a year.

  • Constant

    …you will likely hear calls to help error-prone, biased, or irrational humans overcome the systematic pathologies built into their brains.

    And yet this blog is called Overcoming Bias. I think this is Hanson acknowledging limits to the value of this blog’s project (to help error-prone, biased, irrational human beings to overcome their pathologies) as originally conceived. The blog has moved on from the original concept, of course.

    I’m a bit stunned by all the commenters saying this is obvious. It’s a bit like all the commenters at Daily Kos acknowledging that it’s obvious that George W. Bush is a good and great President.

  • blink

    This is indeed surprising, but maybe not as much as at first glance. Why lower payoffs? “Consistent subjects earn lower than average payoffs because most of them are consistently impatient or consistently risk averse.” It sounds like saying a risk-averse person could gain more on average by taking some risks (i.e., being “inconsistent”) or that an impulse shopper could do better by occasionally putting off a purchase (again, being “inconsistent”).
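
    To make the risk-aversion side concrete with made-up numbers (not the study’s actual items): a subject who always takes a sure $40 over a 50/50 shot at $100 gives up $10 of expected value on every such item, since the gamble is worth $50 on average; a subject who takes the gamble even half the time, for whatever reason, recovers half of that gap, averaging $45 per item.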

  • Sark

    Eliezer would say something here about the distinction between consistency and accuracy.

    I interpret these findings as the sheer complexity of our preferences not being amenable to our correct but inefficient tools for rational decision making. We oversimplify when interpreting our preferences for our decision making models.

    I’m sure these findings don’t apply to all decision making. There are surely areas where we are clear about our preferences and where rational decision making gives the largest leverage. We should really try to empirically measure the successes and failures of our attempts at rational decision making in various domains.

  • Buck Farmer

    Glarg. I’ve always had an itch in the back of my mind that worried this might be true.

    Need to read the paper, think on this, and decide whether this means rationality is a bum deal.

  • George

    I’d be interested to see if there were IQ differences between the consistent group and the inconsistent group.

  • Curt Adams

    More evidence for consistency being valuable but too expensive to maintain in general:

    with time-inconsistent subjects earning more on risky choice items, and expected utility violators earning more on time-tradeoff items.

    IOW, people who don’t work hard on time consistency do better with time-independent risk; and people who don’t work hard on risk consistency do better with time discounts.

  • JoshINHB

    Perhaps it’s time for some economic school of thought to abandon a priori reasoning and instead observe the economic decisions of actual humans, and then devise theories that explain that observed behavior and have predictive ability.

    Nah, that’s just crazy.

    • http://timtyler.org/ Tim Tyler

      Re: why not “observe the economic decisions of actual humans”…?

      That’s what the field of “behavioural economics” is all about.


  • Jonas

    Very interesting. My jaw kinda dropped...
    I’m not sure if I got it, though, in terms of understanding the topic of consistency. I mean, Aristotle said that something can either be A or B, but never A and B simultaneously, nor not-A and not-B simultaneously.
    But this paper seems to describe statics (German: Statischheit [?]) in a dynamic way over time. Whether it is determinism or probabilities, in terms of causality, direction of time, or symmetry of time, I don’t know.

    But from a Popperian (?) perspective this seems to be very interesting. What about other fields of research to relate this to? I mean, from a relative perspective (the theory of relativity)?

    $60 now or $85 later is one choice to make,
    sport or education another.
    I mean, basketball is a team sport and education is as well, right? Or at least that is the way I see it right now 🙂
    How do you relate this time-inconsistent behavior to other fields of research? And how do you falsify a theory like this?
    If you argue, for example, that what is under discussion right now is the systematic knowledge of the physical world gained through observation and experimentation, usually beginning with a hypothesis, or what some may call an estimation: record your results from a series of tests, and what you are left with is a theory at best.

    This seems to be a logical tautology seen from a static perspective, but dynamics over time, and consistency versus inconsistency, are an interesting and worthwhile research topic and field of research.

  • http://juridicalcoherence.blogspot.com Stephen R. Diamond

    I think you got it right the first time. Your latest argument proposes a different explanation than provided by the study, and without demonstrating your premises (e.g., that rationality is confounded with using a minimax criterion), it isn’t a fair criticism of the study’s findings, as the claim defines irrationality using a consistency criterion, not a particular maximization function.

    But compared to the main point, that’s minor. I would contend that anyone who understands the dynamics of the choice of the most reliable truth-finding method would have expected the result and would know that its substantive claim is true: in some domains, reliance on the automatic “unconscious” sometimes called intuition comes out better. Why should it be otherwise? How could it be otherwise, with what we know concerning conscious decision-making? [See the link for something touching on this.]

    Putting the matter metaphorically, reason and intuition form a unity when we sincerely think we know something. Conflict between the two should cast suspicion on each course of action or belief. If we must act before we can unify reason and intuition, then the choice of whether reason or intuition guides better is the most critical choice we make; fundamentally, it cannot itself be made by means of reason, and its outcome will surely not always favor reason.

    Apologies if this is too terse for anyone’s comprehension. It’s hard to challenge a worldview in a few paragraphs.

  • http://entitledtoanopinion.wordpress.com TGGP

    Thanks for the link on the “dark brain”. I’m surprised you haven’t discussed that before regarding whole-brain emulation. Or maybe I just forgot.

    H.A., I’m having a hard time understanding what you’re saying about “rentiers”. This is my best guess: you are suggesting that it may be individually advantageous for someone to be inconsistent, though it is socially sub-optimal. You also seem to be saying something about behavioral economists vs. old-school neoclassical modeling of utility-maximizing homo economicus. What exactly you are saying I don’t know.

    Constant, I don’t think Hanson is rejecting the goal of avoiding bias. He’s saying that consistency is not a great way to do that. A somewhat related LessWrong post is Reason as memetic immune disorder.

    Though I agree with some who think behavioral econ is often over-hyped (with the claim that we need to study “ecological rationality” being particularly bolstered by these results), I’d like to take a moment to defend their focus on consistency, which some seem to think is obviously silly. Standard economics gives a great amount of leeway for preferences, making it harder to falsify claims that people act rationally given such preferences. Inconsistency is one of the few ways to demonstrate that people violate rules of rationality: it establishes that either you were wrong when you made one decision or you were wrong when you made another (you are even vulnerable to a Dutch-book money pump), though eliminating inconsistency is not sufficient to establish rationality, because it could make you stick to a wrong decision. These psychological studies have demonstrated that behavior is not so simple, even if it often adds up to rationality. It suggests that we should subsequently investigate why it is that we are adapted to behave inconsistently rather than simply always following the perfectly rational course.
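
    As a concrete sketch of the money-pump point (a standard textbook illustration, not something from the paper): suppose someone’s choices reveal cyclic preferences, liking A over B, B over C, and C over A, and suppose they will pay a small fee each time to swap what they hold for something they prefer.

# Classic Dutch-book / money-pump illustration of cyclic preferences:
# the agent prefers A to B, B to C, and C to A, and pays a small fee for
# every swap to a preferred item. Illustrative sketch only.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}   # (x, y) means x is preferred to y
fee = 1.0

holding, wealth = "C", 0.0
for offered in ["B", "A", "C", "B", "A", "C"]:
    if (offered, holding) in prefers:             # the agent likes the offer better...
        holding, wealth = offered, wealth - fee   # ...so it pays to swap
print(holding, wealth)   # back to "C", but $6 poorer after two laps around the cycle

    The pump never has to stop, which is why a revealed inconsistency can in principle be exploited; the open question raised above is whether that kind of exploitation actually shows up in people’s real payoffs.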

    • Constant

      [Hanson is] saying that consistency is not a great way to [avoid bias].

      Let’s look at what he quoted:

      … you will likely hear calls to help error-prone, biased, or irrational humans overcome the systematic pathologies built into their brains. And yet, very little evidence exists…

      The quote specifically names bias among the three examples of systematic pathologies which there are calls to overcome. So, in particular, you will likely hear calls to help people overcome error, overcome bias, and overcome irrationality. What was that second one? I’ll repeat: overcome bias. What is the name of this blog? Overcoming bias. We have the exact words of the blog title coming right out of that quote. And then the quote pours cold water on the project: “And yet, very little evidence exists…”

      Now let’s look at the very first paragraph of the first blog entry of Overcoming bias:

      How can we better believe what is true? While it is of course useful to seek and study relevant information, our minds are full of natural tendencies to bias our beliefs via overconfidence, wishful thinking, and so on. Worse, our minds seem to have a natural tendency to convince us that we are aware of and have adequately corrected for such biases, when we have done no such thing.

      He says, “our minds are full of natural tendencies to bias our beliefs”. It sure sounds as though the very first paragraph of the very first blog entry of this blog is talking about systematic pathologies built into our brains.

    • http://www.hopeanon.typepad.com Hopefully Anonymous

      I was going off the post, not the links, so what I wrote may make more sense in that context.

      I’d explain more but out of time right now.

  • laocoon

    The seeming superiority of irrational behavior can arise in the following way, in a broad variety of situations.

    Suppose people are playing rock-paper-scissors. The “standard game theory solution” is to play each with independent 1/3 probability. The average payoff, against ANY other strategy, is zero. And no strategy can win more than zero (on average) against it.

    However, if the other side is not playing SGTS to rock-paper-scissors, then you can play a strategy which exploits them and does better. Thus, the “irrational” player does better than the “rational” player, because he uses more accurate information about the situation. That is, he plays against his specific opponent, rather than a hypothetical opponent.
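
    A small sketch of that exploitation step (the opponent’s biased mix is my own made-up example): against someone known to over-play rock, always playing paper beats the 1/3-1/3-1/3 equilibrium mix, which only guarantees an average payoff of zero.

# Rock-paper-scissors from "our" side: win = +1, loss = -1, tie = 0.
# The biased opponent mix below is made up for illustration.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def expected_payoff(our_mix, their_mix):
    total = 0.0
    for ours, p in our_mix.items():
        for theirs, q in their_mix.items():
            if BEATS[ours] == theirs:
                total += p * q   # we win this matchup
            elif BEATS[theirs] == ours:
                total -= p * q   # we lose this matchup
    return total

equilibrium = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
biased      = {"rock": 0.5, "paper": 0.3, "scissors": 0.2}

print(expected_payoff(equilibrium, biased))     # ≈ 0.0: safe, but earns nothing
print(expected_payoff({"paper": 1.0}, biased))  # ≈ +0.3 per round: exploits the bias

    The exploiting strategy is better only because it uses information about this particular opponent; against a different opponent, the same departure from the equilibrium recommendation could just as easily lose.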

  • Steve

    It’s funny how the behavioral economists are starting to rediscover what the Austrians have been saying for over half a century…which is that the ‘economy’ is the sum of millions of individual decisions, none of which are constrained by rationality or any kind of utility functions.

    Yes, Mises and co. used funny terms like ‘praxeology’, but I still find it bizarre that no credit is given to them when they’ve been right longer than Keynesianism has existed.