Error Is Not Simple

At her Rationally Speaking podcast, Julia Galef talked to me about signaling as a broad theory of human behavior.

Julia is smart and thoughtful, and fully engaged with the idea. Even so, I’m not sure I convinced her. I might have had a better chance if we’d dived quickly into detailed summaries of related datums. Instead we talked more abstractly about her concern that signaling seems a complex theory, and shouldn’t we look to simpler theories first. For example, on the datums that we see little correlation between medicine and health, and that people show little interest in private info on medicine effectiveness, Julia said:

Like the fact that humans are bad at probability and are pretty scope insensitive, and don’t really feel the difference between a 5% chance of failure versus an 8% chance of failure. Also the fact that humans are superstitious thinkers, that on some level, it feels like if we don’t think about risks, they can’t hurt us, or something like that. … It feels like I would have put a significant amount of weight, even in the absence of signaling caring, on people failing to purchase that useful information.

Yes, the fact that we follow heuristics does predict that our actions deviate from those of perfectly rational agents. It predicts that instead of spending just the right amount on something like medicine, we may spend too much or too little. Similarly, it predicts we might get too much or too little info on medical quality.

But by itself that doesn’t predict that we will spend too much on medicine, and too little on medical quality info. In fact, we see a great many other areas, such as buying more energy-efficient light bulbs, where people seem to spend too little. And we see a great many other areas where people seem too eager to gain and apply quality info; we eagerly consume news media full of info with little practical application.

As I said in the podcast, but perhaps didn’t explain well enough, we are often tempted to explain otherwise-puzzling behaviors in terms of simple error theories: the world is complex, so people just can’t get it right. But this won’t explain why we tend to do the same things as others who are socially near; that we often like to explain as social copying and conformity: we try to do what others do so we won’t look weird, and maybe others know something.

But even conformity, by itself, won’t explain the particular choices that a group of socially adjacent people make. It doesn’t predict that elderly women in Miami tend to spend too much on medicine, for example. It is these patterns across space, time, group, industry, etc. that I try to explain via signaling. For example, relative to other products and services, people have consistently spent too much on medicine all through history, especially in rich societies, and for women and the elderly.

I’ve offered a signaling story to try to simultaneously explain these and many other details, and yes it takes a few pages to explain. That may sound more complex than “it’s all just random mistakes”, but to explain any specific dataset of choices, that basic error story must be augmented with a great many specific ad hoc hypotheses of the form “and in this case, the particular mistake these people tend to make happens to be this.”

The combination of “it’s just error” and all those specific hypotheses is what makes that total hypothesis actually a lot more complex and a priori unlikely than the sorts of signaling stories that I offer. Which is why I’d say such signaling hypotheses are favored more by the data, at least when they fit reasonably well and are generated by a relatively small set of core hypotheses.

  • lump1

    I thought that Julia did an admirable job holding up her part of the discussion, and I think it led to a revealing conversation.

    I think she did an especially deft job of getting you to admit that you’re selling what is in essence a theory of unconscious motives. At that point, I would have pushed the similarity between that theory and psychological egoism, the view that behind every single human action is an (often unconscious) self-serving motive. For example, seeming acts of charity are re-interpreted as attempts to gain admission to heaven, or relief from guilt, or vanity, or simply done because you’re the sort of person who really enjoys helping.

    In fact, the more I think about it, the more your signaling theory seems to be a special case of psychological egoism. It certainly inherits many of the problems of psychological egoism. Since psychological egoism is so dead that its epitaph can be found in almost every textbook of freshman ethics, I think it would be valuable to see you explain why signaling theory can resist the objections that sunk psychological egoism.

    One that I specifically want to see addressed is the question of falsifiability. I’ll ask it this way: What specific empirical observation would be logically inconsistent with signaling theory? Julia suggested an obvious candidate: If medicine is about signaling care of others, then the signaling theorist should expect people not to buy it for themselves, at least not to the point where it actually harms them.

    But then you claim, with your dubious Valentines analogy, that in fact, signaling theory can be stretched to accommodate that behavior as well. So I want to hear: What is some (hypothetical) behavior that it cannot be stretched to accommodate? (If a psychological theory can accommodate all possible behavior, then it sheds no light on actual psychology.)

    • RobinHanson

      Falsifiability is just not a very useful concept in social science. Really.

      • Pablo

        Are you saying this as a dismissal of social science or is there a deeper justification for falsifiability not being useful in social science? I can think of several reasons why falsification may not be used much in the social sciences, but they tend towards practical matters (e.g. the ethics of doing experiments of certain types, or the question of whether we have done enough research on subject matter X to determine how to frame the question of what we’re falsifying, et cetera.)

      • RobinHanson

        Even the best theories tend to have a lot of noise. Falsifiability doesn’t work so well with lots of noise.

      • Stephen Diamond

        Well, confirmability doesn’t work better, does it?

      • SlenderMan

        We humans undergo mutations all the time. We are not a system with stable characteristics; we are always evolving, unlike inorganic systems, which tend to be a lot more stable. So it is natural that social and psychological theories tend to lose accuracy with the passage of time and can’t have universal domain (applicability to all humans), because the greater the number of humans, the greater the number of mutations at every level of the human system (and other organic systems).

      • lump1

        But even if all social science theories are ultimately unfalsifiable, there must be some method for picking which unfalsifiable theory is best, a way to make the case for why your favorite deserves to be in the lead. I’m having trouble formulating such a method.

        For my part, I think that even noisy theories can be falsified, because they predict and rule out certain signals in the noise. I prefer to think that the leading social science theories are actually falsified, just not too badly falsified in comparison to their predictive power.

      • Stephen Diamond

        there must be some method for picking which unfalsifiable theory is best, a way to make the case for why your favorite deserves to be in the lead. I’m having trouble formulating such a method.

        The mainstream story these days is that we use the rule of Bayes.
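The rule of Bayes that Stephen mentions can be sketched numerically. This is my own minimal illustration with invented priors and likelihoods, not numbers from the discussion: posterior odds between two theories are the prior odds times the likelihood ratio, so a theory that starts out less plausible can still win if it predicts the observed pattern much more strongly.

```python
# Minimal sketch of Bayesian theory comparison (illustrative numbers only):
# posterior odds = prior odds x likelihood ratio.
def posterior_odds(prior_a, prior_b, lik_a, lik_b):
    """Odds favoring theory A over theory B after seeing the data."""
    return (prior_a / prior_b) * (lik_a / lik_b)

# Suppose a simple "error" theory starts out 4x as plausible as signaling
# (0.8 vs 0.2), but signaling assigns the observed pattern of choices a
# 20x higher probability (0.20 vs 0.01). Signaling then wins decisively.
print(posterior_odds(0.2, 0.8, 0.20, 0.01))  # about 5:1 favoring signaling
```

The point is just that simplicity (the prior) and fit (the likelihood) trade off quantitatively, which is the shape of the disagreement in the thread.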

    • Stephen Diamond

      Could you provide a few clues as to what’s wrong with psychological egoism? What’s the alternative? (Are you saying it’s the absence of an alternative (falsifiability) that’s problematic?)

      • IMASBA

        Psychological egoism requires religious-like notions of “true/real” motivations, with a definition of altruism that always moves the goalposts once you get near them (like “free will” in several religions). It is not falsifiable, not even in theory (while any variant of CLT is; at least I hope that Robin meant PRACTICAL falsifiability is not a very useful concept in social science, as in you’d need a whole isolated society to perform economics experiments on, which is not practical, but it could be done in theory). One can make predictions based on variants of CLT that would actually differ in some measurable way from the predictions made by other theories. Of course, as long as variants don’t get properly delineated, there is the danger of using CLT to try to explain everything.

    • Stephen Diamond

      If medicine is about signaling care of others, then the signaling theorist should expect people not to buy it for themselves, at least not to the point where it actually harms them.

      I think the Valentine analogy is not bad. What I wonder is why Robin seems unconcerned with the analogy’s implication. Just as the recipient of a Valentine is more likely to have received it from another rather than from herself, so the recipient of unnecessary medical care should be more likely to be one where someone else pays the bill. And this should be true even when you control for the general effects of demand elasticity (judged by comparable markets): it should be a striking effect.

      This analysis could confirm or falsify the theory, couldn’t it? (Of course, declining to take “falsification” as absolute.)

    • Philon

      On the point about psychological egoism: no one who appeals to the theory of evolution (as Robin does) can be a psychological egoist, for he is presupposing the drive to reproduce and to protect one’s offspring (and perhaps collateral relatives)–different objectives from one’s own well-being.

  • Julia Galef

    Thanks for following up, Robin!

    Indeed, as Stephen Diamond noted, I wasn’t objecting to Signaling for being too complex. I was acknowledging that “Signaling” *does* seem simpler than “ignorance + several dozen biases & heuristics + …”, but I felt that the virtue of simplicity alone isn’t enough to elevate the Signaling theory above other models which are more complex, but which are also based on more well-established facts.
    Upon reflection, it seems like our different approaches to explaining some case, X, might boil down to:

    JULIA: First apply all the factors that we already *know* would predict someone’s failure to do X. For example, ignorance (if the person didn’t know X was good, that would explain why she didn’t do it). Or avoidance of pain (if the person believed X would cause her pain, that would explain why she didn’t do it). Etc.
    If we check all of our “known” explanations and we still can’t explain why the person failed to do X, then we are left with a mystery in need of a solution. Only at that point do we ask: Would Signaling predict the person would fail to do X? If so, then it is a possible explanation.

    ROBIN: Would Signaling predict X? Then it’s a possible explanation.

    I understand why [what I’m taking to be] Robin’s approach produces a simpler model. But if you already *know* that explanations 1, 2, and 3 respectively explain cases A, B, and C, it just seems like folly to ignore that knowledge in favor of using a speculative explanation 4 to explain all of A-C.

    For example, let’s take two examples you have used in the past:
    1. People prefer voting electronically (even though it’s less secure)
    2. People buy health care for themselves

    You can (and do) explain both of these with a single model (“Signaling”). But from my perspective, neither one needs explaining — we shouldn’t expect people to know that voting electronically is less secure, and we should expect people to want to not be sick. And even though those two explanations I just gave (“ignorance” + “aversion to sickness”) are technically more complex than your one explanation (“signaling”), my two still seem like they should win, since they are both so well-established.

    • RobinHanson

      But the mere fact that people don’t know that electronic voting is less secure is NOT enough to predict that they prefer it. Similarly, the mere fact that people don’t want to be sick is NOT enough to predict that people spend too much on medicine. You also need to add in something that specifies the particular mistake people tend to make.

      • Julia Galef

        Oh, indeed (I was using shorthand — perhaps too much so).

        What I meant to say was that the signaling explanation for X gains relative strength the more inexplicable X is under already-known, non-signaling explanations. If you assume people know electronic voting is less secure, then their preference for electronic voting is more mysterious and signaling gets more “points” for being able to explain that preference.
        Whereas if people don’t know electronic voting is less secure, then their preference for it is less mysterious, and more likely to occur whether or not Signaling is happening.

        But probably to resolve this, I’d just need to look at some non-cherry-picked cases, and see whether we can predict them significantly better with Signaling than without.

      • RobinHanson

        Perhaps the problem is that when our priors predict either too much or too little of something, they seem to “nearly” explain the actual amount of that something. After all, they need only roughly one more bit of predictive power to get it right. And what’s one bit between friends? But the more different things we want to predict, the more those bits add up to a big explanation deficit.
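Robin’s “bits” framing can be made concrete with a toy calculation (the helper and the numbers below are mine, purely illustrative): a theory that narrows an outcome to one of k equally likely possibilities supplies log2 of the reduction in bits, and predicting “too much OR too little” still leaves two options, so picking the sign costs one bit per question.

```python
import math

# Toy illustration: bits of predictive power gained by narrowing the
# set of equally likely possibilities.
def bits_needed(options_before, options_after):
    return math.log2(options_before / options_after)

print(bits_needed(2, 1))       # 1.0 bit to pick a single sign
print(10 * bits_needed(2, 1))  # ten sign predictions: a 10-bit deficit,
                               # i.e. odds of 2**10 = 1024 against guessing
```

This is the sense in which “one bit between friends” stops being cheap once many independent patterns must each get their sign right.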

      • arch1

        Robin, Julia,
        Isn’t it true that the priors Robin describes as predicting “either too much or too [little] of something”, actually predict *any value* of that something, i.e. they *really* predict nothing at all? If so, the “existing priors vs signaling” comparison appears to be informationally a comparison of “0 bits vs 1 bit”, not “n bits vs n+1 bits”.

      • Stephen Diamond

        The null hypothesis is always false. Yet, 2-tailed tests aren’t completely uninformative.

    • SlenderMan

      In nature, we have random mutations, so it is natural that we probably have random thoughts and acts; but in the same way, we have been evolving for millions of years, so it is probable that we have a set of behaviours that maximize our survival. So some interesting questions are:

      Does signaling in some situations that we are loyal help us survive?
      Does signaling in some situations that we are important, and so useful for others to have as allies, help us survive?
      Does signaling in some situations that we have good genes and many resources help us survive?
      Does signaling something in some situations help us survive?

      If yes, then the tendency is for signaling behaviours to get more and more ingrained in the species with the passage of time. We can even predict that probably the same thing happens with alien species.

      If you want to be more objective, then first you need to see that we are all biased in the choice of the “right” theory; then you can see many ways that a theory can be preferred, for example by what one authority or one group of authorities prefers, or by which hypothesis is the oldest, or the latest and best understood.

      2 More Examples:

      1 ) If someone wants to use Occam’s razor, they will choose the theory that has the least number of assumptions (the simplest) because of probability theory (each assumption has a probability that is probably less than 100%, so when we add an assumption, we lower the probability of the entire theory).

      2 ) If someone has a theory that explains/predicts more possibilities than 2-3 other theories, and in a more precise way, then we have a superior theory, and that superior theory can be more or less complex than the other 2-3 theories. If that theory is more complex than the other 2-3, then the other 2-3 still have some uses, because in some situations they are practical (their errors are irrelevant). An example: the theory of relativity (which has a greater domain and is more precise) is superior to the theory of universal gravitation, but that older and simpler theory still has its uses.
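The probability argument in example 1 can be checked with quick arithmetic (the 0.9 per-assumption probability below is an invented number, chosen only for illustration): since independent assumptions multiply, a story built from many ad hoc assumptions becomes unlikely fast.

```python
# Quick check of the Occam's razor point: independent assumptions
# multiply, so each extra ad hoc assumption shrinks the joint probability.
def joint_probability(p_per_assumption, n_assumptions):
    return p_per_assumption ** n_assumptions

print(joint_probability(0.9, 1))   # 0.9
print(joint_probability(0.9, 10))  # ~0.35
print(joint_probability(0.9, 30))  # ~0.04
```

This is why a single core hypothesis can beat a pile of case-by-case “the particular mistake here happens to be this” clauses, even when each clause looks individually plausible.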

      We all need models to understand the world, and an easier and faster way to construct the models is by simplifying, so we are probably always disregarding many factors; but if those factors have a small influence, we don’t need to worry much, and we can construct a simpler model that is fairly accurate. That is better than having a model with all the factors we can consider but having to make massive calculations to get any output.

      The signaling theory is both simple and has a great “domain” (the space of situations where it is applicable), so it is very appealing, but it could be wrong in many situations, as could any theory. One example is the situation where someone has suffered brain damage and is doing strange things. A good explanation in this case probably comes from adding up the outputs of the different parts of the brain that are still online, and from the way the person is coping with the damage (traumatized? depressed? etc.).

    • Stephen Diamond

      You’re of course correct that, if we already have well-established theories that adequately explain something, a new additional theory is otiose. Does Signaling of care explain data that resist other explanations? This is ultimately an empirical question that you may be trying to settle on a purely conceptual plane.

      • SlenderMan


        It is not otiose; it is an opportunity to advance our knowledge.
        Einstein’s work was not otiose: even if it needed more work than Newton’s to give a very similar answer, it still had a greater domain, so it helped us advance and construct better things and manage our errors better.

        Do we have two or more theories that are all supported by the current set of evidence? Then we can formulate some type of experiment that would force one of the theories to fail, showing us which of the theories is applicable to that increased domain / set of evidence.

        That is how we advance science: creating new hypotheses that are still supported by the current set of evidence, and experimenting to collect new evidence and see which theory endures the test of the real world, not using biases like “old/established things are better than new ones” (or the inverse). All theories must be considered neutrally. The time at which they were made is irrelevant.

  • Silent Cal

    Robin’s first full paragraph is right on: to resolve the disagreement, you’ve got to go into the data.

    Julia thinks that the assorted known biases could predict all of Robin’s data in advance. Robin thinks they couldn’t, that is, that they could just as easily predict opposite outcomes. That’s the root of the disagreement.

    (As a bonus, lump1 says Robin’s signalling theory can’t explain all of the data in advance)

  • marshall bolton

    In the Red Corner we have Julia, who comes with all the right, hip fashion statements from the wonderful world of internet empiricism. In the Blue Corner we have the maverick Robin, who is just trying to make up his own mind about things.

    If you had a problem – who do you go to to get help?

  • Stephen Diamond

    Here’s where I think you miscued the interviewer.

    J.: Before we delve into how thoroughly the signaling hypothesis explains these choices, maybe we can just spend a little more time on why you think the standard stories fail.

    She was calling for more on this! To which, you replied

    R: The first data point is to say school can’t really be about learning. Medicine can’t really be about health. Investment can’t really be about returns. Because we have all these pieces of data related to them that say, the way people seem to be trying to achieve these goals are just so inefficient and ineffective, that it’s just hard to believe that this is what they’re really doing.

    That’s not the fundamental data point! (Social coordination, after all, is hard; and individual irrationality is widespread.)

    If there’s a fundamental data point that would be convincing, it has to do not with the absolute inefficiency of medicine but its relative inefficiency (compared to other coordination efforts of equal cognitive difficulty). [You mention this comparison later, but the miscue had already done its devious work.]

    Perhaps you intended that comparison with “these goals.” But “these” receives no serious emphasis in the sentence—because the middle of a sentence is low emphasis—and emphasis is of the essence in intellectual writing. See “Constructing sentences for precise emphasis: The fundamental principle of advanced writing.”

  • F Gerard Lelieveld

    Came here through the podcast, was wondering about your picture there, but the carefully crafted title of your blog explains it 😉
    I bet your new ideas worry a lot of people who are going to have a risky medical intervention soon. Because yes, the whole enterprise developed from people trying to give comfort in desperate situations. Nurses are very annoying in that respect. Guess I am an intellectual of the scientific era. Just increase my chances of survival please, and tell the doctor…

    • F Gerard Lelieveld

      On July 4, 1940 (11 days before his death), during a professional appearance at the Manistee National Forest Festival, a faulty brace irritated his ankle, causing a blister and subsequent infection. Doctors treated him with a blood transfusion and emergency surgery, but his condition worsened due to an autoimmune disorder, and on July 15, 1940, he died in his sleep at the age of 22.

  • Gary Wilson

    Well done Robin great write up.