Error Is Not Simple
At her Rationally Speaking podcast, Julia Galef talked to me about signaling as a broad theory of human behavior.
Julia is smart and thoughtful, and she fully engaged with the idea. Even so, I’m not sure I convinced her. I might have had a better chance if we’d dived quickly into detailed summaries of related data. Instead we talked more abstractly about her concern that signaling seems a complex theory, and shouldn’t we look to simpler theories first. For example, on the data showing that we see little correlation between medicine and health, and that people show little interest in private info on medicine effectiveness, Julia said:
Like the fact that humans are bad at probability and are pretty scope insensitive, and don’t really feel the difference between a 5% chance of failure versus an 8% chance of failure. Also the fact that humans are superstitious thinkers, that on some level, it feels like if we don’t think about risks, they can’t hurt us, or something like that. … It feels like I would have put a significant amount of weight, even in the absence of signaling caring, on people failing to purchase that useful information.
Yes, the fact that we follow heuristics does predict that our actions deviate from those of perfectly rational agents. It predicts that instead of spending just the right amount on something like medicine, we may spend too much or too little. Similarly, it predicts we might get too much or too little info on medical quality.
But by itself that doesn’t predict that we will spend too much on medicine, and too little on medical quality info. In fact, we see a great many other areas, such as buying more energy efficient light bulbs, where people seem to spend too little. And we see a great many other areas where people seem too eager to gain and apply quality info; we eagerly consume news media full of info with little practical application.
As I said in the podcast, but perhaps didn’t explain well enough, we are often tempted to explain otherwise-puzzling behaviors in terms of simple error theories: the world is complex, so people just can’t get it right. But this won’t explain why we tend to do the same things as others who are socially near, which we often like to explain via social copying and conformity: we try to do what others do so we won’t look weird, and maybe others know something.
But even conformity, by itself, won’t explain the particular choices that a group of socially adjacent people make. It doesn’t predict that elderly women in Miami tend to spend too much on medicine, for example. It is these patterns across space, time, group, industry, etc. that I try to explain via signaling. For example, relative to other products and services, people have consistently spent too much on medicine all through history, especially in rich societies, and for women and the elderly.
I’ve offered a signaling story to try to simultaneously explain these and many other details, and yes, it takes a few pages to explain. That may sound more complex than “it’s all just random mistakes”, but to explain any specific dataset of choices, that basic error story must be augmented with a great many specific ad hoc hypotheses of the form “and in this case, the particular mistake these people tend to make happens to be this.”
The combination of “it’s just error” and all those specific hypotheses is what makes that total hypothesis actually a lot more complex and a priori unlikely than the sorts of signaling stories that I offer. Which is why I’d say such signaling hypotheses are favored more by the data, at least when they fit reasonably well and are generated by a relatively small set of core hypotheses.