It has been two years since I posted a summary of how signaling works; recent discussions suggest maybe I should try again. Be warned; this time I'll use more math. Consider authors who must choose a level of emotion
The point isn't so much to tell me the answers -- it's that until someone does the study, the model is a hypothesis, and gives no particular support to the reality of any conclusions one might want to draw from it.
Wei, it seems to me that you are just saying that your intuition disagrees with my model. Duly noted, but not exactly a stinging criticism.
Cyan, this wouldn't be much harder to test than most social science signaling hypotheses. But since you don't seem to know much social science, I don't see the point in outlining to you how I'd go about that if I had the funding and time to do so.
I continue to get this sort of flak when I post on social science; most commenters here seem to consider friendly AI theory better established than social science.
Robin, my questions aren't rhetorical -- if your model has the value you seem to ascribe to it, then there should be good answers for each one. (E.g., PCA can be a sound way of arguing that massively multidimensional quantities can be summarized by a small set of scalars.) I'd be happy to have my queries answered because it would mean I could place some trust in your model to help me understand the world. As it stands, I can't.
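To illustrate the PCA point above: when most of the variance in a multidimensional measure lies along one direction, reducing it to a single scalar can be defensible to first order. The following is a minimal sketch with simulated data; all numbers and names are illustrative, not drawn from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 observations of a 5-dimensional measure that is really
# driven by one latent scalar plus small independent noise.
latent = rng.normal(size=(200, 1))
loadings = np.array([[1.0, 0.8, 1.2, 0.9, 1.1]])
data = latent @ loadings + 0.1 * rng.normal(size=(200, 5))

# PCA via SVD of the centered data.
centered = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values ** 2 / (singular_values ** 2).sum()

# If the first component explains nearly all the variance, a single
# scalar summary is a reasonable first-order description.
print(explained[0] > 0.95)
```

Of course, whether real measures of "emotion" or "persuasion" have this low-dimensional structure is exactly the empirical question at issue; the sketch only shows what a supporting analysis would look like.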
...every model is "oversimplified" in the sense of neglecting relevant details.
The question is whether it's too simplified to be of any practical use.
I don't know how I gave you the impression that I have "blind reliance" on my model as an "accurate representation."
You feel it's accurate enough to support the point you're making in the penultimate paragraph. I'm not so sure.
Robin, your model has to compete with other tools that the reader has for understanding the world, including his or her own highly evolved native social intelligence. By "oversimplified" I meant that the initial model lacks enough relevant details that it seemed unlikely to perform well in practice relative to those other tools. If you were presenting the model as a starting point for further research or as a pedagogical tool, that would be one thing, but you were apparently applying it to a real world situation, and giving people (Eliezer) advice based on it.
As for economics modeling in general, I am more skeptical of it than I used to be. It's a lot of fun to do in the armchair, but can be pretty dangerous in the real world, where it's easy to miss a relevant detail with serious consequences, or have one's model misapplied by others in inappropriate settings. Generic disclaimers of course don't do much good (although given human nature I think they're still better than nothing). More useful would be specific disclaimers about what the model assumes, guidelines on when it is likely or not likely to give sensible results, and rationales behind any technical modeling choices such as the specific form of utility function and whether variables are continuous or discrete (e.g. whether there is reason to believe that the model is insensitive to these choices).
Cyan and Wei, your complaints seem to be against the very idea of economics modeling; every model is "oversimplified" in the sense of neglecting relevant details. I don't know how I gave you the impression that I have "blind reliance" on my model as an "accurate representation." Perhaps you wanted a generic disclaimer, of the sort that could go on any model, that it may not exactly correspond to reality?
Hal wrote: And just as readers complain about the dryness of technical writing, so readers will complain about excessive use of equations.
I think people are complaining that there is too little, not too much, math. Robin's post gave the impression of blind reliance on an oversimplified model, which people are understandably wary of, given the recent news stories about how oversimplified risk modeling was a major cause of the current economic crisis.
Robin wrote: Wei, I did not mean a zero chance of large noise.
Robin, in that case I'm not sure what you mean. You can clear this up by giving us the model you have in mind, with noise and repetition, and its equilibrium solutions.
But the complaints are not yet articulate enough to determine their source.
Can this math-esque model be tested experimentally? How could one measure e or p in the wild? How can reducing inherently massively multidimensional quantities to two scalars be justified? Is there any reason to expect that such a reduction would yield an accurate description of what goes on in human brains to even first order?
I'm not quite as skeptical as Brian Macker, but I can't uncritically adopt this math as an accurate representation for what actually goes on in the real world. Suggestive, sure -- conclusive, no. (Grant thinks it's not intended to be realistic, but I don't see anything in the post that would prompt that inference.)
Hal, yes, one must use math excessively to credibly signal competence and rigor. But the complaints are not yet articulate enough to determine their source. It is not clear to me whether my point could have been made effectively with less math, and since it seemed I should give a fuller explanation of signaling at some point, this seemed a good opportunity.
I think what Brian and other critics are implicitly suggesting is that having equations is a sign of competence and rigor; hence, it may be expected that authors will try to use equations to signal these qualities, even when they are not as fully present as the use of equations would suggest. Just as the math in this article suggests that writers will use less emotion than they would like to, for the same reason writers will use more equations than they would like to. And just as readers complain about the dryness of technical writing, so readers will complain about excessive use of equations. Therefore it could be argued that by complaining about the equations, critics are actually validating the argument presented in the equations!
Wow, more pseudo-math coming out of economics.
Oh. Hah, somehow my brain edited the exponent out. Doh.
Paul, the first term says that a mismatch between the level of written emotion and the propaganda factor encoding the author's desire to persuade is costly.
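A minimal sketch of how such a first term might be written, assuming a quadratic mismatch cost (the exact functional form in the original post may differ):

```latex
U(e) \;=\; -(e - p)^2 \;+\; \text{(other terms)}
```

where $e$ is the chosen level of written emotion and $p$ is the propaganda factor encoding the desire to persuade. Under this form, emotion is not costly per se; rather, any gap between $e$ and $p$ is costly, and increasingly so as the gap grows.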
The first term in the utility function seems to suggest that emotion is costly for the author? Why should that be?
It's true that the separating equilibrium does not depend on the distribution of types, but which states are equilibria does depend on it. When the proportion of productive workers is high, no schooling is also an equilibrium.
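The proportion-dependence of the no-schooling (pooling) equilibrium can be checked in a toy Spence-style setup. This is a sketch under assumed functional forms, not necessarily the exact model from the earlier post: two worker types with productivities h > l, wages equal to expected productivity given the schooling signal, and employers who believe a schooled worker is productive. Pooling on "no schooling" survives when the productive type prefers the pooled wage to paying the schooling cost c to reveal itself.

```python
def pooling_no_schooling_is_equilibrium(q, h, l, c):
    """q: proportion of productive workers; h, l: productivities of the
    two types; c: schooling cost for the productive type.
    All numbers are illustrative."""
    pooled_wage = q * h + (1 - q) * l      # wage when no one schools
    separating_payoff = h - c              # deviate: school, be paid h
    return pooled_wage >= separating_payoff

h, l, c = 2.0, 1.0, 0.6
print(pooling_no_schooling_is_equilibrium(0.2, h, l, c))  # low q: False
print(pooling_no_schooling_is_equilibrium(0.9, h, l, c))  # high q: True
```

With these illustrative numbers, pooling fails when only 20% of workers are productive but holds when 90% are, matching the claim that a high proportion of productive workers makes no schooling an equilibrium as well.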
Wei, I did not mean a zero chance of large noise.
Douglas, that previous example equilibrium also depends only on the support of the distribution of types. That is standard for separating equilibria.
Grant, what's good about it? RH doesn't indicate why he bothered to write this post instead of just linking to the old one. That was a greatly superior exposition of signaling, though this one may teach it in other ways.
One thing that is valuable about this example, as a supplement to the previous one, is that here the prior enters only through its maximum, whereas the outcome of the previous model depends on the mix of productive and unproductive workers. Wei Dai and I (on another thread) were surprised at that. Models are particularly valuable for demonstrating that phenomena are possible.