5 Comments

Seems like the situation is actually slightly worse than this. You're assuming perfect info about who faces what incentives. But if there is uncertainty about this, then even more pairs get pushed to the costly-signals equilibrium, as long as the receiver's rational estimate of the *expected* harm from lies is large enough.
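
A minimal sketch of that threshold, with made-up numbers (the function name and parameters are my own, not from the post): the receiver keeps trusting cheap talk only while the probability of a misaligned sender, times the harm from a lie, stays below the cost of demanding a costly signal.

```python
# Sketch of the receiver's choice under uncertainty about the sender's
# incentives. All numbers are illustrative assumptions, not from the post.

def receiver_demands_costly_signal(p_misaligned: float,
                                   harm_if_lied_to: float,
                                   signal_cost: float) -> bool:
    """The receiver insists on a costly signal when the expected harm
    from trusting cheap talk exceeds the cost of demanding the signal."""
    expected_harm = p_misaligned * harm_if_lied_to
    return expected_harm > signal_cost

# Even a small chance that the sender is misaligned pushes the pair into
# the costly-signals equilibrium once the stakes are high enough.
print(receiver_demands_costly_signal(0.05, harm_if_lied_to=100.0, signal_cost=1.0))  # True
print(receiver_demands_costly_signal(0.05, harm_if_lied_to=10.0, signal_cost=1.0))   # False
```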

As long as you're not talking about the Popular People's Front of Judea.

Not quite how it seems to me; there's an asymmetry in who pays for signals. Divide agents into 'biased' agents, who always want the same outcome, and 'unbiased' agents, who want a different outcome depending on the state of reality. Then I think you pay for signals if you are a biased agent with an unbiased agent anywhere up the chain from you, but if you're an unbiased agent you never have to pay for signals.
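
A toy encoding of that rule (reading "up the chain" as toward the final receiver; the labels and function are my own illustration, not from the post):

```python
# Toy model of the claimed asymmetry. chain[0] is the original sender, and
# each message travels "up the chain" toward the final receiver at the end.

def pays_for_signals(chain: list[str], i: int) -> bool:
    """A biased agent pays for signals iff an unbiased agent sits anywhere
    up the chain from it; unbiased agents never pay."""
    if chain[i] == "unbiased":
        return False
    return "unbiased" in chain[i + 1:]

chain = ["biased", "unbiased", "biased", "biased"]
print([pays_for_signals(chain, i) for i in range(len(chain))])
# [True, False, False, False]: only the biased agent below an unbiased one pays
```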

Well, nowadays, the real Brian could just show his driving license. Very cheap signal.

The best signals are the ones that cost nothing to the honest party but are entirely out of reach of the dishonest parties. For example, a very intelligent person has a track record of solutions to complicated problems, some of which can be straightforwardly ranked (inventing something that others were also trying to invent). The cost of this "signalling" is essentially zero or even negative, as such a person gets paid.

The unintelligent person faces a much worse situation. How do you signal that your project will succeed? You can diss a large body of typical projects and point at flaws which your own project in fact shares. This is not very effective, and it can be very expensive, but it can convince some people.

If you are unfamiliar with this technique, you may assume that the party in question must be able to generate a project which would pass their own critique. But generating such a project is very taxing on intelligence: you may know full well the majority of reasons that make a project fail, yet be entirely unable to pick, from the enormous space of possible projects, one that won't fail. A sufficiently malicious or non-self-critical agent will just generate a project that would be rejected by their own critique, and present it along with their critique of other, similar projects, in the hope that some people will assume the self-critique was automatic and involuntary.
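
A toy illustration of that verification/generation gap, with numbers I made up: checking a project against k known failure modes is cheap, but blindly generating one that passes all of them takes on the order of p^-k tries.

```python
import random

# Illustrative toy assumptions (mine, not the comment's): a project must pass
# K_CHECKS independent checks, and a random project passes each with
# probability P_PASS. Spotting a flaw is cheap; avoiding all flaws is not.
K_CHECKS, P_PASS = 20, 0.5
random.seed(0)

def critique(project: list[bool]) -> list[int]:
    """Cheap verification: list the checks this project fails."""
    return [i for i, ok in enumerate(project) if not ok]

def generate_random_project() -> list[bool]:
    """Blind generation: each check passes independently with prob P_PASS."""
    return [random.random() < P_PASS for _ in range(K_CHECKS)]

# Knowing every failure mode (the critique) doesn't help you sample a winner:
# a random project passes all checks with prob P_PASS**K_CHECKS, about 1e-6.
attempts = [generate_random_project() for _ in range(10_000)]
flawless = sum(1 for p in attempts if not critique(p))
print(f"{flawless} flawless projects out of {len(attempts)}")  # almost surely 0
```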

Is this a fair summary? If you are part of a chain of communication, you might as well get used to using costly signals, even if you actually share preferences with the people near you in the chain. Someone, somewhere else in the chain, won't share preferences, and that is all it takes to make all of you use costly signals.
