Can a tiny bit of noise destroy communication?

If everyone knows that a tenth of the population dishonestly claims to observe alien spaceships, this can make it very hard for the honest alien-spaceship-observer to communicate the fact that she has actually seen an alien spaceship.

In general, if the true state of the world is not seen as much more likely than your somehow sending the corresponding false message, it’s hard to communicate the true state.

You might think there needs to be quite a bit of noise relative to true claims, or for acting on true claims to be relatively unimportant, for the signal to get drowned out. Yet it seems to me that a relatively small amount of noise could overwhelm communication, via feedback.

Suppose you have a network of people communicating one-on-one with one another. There are two possible mutually exclusive states of the world – A and B – which individuals occasionally get some info about directly. They can tell each other about info they got directly, and also about info they heard from others. Suppose that everyone likes for themselves and others to believe the truth, but they also like to say that A is true (or to suggest that it is more likely). However, making pro-A claims is a bit costly for some reason, so it’s not worthwhile if A is false. Then everyone is honest, and can trust what one another says.

Now suppose that the costs people experience from making claims about A vary among the population. In the lowest reaches of the distribution, it’s worth lying about A. So there is a small amount of noise from people falsely claiming A. Also suppose that nobody knows anyone else’s costs specifically, just the distribution that costs are drawn from.

Now when someone gives you a pro-A message, there’s a small chance that it’s false. This slightly reduces the benefits to you of passing on such pro-A messages, since the value from bringing others closer to the truth is diminished. Yet you still bear the same cost. If your costs of sending pro-A messages were already near the threshold of being too high, you will now stop sending them.

From the perspective of other people, this decreases the probability that a given message of A is truthful, because some of the honest A messages have been removed. This makes passing on messages of A even less valuable, so more people further down the spectrum of costs find it not worthwhile. And so on.

At the same time as the value of passing on A-claims declines due to their likely falsehood, it also declines due to others anticipating their falsehood and thus not listening to them. So even if you directly observe evidence of A in nature, the value of passing on such claims declines (though it is still higher than for passing on an indirect claim).

I haven’t properly modeled this, but I guess that for lots of cost distributions this soon reaches an equilibrium where everyone who still claims A honestly finds it worthwhile. But it seems that for some distributions, eventually nobody claims A honestly (though some will still say A regardless, and sometimes A will in fact be true).

In this model the source of noise was liars at the bottom of the distribution of costs. Their numbers should also change during the above process. As the value of passing on A-claims declines, the cost threshold below which it is worth lying about such claims falls too. This would offset the loss of honest senders at the top of the spectrum, and so lead to equilibrium faster. If the threshold falls below the lowest cost in the population, lying ceases altogether. If others knew that this had happened, they could trust A-claims again. This wouldn’t help them with dishonest B-claims, which could potentially be rife, depending on the model. However, people should soon lose interest in sending false B-claims, so this would be fixed in time. By then, though, it will be worth lying about A again. This is all less complicated if the initial noise is exogenous.
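The feedback loop can be captured as a fixed-point iteration on listeners’ trust in pro-A claims. Here is a minimal sketch in Python, under made-up assumptions rather than a proper model: costs have CDF F(c) = min(c, 1)², an observer of A sends a claim iff their cost is below trust × value, and the noise is a fixed exogenous fraction of false claimers (the simpler case above):

```python
def simulate_trust(noise, value=1.2, steps=500):
    """Iterate listeners' trust in pro-A claims until it settles.

    Assumptions (illustrative, not from the post): claim costs have
    CDF F(c) = min(c, 1) ** 2, an observer of A sends a claim iff
    their cost is below trust * value, and `noise` is an exogenous
    fraction of the population that falsely claims A regardless.
    """
    trust = 1.0  # initial credence that a pro-A claim is honest
    for _ in range(steps):
        honest = min(trust * value, 1.0) ** 2  # F(trust * value)
        total = honest + noise
        trust = honest / total if total > 0 else 0.0
    return trust

print(simulate_trust(0.05))  # small noise: settles near 0.95
print(simulate_trust(0.40))  # larger noise: trust unravels to ~0
```

With this particular cost distribution the iteration has no interior fixed point once the noise fraction exceeds value²/4, so a modest increase in noise flips the outcome from a high-trust equilibrium to total unraveling – consistent with the guess that which outcome you get depends on the distribution of costs.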

  • gwern0
  • Vaniver

    It seems to me that if the distribution of costs is known, then the actors can mostly figure out the true value from the distorted value. Suppose 60% of the population sees A, and 20% say A. Everyone hears 20% A, and should be able to work back to the fact that 60% saw A (in expected value), because they know about this effect.

    I wonder if there are cost distributions where the original level is not identifiable – where 40% seeing A and 60% seeing A both result in 20% saying A. It seems like that will only be true for unlikely distributions, and then only for part of those distributions.
    Another issue here is that the decision problem doesn’t just depend on the cost – it also depends on the benefit of others knowing the truth *and* the probability that your message will change their beliefs! If I have a high cost for saying “A,” such that I’d like to switch to “B,” I need to model how switching my statement will affect the beliefs of others. If the distributions are such that I could reasonably expect this feedback, then I should take it into consideration when making my decision.

    Supposing that people just care about their personal message, and that they have some costs associated with that message, means that each person will have a different probability threshold, and will not care about how their message affects others. Then, each distribution over thresholds and probability of seeing A should have an equilibrium point where no one changes their decision. (Hm, only one equilibrium? Maybe multiple, which complicates things even further. It looks like we can reduce it to one if we assume a particular updating strategy.)

  • Siddharth

    This problem is worse when it comes to real-life conspiracies, where a few people who know what’s actually going on have an explicit interest in polluting public discourse. With alien spaceships, noise generally comes from epistemologically incompetent sources, so just by being rational and articulate, you are likely to avoid being drowned out; it’s the (other) practical jokers who reduce your credibility there.

  • VV

    Human relationships form a “small world” network (the Facebook friendship graph, for instance, has an average of about four degrees of separation). This means that salient information can propagate to pretty much everybody in the world passing only through a very small number of intermediaries. Thus, unless noise per intermediary is high, it is usually not an issue.

    Moreover, human social communication is generally very redundant: there are many short paths between any two nodes in the network. Hence, liars who introduce false claims while pretending to relay claims made by others are usually easy to detect.

    Regarding the value of making a claim, if you make a notable claim that has a low prior probability (e.g. some unexpected scientifically observable phenomenon) AND you can back it up with solid evidence, you will gain a large reward.
    On the other hand, if you can’t back it up with solid evidence (e.g. the typical UFO claim), usually you will not be believed by most people and you will incur a social cost (unless your claim triggers widespread cognitive biases in a way that is salient to most people).

    I think the key point here is the claim prior probability, not the communication noise: consider the claim “I saw a giant marshmallow in the sky”.
    If you made that claim most people would be less inclined to believe you than if you claimed “I saw an alien spaceship in the sky”, because the prior probability of the former is even lower than the prior probability of the latter, even if nobody is known to falsely claim the former while lots of people are known to falsely claim the latter.

  • Jason

    This is not “noise” but “clutter” or “interference” — real signals that you are not interested in. One way to see the difference is that averaging will not help you. People don’t flip back and forth between false alien claims and lack of claims. The same people are consistently making the false claims.

    This would make the title “can a tiny bit of interference destroy communication?”, for which the answer is definitely yes (depending on what you mean by a tiny bit).

  • Dave-944

    I don’t know that your logic is airtight, as each added assumption increases the chances of an untrue or even illogical assertion. There is a greater likelihood of a mistaken belief than a lie: though there are liars, there are even more people who may hold mistaken beliefs. As for the assertion that Robin makes in his paper that the testimony of a high-status or reliable individual is extraordinary evidence, I submit the following counterexample.

    There was a rash of UFO sightings which later proved to be partly due to photographs of a paper machete model found in a man’s attic. A very well respected man and his wife whom I personally know stated that they had witnessed a UFO in a swamp, which had then taken off. He did not go about talking much about this but did go on national TV. His reputation was not ruined, nor did his account make anyone but UFO enthusiasts believe.
    He did not press the matter and later told me it must have been some sort of secret military craft. There is a nearby airbase doing research. The fact is that if the conventional paradigm is violated you will try to find a way around it, not believe it. That depends on how closely wed you are to the opinion you hold.

    • Nancy Lebovitz

      I bet it was paper mache, not a paper machete.

      I don’t usually nitpick typos, but considering the subject of this discussion…..

  • Robert Ayers

     Old saying: “Remember that one explanation for an unlikely natural occurrence is ‘All of the witnesses are lying’.”
