Does allowing prophets, whistle-blowers, and dissidents to tell people truths they don’t want to hear help those other people or hurt them? Today I heard an excellent talk (see slides and paper) by Roland Benabou explaining how it can help or hurt, depending on the situation:
HURT: If your future is likely to be enjoyable, and if anticipating that great future gives you enough joy in the meantime, then when you come across bad news suggesting otherwise you might enjoy your life more overall by quickly looking the other way and forgetting it. Even if you later realize you are the sort of person who would forget such news, you'd still reasonably guess you had a good chance of an enjoyable future, and you'd enjoy savoring that prospect, at least for a while. Someone who forced you to pay attention to the bad news could do you a real harm.
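To make the logic concrete, here is a minimal numeric sketch with invented parameters, standing in for rather than reproducing Benabou's actual model: a "savoring" weight on anticipation, a prior chance the future is good, and utilities for each outcome.

```python
# Toy numbers for the HURT case; every parameter here is invented.
p_good = 0.9               # savored belief that the future is enjoyable
u_good, u_bad = 10.0, 0.0  # realized utility in each outcome
s = 0.5                    # weight on the joy of anticipation

# Bad news arrives and (in this toy case) the future really is bad,
# so the realized term is u_bad either way; only anticipation differs.
attend = s * u_bad + u_bad                                     # 0.0
ignore = s * (p_good * u_good + (1 - p_good) * u_bad) + u_bad  # 4.5

print(f"attend: {attend}, ignore: {ignore}")
# Ignoring wins: forcing attention to the bad news lowers total enjoyment.
```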
HELP: On the other hand, if a group of you worked together to build an enjoyable future, how hard you each worked might depend on the chances you each assigned to your efforts working out well. Given that you expected other people to avoid looking at bad news, you might find it in your interest to avoid looking too, so that you were all in an equilibrium of mutual avoidance. But for certain parameter values you might all be better off in a different equilibrium, where you all expect each other to look at bad news and change your behavior in response. In this case someone who collected bad news, saved it, and later forced you all to pay attention to what you had tried to forget could upgrade your equilibrium. This could do you all a favor, a favor you were individually not willing to do for yourselves.
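And here is a minimal sketch of the coordination logic, again with invented payoff numbers rather than Benabou's actual parameters: both "everyone ignores" and "everyone looks" are equilibria, and the looking one can leave everyone better off.

```python
# Toy symmetric game for the HELP case; payoff numbers are invented.
# My payoff depends on (my action, what everyone else does). If others
# look, effort adjusts to reality and looking pays; if others ignore,
# my lone realism doesn't fix the project and just costs me the
# comfort of anticipation.
payoff = {
    ("look",   "look"):   5.0,
    ("look",   "ignore"): 1.0,
    ("ignore", "look"):   4.0,
    ("ignore", "ignore"): 2.0,
}

def best_response(others: str) -> str:
    return max(("look", "ignore"), key=lambda a: payoff[(a, others)])

for others in ("look", "ignore"):
    print(f"if others {others}: best response is {best_response(others)}")
# if others look:   best response is look   -> all-look is an equilibrium
# if others ignore: best response is ignore -> all-ignore is an equilibrium
# All-look pays 5.0 vs. 2.0 for all-ignore: the whistle-blower who forces
# everyone to look at once can shift the group to the better equilibrium.
```

Note the role of forcing attention on everyone at the same time: no single player gains by looking alone, which is why the favor is one you were individually unwilling to do for yourselves.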
The value of truth is contingent; it depends on the details of your world and your values. It is not guaranteed. So honesty demands that my commitment to truth also be contingent.
One thing that struck me about the lecture was its similarity to David Huron's model of the enjoyment and experience of music in his Sweet Anticipation, though his was a five-stage, rather than a two-stage, (Bayesian) anticipation model. Similarly, some disagreements (up to and including those on large ethical and political questions) end up being, especially when insufficient thought is given to them, matters of which comes first, A or B (the greed of employers or the greed of employees, say). This suggests that the resulting decisions and beliefs are, or can be, 'realistic' or 'delusional' on one time scale but not on others (i.e., an inability to watch the markets long enough to see them work, or an inability to watch collective bargaining long enough to see it work, say). Putting the two together: there may be even higher-order delusions than statism or anti-statism that we cycle between, and this same model should be able to say something about them if extended by just one more set of terms, under similar reasoning (the difference being that the time frame is larger).
(tl;dr: the lecture generalized up to large numbers of people, but not as far along the time axis; its only mention of time is that beliefs can persist.)
What does this have to do with whistle-blowers? Isn't that situation explicitly framed as a conflict of interests, often principal-agent-subagent interests?