Challenges of Majoritarianism

Hal Finney does us a great service by articulating "Philosophical Majoritarianism":

On most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment. … Given that we have  … biases pushing us towards overconfidence …, compensating for these biases requires that we give substantial preference to majoritarianism and only depart from it for very strong reasons. … If I were motivated by overconfidence bias then I would think this is the last position I would want to support.  So … to the extent that the crowd disagrees with philosophical majoritarianism, accepting the principle may nevertheless be justified.

Here are some challenges Majoritarianism must eventually face:

  • Do we take a linear or geometric average of probabilities?
  • If the average seems inconsistent, do we correct for that?
  • If the average seems overconfident, do we correct for that?
  • If it has a trend, do we prefer a forecasted future average?
  • What if it is self-serving, e.g., "majorities should enslave minorities"?
  • Do we include only those alive now, or infer what past and future would think?
  • Do we include morons and babies with equal weight?
  • Do we include people who don’t seem to understand the question?
  • Do we weigh opinions by education or IQ or something else?
  • Why not include animals or robots?
  • What if the average differs from the obvious relevant experts?
  • What if the average differs from a thick speculative market?
  • Do we average what people say or infer beliefs from what they do?
  • Do we use what people do say, or what they would say if asked?
  • If answers depend on question framing, which frame do we use?
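The first challenge on the list is concrete enough to illustrate. Below is a minimal sketch, with made-up numbers, of how three people's probability estimates for the same event pool differently under a linear (arithmetic) average versus a geometric one (here the geometric pool is renormalized against the pooled complement so the result is a valid probability):

```python
import math

# Made-up probability estimates from three people for the same event.
probs = [0.9, 0.9, 0.3]

# Linear (arithmetic) average: the simple mean.
linear = sum(probs) / len(probs)

# Geometric average: take the geometric mean of the probabilities and
# of their complements, then renormalize so the two sum to one.
geo_p = math.prod(probs) ** (1 / len(probs))
geo_q = math.prod(1 - p for p in probs) ** (1 / len(probs))
geometric = geo_p / (geo_p + geo_q)

print(round(linear, 3))     # 0.7
print(round(geometric, 3))  # about 0.765
```

With these numbers the two rules already disagree: the geometric pool sits closer to the two confident estimators than the linear one does, so the choice of averaging rule is not a mere technicality.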

Added: It is not clear to me that most people disagree with Hal’s position.  That is, even if most people think that they are personally better than average at estimating truth, they should grant that an average person is better off going with the average, instead of adding in error by choosing their own varied belief.   So if they assume Hal is average, they might not disagree that Hal reduces his error by choosing an average belief. 

  • josh

    Seeing that this majoritarianism rests on past empirical observation, we should simply look at past data and see what we can add to the equation to improve the fit of our opinion function on truth.

    Boy, the general equilibrium effects of this would be disastrous if it ever caught on.

  • Obligatory Godel-esque question: how do we react to the statement “80% of people are opposed to philosophical majoritarianism”? Or does it only apply to statements less meta than that?

  • I second Byrne’s question.

  • Byrne and Eliezer, I thought Hal was clear on why he was willing to consider majoritarianism (ick, what a mouthful) itself to be an exception. But if there is a whole class of related topics, where it is not clear whether to go with the majority or with majoritarianism, yes, that seems more of a problem.

  • I’ve added a bit to the post above.

  • Carl Shulman

    The sophistication of Hal’s post is itself sufficient evidence to reject the assumption that Hal is average. Indeed, given the level of ability and epistemic humility required in order to become a philosophical majoritarian, PMs would be better off relying on the average opinions of their fellows rather than those of the population as a whole.

  • So a Sophisticated Philosophical Majoritarian tries to adopt the average opinion of all other Sophisticated Philosophical Majoritarians.

    (What’s wrong with this picture?)

  • Consider this question: What color eyes does Hal Finney have?

    I’ve never seen Hal Finney, so if I were asked to guess what color eyes Hal Finney has, I’d guess the most common eye color: brown. Most people haven’t seen Hal Finney’s eyes, so they’d also guess that Hal Finney’s eyes are brown.

    Now let’s assume that Hal Finney’s eyes are actually blue. Hal Finney sees that his eyes are blue when he looks in the mirror, and people who know Hal Finney see that his eyes are blue when they look at him. If you asked any of these people what color eyes Hal Finney has, they would answer blue. Should any of these people change their opinion because a majority of people would guess that Hal Finney has brown eyes? That would be silly.

    Is this a counterexample to “philosophical majoritarianism”?

    Maybe, maybe not. People who have seen Hal Finney’s eyes have better evidence than those who have not seen them; they can reasonably deduce that the majority of people, given this better evidence, would change their opinion. Therefore, we should change our statement of philosophical majoritarianism somewhat: we should agree with the opinions of the majority of the set of people who have analyzed enough evidence to form an independent, well-informed opinion. In this case, we should agree with the majority of people who have looked at Hal Finney’s eyes. In other cases, though, it can be a lot harder to decide who has the expertise required to make their opinion count.

  • Could it be that perhaps Hal’s proposal simply comes down to, “use science”?

    Doug’s comment:

    we should agree with the opinions of the majority of the set of people who have analyzed enough evidence to form an independent, well-informed opinion.

    sounds a lot like the kind of justification one would use for, say, accepting evolution rather than supernatural creation as the more likely explanation for the development of biological organisms on Earth.

    It doesn’t seem that there’s any need to posit a new, special, overgeneralizing, and possibly misleading term like “majoritarianism” to describe a philosophy which has the primary feature of utilizing the net processing power of knowledgeable individuals in the world. That’s what science does already — or at least, that’s what it’s supposed to do.

    And additionally, I don’t see what’s wrong with not having an opinion on a given question that comes up. If you don’t have enough data, or enough high-quality data to allow for the formation of a definitive (or at least best-fit) conclusion, it’s probably best to leave your opinion space blank on the question at hand. It’s fine to say that you think one outcome or conclusion is more likely than any other, or to assign relative probabilities to potential outcomes and conclusions based on what data you do have, but I don’t see the point of “taking a side” just for the sake of being able to say something other than “insufficient data”. If all you know is that a bunch of people think something is a certain way, but you don’t have any idea why they think that, I’d consider that data to be insufficient. Without knowing anything about where or how the people in question got their opinions, I cannot simply “decide” to agree with those opinions. I could say something like, “Well, X number of people believe this which could very well mean that the belief is accurate”, but that wouldn’t mean I could convince myself to hold that same belief. Or do you not distinguish between saying you believe something, and actually finding evidence sufficiently convincing?

    Now, if you are in the position of having to act based on coming to one conclusion or another in the short-term, whether you go with what you perceive as a “majority view” or not ought to be evaluated according to what the costs of being wrong are. It would be interesting to know what weight people place on information and opinions from particular sources when the situation is very high-stakes vs. lower-risk.

  • Carl Shulman

    I thought that the recursion/equilibrium/information cascade problem had been sufficiently belabored, of course the opinions averaged have to be object-level judgments. The members of a philosophical cabal could review the object-level evidence (including the opinions of non-cabal members as non-dispositive data points) on a particular question independently, record their answers, and then adopt the average opinion of the cabal. My claim is that those with the traits that could lead them to adopt Hal’s philosophical majoritarianism would reliably outperform the population majority if they followed this procedure.

  • michael vassar

    I definitely don’t think that an average person can improve the accuracy of their beliefs via PM because I don’t think that the processes required to discover what the majority’s belief regarding some question is are less philosophically problematic than other epistemological processes. The proposed regime doesn’t eliminate the possibility of bias, it just leads to the PM outsourcing blame for his errors from himself to the majority, or more accurately, his conception of majority opinion, which is no more constrained than his other beliefs.
    If one wishes to be absolved of blame for one’s errors, simply adopting physical determinism/chaos works much better than PM. If one wishes to act appropriately given a situation, one is stuck using reason, whether one pretends to or tries to adopt PM or not.
    Presumably, for the average person in average circumstances, the degree of acceptance of consensus that is actually average is reasonably close to what has been fitness maximizing during most of the relevant part of human evolutionary history.

  • Doug, surely Hal’s eye color would fall under his “strong reasons” exception.

    Anne, I’m not sure real science lives up to your hopes for it.

  • Good point, Vassar. Because of the nature of our political system, our most successful frauds and liars specialize in creating the impression that they represent the opinion of the majority and in influencing the opinion of the masses on pivotal issues, and many people make their living, in whole or in part, consciously or unwittingly, by helping the frauds and liars.

  • Eliezer, what’s wrong with that picture is that there is no base case to terminate the recursion. (Also, the addition of the word “sophisticated” risks the No True Scotsman fallacy.)