Meta Majoritarianism

Back in March, Hal Finney advocated "Majoritarianism":

In general, on most issues, the average opinion of humanity will be a better and less biased guide to the truth than my own judgment. … Given that we have so many intellectual and emotional biases pushing us towards overconfidence in our opinions, compensating for these biases requires that we give substantial preference to majoritarianism and only depart from it for very strong reasons.

Some of the challenges I raised are addressed in the 1981 book Rational Consensus in Science and Society, where Keith Lehrer and Carl Wagner outlined a position one might call "Meta Majoritarianism":

We shall present a theory of consensual probability … [as a] procedure for aggregating individual probability assignments.  … [that] involves … the computation of consensual weights assigned to each person on the basis of information people have about each other.  …  Our method for finding rational consensus rests on the fundamental assumption that members of a group have opinions about the dependability, reliability and rationality of other members of the group. … a member of the group is rationally committed to the consensual probability … once we agree that the method … is rational, we are rationally committed to the outcome. 

Lehrer and Wagner first describe a "very simple model" in which each person i assigns each other person j a non-negative, normalized weight Wij. Collecting these weights into the matrix W, one gets a consensus weight wi for each person from the matrix equation W*w = w, i.e. w is the eigenvector of W with eigenvalue 1. This consensus weight vector then gives a consensus probability as a weighted average of the individual probabilities.
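A minimal sketch of this simple model, with made-up trust weights and probabilities. Storing Wij row-wise (person i's weights in row i), the fixed point of the post's W*w = w is computed here by iterating w ← w·W from equal weights:

```python
import numpy as np

def consensus_weights(W, tol=1e-12, max_iter=10_000):
    """Consensus weights from a trust matrix.

    W[i][j] is the non-negative weight person i assigns to person j;
    each row is normalized to sum to 1.  The consensus vector w is the
    fixed point of repeated mutual re-weighting: with W stored row-wise,
    w = w @ W for the row vector w, the common row of lim W^n.
    """
    W = np.asarray(W, dtype=float)
    W = W / W.sum(axis=1, keepdims=True)   # normalize each person's weights
    w = np.full(len(W), 1.0 / len(W))      # start from equal weights
    for _ in range(max_iter):
        w_next = w @ W                     # one round of re-weighting
        if np.abs(w_next - w).max() < tol:
            break
        w = w_next
    return w

def consensus_probability(W, p):
    """Consensus probability: the w-weighted average of individual ones."""
    return float(consensus_weights(W) @ np.asarray(p, dtype=float))

# Three people; everyone (including herself) gives person 0 the most weight.
W = [[0.6, 0.2, 0.2],
     [0.5, 0.3, 0.2],
     [0.5, 0.2, 0.3]]
p = [0.9, 0.5, 0.3]                        # individual chances of rain
print(consensus_weights(W), consensus_probability(W, p))
```

When every weight is positive, as here, the iteration converges to the same unique answer from any starting point, so the "rational commitment" to the outcome does not depend on where one begins.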

This first approach implicitly assumes each person gives each other person the same weight at all meta levels of evaluation; a person is taken to be just as good at guessing rain as at guessing how good John is at guessing rain, or how good Mary is at guessing how good John is. Lehrer and Wagner also describe an "extended model" where each person can assign each other person a different weight at each meta level. Consensus weights then come from an infinite product of weight matrices, one for each meta level.
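A sketch of the extended model under one simplifying assumption of ours: beyond the supplied meta levels, the weights repeat the deepest matrix, so the infinite product can be truncated by iterating that matrix. All numbers are hypothetical, and the code aggregates probabilities directly rather than extracting the weight vector:

```python
import numpy as np

def rows(M):
    """Normalize each row of a weight matrix to sum to 1."""
    M = np.asarray(M, dtype=float)
    return M / M.sum(axis=1, keepdims=True)

def extended_consensus(levels, p, tail_iters=500):
    """Truncated sketch of the extended model's infinite product.

    levels[k][i][j] is the weight i gives j at meta level k: level-0
    weights are applied to the probabilities themselves, level-1
    weights to the level-0 aggregates, and so on.  We assume (our
    simplification) that beyond the supplied levels the weights repeat
    the deepest matrix, so the tail of the product is just iterated.
    """
    v = np.asarray(p, dtype=float)
    for W in levels:
        v = rows(W) @ v            # aggregate at this meta level
    W_deep = rows(levels[-1])
    for _ in range(tail_iters):
        v = W_deep @ v             # repeat the deepest level
    return v                       # entries converge to a common value

# Two meta levels: people weight rain-guessing and meta-judging differently.
level0 = [[0.7, 0.3], [0.4, 0.6]]  # object-level trust
level1 = [[0.5, 0.5], [0.5, 0.5]]  # meta-level trust: equal respect
p = [0.8, 0.2]
print(extended_consensus([level0, level1], p))
```

Note that with equal respect at the meta level, the two people converge on the plain average of their level-0 aggregates, even though their object-level trust was lopsided.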

This makes metaphorical, if not literal, sense.  Literally, one would also want to let weights vary by topic and time, and simple weighted averages of probabilities are just not the best way to combine the info in each person’s beliefs.  But metaphorically, it does make sense to ask not "What makes you think you are better than average?" but instead "What makes you think you are better than respected others think you are?"

  • James Wetterau

    This weight for each person seems very close to how Google calculates page rank for any particular page. However, differing from the consensus procedure you describe, I think the top page returned for a Google search is simply the highest ranked page that fits the search terms. This would be a little like simply asking the top ranked person for a topic (in the consensus creation scheme you describe) to give her or his view, rather than using the weightings to take a vote.

  • Stuart Armstrong

    I’ve got a few issues with this meta-majoritarianism – it sounded great at first, but big problems seem to appear upon analysis:

    -Would not the amount of respect you give to others be strongly biased by how much they agree with you? This seems as if it might be worse than simple majoritarianism at actually finding out the truth – it may just make you more confident of your own opinions, without shifting them much.

    -Moreover, traditional majoritarians will just assign equal value to everyone, the extremely self-confident will assign weights precisely to the extent that people agree with them, cultists will give their leader a 1 and 0s to all the rest – and everyone will stay happily within their own systems. (I know that the extremely self-confident will come unstuck if the number of issues to look at is much greater than the number of people in the system, but that probably won’t be a problem in the real world.)

    -If you are allowed to update your estimate of other people’s reliability – what procedure could you use? You could use some pretty objective measure of reliability – but objective measures of reliability are precisely what we need to assume do not exist, if we want to embrace meta-majoritarianism. If you started out with a bad respect distribution, it seems you can’t correct it by meta-majoritarian methods.

  • michael vassar

    I wonder if this technique would lead to much greater stability in artistic preferences than exists in the world of pop-culture and fads. It would be a good proxy for objectivity and would, if answers were honest, tell us what we should be trying to do if we wish to develop more refined artistic tastes.

  • Stuart, under this scheme, if you think the weight someone gives is biased, you can give that person less meta weight about that weight-giving judgment.

  • Has this ever actually been tried out?

  • This sounds a lot like the sort of thing that is done in studies of citations of journals, when “impact-adjusted” weights are used for citations, which amounts to weighting more heavily citations that appear in more heavily cited journals. This can of course lead to a certain sort of involuted self-prophesying, which may or may not lead one to unbiased truth, if the views of the leading journals are themselves biased. Similar things go on in the whole process of Google site rankings: highly imperfect, if one way to go.

  • James Wetterau

    Re: citation studies — I believe the theory behind that works out identically to the way Google computes page rank: by taking the eigenvector of the connection matrix.

  • Stuart Armstrong

    Stuart, under this scheme, if you think the weight someone gives is biased, you can give that person less meta weight about that weight-giving judgment.

    Yes, but that either just reflects my own prejudices and biases, or some semi-objective measurement of worth. If we have the second, we don’t need this whole set up. This feels like standard majoritarianism, with an extra layer of bias on top. Standard majoritarianism at least has the advantage that it sometimes works – meta-majoritarianism may well be better, or may be much worse at determining the truth. I’d need to see it in action to get a feel for it.

  • James Wetterau

    Stuart Armstrong:

    How do you feel about Google search results? That’s your chance to see this in action.

  • Stuart Armstrong

    How do you feel about Google search results? That’s your chance to see this in action.

    I love the google search results! I think their algorithm is very impressive. However, three quibbles: the first is that the top results in google are based on popularity, not truthfulness or lack of bias; secondly the google algorithm is constantly getting updated, to counter manipulations or correct issues, which means that there are results of the algorithm that can be seen to be bad independently; and lastly, I don’t think that this method is exactly the same as the google algorithm, at least as I understand it (see my next post).

  • Stuart Armstrong

    Mathematical oddities of this method (as I understand it):

    1) If two people, i and j, trust each other totally, and at least one other person trusts them a bit (even to the tiniest amount), then their choices will determine the entire system (mathematically, wi = wj = 1/2 with all the other wk‘s zero is the only normalized eigen-vector of W with the (maximal) eigenvalue 1). Generalising to a larger group of closed fanatics yields the same result.

    2) If the population splits into two groups, that refuse to trust each other at all, then there are two solutions to W*w = w (technically, a space spanned by these two solutions), reflecting the opinions of each of the two groups. If one person in one group starts trusting anyone in the other by the tiniest amount, his solution collapses, and the other group’s opinions dominate totally.

    Adding a small amount of “moderates” who trust both groups, and are somewhat trusted by both of them, doesn’t change this result much – the solution is vulnerable to dramatic change by small amounts of change in cross-trust.

    3) Away from situations where the +1 eigen-space of W has dimension more than one, w is continuous in the initial data. So we can construct rather more general situations where problem 1) occurs. 2) happens because we are close to a discontinuity, but the space of discontinuities is small. However, if the Wij are changing, if it’s a process, then hitting a situation like 2) is very likely.

    Using the meta-meta-meta…-respect matrices may alleviate some of these problems, but they may just create their own (the discontinuities and closed-group issues persist in meta-matrices).

    The google model has some way of integrating the amount of trust that everyone puts in you, not just the amount of trust that people you trust put in you and each other. This is lacking here. Even if it were present, it still is much more a measure of popularity than of truth – a weighted popularity, yes, but whether the weighting gives you anything better than standard majoritarianism, I don’t know.
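Stuart's first oddity is easy to check numerically. With a hypothetical three-person trust matrix in which a closed pair trusts only each other, the pair's consensus weights sum to one and everyone else's vanish (here the stationary vector is found directly as the left eigenvector of W):

```python
import numpy as np

# Persons 0 and 1 trust only each other; person 2 gives them a little
# trust (0.4 each) and keeps 0.2 for herself.  (Hypothetical numbers.)
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.4, 0.4, 0.2]])

# Consensus weights: the stationary row vector with w @ W = w, i.e.
# the left eigenvector of W for its maximal eigenvalue 1.
vals, vecs = np.linalg.eig(W.T)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w)   # approximately [0.5, 0.5, 0.0]: the closed pair takes all weight
```

Person 2's tiny outward trust is what drains her own weight to zero; if she trusted only herself, the +1 eigenspace would instead be two-dimensional, which is exactly the second oddity.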

  • James Wetterau

    Stuart Armstrong:

    You are correct that these situations can occur. If I understand correctly, Google copes with the first and second problems by modifying the actual adjacency matrix to be a stochastic, primitive, irreducible matrix. (My source for info on this is an AMS column.)

    The technique is straightforward: any dangling page is implicitly considered to link to every other page, and the whole matrix is multiplied by some scalar α and added to (1 – α) times the 1 matrix (all ones, normalized so rows sum to one). The article notes that Google selected .85 as a value of α, somewhat arbitrarily. By ensuring that the matrix is a stochastic matrix with every entry positive, the problems you mention cannot arise.
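The fix described above can be sketched as follows, reusing the closed-pair trust matrix from earlier in the thread (the function name and the power-iteration count are our choices; α = 0.85 follows the value cited above):

```python
import numpy as np

def damped_consensus_weights(W, alpha=0.85):
    """PageRank-style repair of a trust matrix.

    Rows are normalized (a row of zeros -- someone who trusts no one,
    the analogue of a dangling page -- becomes uniform trust), then the
    matrix is blended with the uniform all-ones matrix scaled to stay
    stochastic.  The result is positive and irreducible, so its
    stationary vector exists, is unique, and is positive everywhere.
    """
    W = np.asarray(W, dtype=float)
    n = len(W)
    sums = W.sum(axis=1, keepdims=True)
    W = np.where(sums > 0, W / np.where(sums == 0, 1.0, sums), 1.0 / n)
    G = alpha * W + (1.0 - alpha) / n      # blend in uniform trust
    w = np.full(n, 1.0 / n)
    for _ in range(1000):                  # power iteration
        w = w @ G
    return w / w.sum()

# The closed mutual-trust pair plus one mild outsider: the outsider
# no longer ends up with weight exactly zero.
W = [[0.0, 1.0, 0.0],
     [1.0, 0.0, 0.0],
     [0.4, 0.4, 0.2]]
print(damped_consensus_weights(W))
```

The blending guarantees every person retains at least a sliver of consensus weight, which is precisely how the closed-cult and split-group degeneracies are ruled out.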

    I’m not sure how you envision a dynamic situation arising that would give rise to frequent problems, but it seems something like the Google approach could be applied fresh every time, unless I’m missing something.

  • James Wetterau

    Stuart Armstrong:

    Now that we’ve coped with some of the linear algebra considerations, I meant to reply to your other two concerns:

    1. “the first is that the top results in google are based on popularity, not truthfulness or lack of bias”
    2. “secondly the google algorithm is constantly getting updated, to counter manipulations or correct issues, which means that there are results of the algorithm that can be seen to be bad independently”

    Re: #1, this seems like a problem for Google, but maybe not for meta-majoritarianism. Google has no way to interrogate the web about its opinions, so it can only use link-derived popularity. Surely the metric would work better if it were based on the sincere intention to express a preference. The fact that Google seems to get good results despite this limitation is encouraging.

    Re: #2, at least some of the problems relating to people “gaming” Google, as I understand them, have to do with the ease with which people can set up brand new sites and domains for the sole purpose of creating a forest of links to their own sites. These links, existing only to boost page rank, not to be read, are fundamentally fake.

    When we analogize the links to the expressed probability assessments of real people, it seems the parallel scam is nowhere near as easy to pull off — you’d have to get away with creating a fake person. Thus I think meta-majoritarianism is much less susceptible to at least some forms of gaming.

  • Stuart Armstrong

    Dear James,

    Thanks for that info – it is indeed a cunning way of avoiding the closed-cult situation. And it becomes more efficient as the size of the cult goes down, which is nice.

    I’m not sure how you envision a dynamic situation arising that would give rise to frequent problems

    Simply that if you allow eigen-values to move, the top two will cross occasionally, leaving you with two solutions (which may be very different). This model of meta-majoritarianism seems unable to track the fact that another solution is so close by, and may be within experimental error.

    Maybe a better model would be to have the average of all the eigenvectors, weighted by the squares of their eigen-values? This would be continuous in the data, and so would avoid such issues.

  • Stuart Armstrong

    Dear James,

    Re your other comments:
    You’ve convinced me that, theoretically, meta-majoritarianism may be successful at determining the truth – and that a close analogue to it (Google) works quite well.

    Now I’d need to see meta-majoritarianism in action, along with some analysis as to when it works better or worse than
    1) Expert opinion and
    2) Standard majoritarianism

    My guess is that it would work better than standard majoritarianism when the issue is polarized between big groups, but worse when the issue is polarized between a smaller, fanatical group and a more uncertain majority.

    My guess is it would beat both expert opinion and standard majoritarianism in cases where there are degrees of expertise, and the “level” of expertise of any one person is not impossible for average people to see (say market predictions or editors on Wikipedia).