30 Comments

Dear James,

Re your other comments: You've convinced me that, theoretically, meta-majoritarianism may be successful at determining the truth - and that a close analogue to it (Google) works quite well.

Now I'd need to see meta-majoritarianism in action, along with some analysis as to when it works better or worse than:

1) Expert opinion, and
2) Standard majoritarianism

My guess is that it would work better than standard majoritarianism when the issue is polarized between big groups, but worse when the issue is polarized between a smaller, fanatical group and a more uncertain majority.

My guess is it would beat both expert opinion and standard majoritarianism in cases where there are degrees of expertise, and the "level" of expertise of any one person is not impossible for average people to see (say, market predictions or editors on Wikipedia).


Dear James,

Thanks for that info - it is indeed a cunning way of avoiding the closed-cult situation. And it becomes more efficient as the size of the cult goes down, which is nice.

I'm not sure how you envision a dynamic situation arising that would give rise to frequent problems

Simply that if you allow eigenvalues to move, the top two will cross occasionally, leaving you with two solutions (which may be very different). This model of meta-majoritarianism seems unable to track the fact that another solution is so close by, and may be within experimental error.

Maybe a better model would be to take the average of all the eigenvectors, weighted by the squares of their eigenvalues? This would be continuous in the data, and so would avoid such issues.
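One literal reading of this proposal can be sketched in a few lines of numpy. The restriction to a symmetric trust matrix (so the eigendecomposition is real) and the sign convention for eigenvectors are my own assumptions, since eigenvectors are only defined up to sign:

```python
import numpy as np

def blended_weights(W):
    """Average of all eigenvectors of the trust matrix W, weighted by
    the squares of their eigenvalues.

    A sketch of the proposal above, restricted to symmetric W (mutual
    trust) so the eigendecomposition is real; the sign convention
    (largest-magnitude entry made positive) is an assumption, since
    eigenvectors are only defined up to sign.
    """
    vals, vecs = np.linalg.eigh(W)
    # Fix each eigenvector's sign deterministically.
    for k in range(vecs.shape[1]):
        if vecs[np.argmax(np.abs(vecs[:, k])), k] < 0:
            vecs[:, k] = -vecs[:, k]
    weights = vals ** 2
    return vecs @ weights / weights.sum()
```

One caveat: whether this is actually continuous through an eigenvalue crossing depends on the sign convention (and, at an exact degeneracy, on the basis chosen within the eigenspace), so it may only soften the discontinuity problem rather than remove it.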


Stuart Armstrong:

Now that we've coped with some of the linear algebra considerations, I meant to reply to your other two concerns:

1. "the first is that the top results in google are based on popularity, not truthfulness or lack of bias"

2. "secondly the google algorithm is constantly getting updated, to counter manipulations or correct issues, which means that there are results of the algorithm that can be seen to be bad independently"

Re: #1, this seems like a problem for Google, but maybe not for meta-majoritarianism. Google has no way to interrogate the web about its opinions, so it can only use link-derived popularity. Surely the metric would work better if it were based on the sincere intention to express a preference. The fact that Google seems to get good results despite this limitation is encouraging.

Re: #2, at least some of the problems relating to people "gaming" Google, as I understand them, have to do with the ease with which people can set up brand new sites and domains for the sole purpose of creating a forest of links to their own sites. These links, existing only to boost page rank, not to be read, are fundamentally fake.

When we analogize the links to the expressed probability assessments of real people, it seems the parallel scam is nowhere near as easy to pull off -- you'd have to get away with creating a fake person. Thus I think meta-majoritarianism is much less susceptible to at least some forms of gaming.


Stuart Armstrong:

You are correct that these situations can occur. If I understand correctly, Google copes with the first and second problems by modifying the actual adjacency matrix to be a stochastic, primitive, irreducible matrix. (My source for info on this is this AMS column: http://www.ams.org/featurecolumn/archive/pagerank.html).

The technique is straightforward: any dangling page is implicitly considered to link to every other page, the rows are normalised to sum to 1, and the resulting stochastic matrix is multiplied by some scalar α and added to (1 - α) times the matrix whose entries are all 1/n. The article notes that Google selected .85 as a value of α, somewhat arbitrarily. By ensuring that the result is a stochastic matrix with every entry positive, the problems you mention cannot arise.
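The construction might be sketched in numpy roughly as follows (α = 0.85 as in the article; the function names and the power-iteration tolerance are my own choices, not anything from the AMS column):

```python
import numpy as np

def google_matrix(A, alpha=0.85):
    """Turn a raw link matrix A (A[i, j] = 1 if page i links to page j)
    into a stochastic matrix with every entry strictly positive."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    # Dangling pages are implicitly considered to link to every page.
    A[A.sum(axis=1) == 0] = 1.0
    # Normalise rows so each sums to 1 (a stochastic matrix S).
    S = A / A.sum(axis=1, keepdims=True)
    # Blend: alpha * S + (1 - alpha) * the uniform matrix (entries 1/n).
    return alpha * S + (1 - alpha) * np.ones((n, n)) / n

def pagerank(A, alpha=0.85, tol=1e-12):
    """Dominant left eigenvector of the Google matrix, by power iteration.
    Positivity of every entry guarantees a unique fixed point."""
    G = google_matrix(A, alpha)
    v = np.ones(G.shape[0]) / G.shape[0]
    while True:
        v_next = v @ G
        if np.abs(v_next - v).sum() < tol:
            return v_next
        v = v_next
```

Because every entry of the blended matrix is positive, the Perron-Frobenius theorem applies directly: the eigenvalue 1 is simple, so the reducibility and multiple-solution problems cannot arise.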

I'm not sure how you envision a dynamic situation arising that would give rise to frequent problems, but it seems something like the Google approach could be applied fresh every time, unless I'm missing something.


Mathematical oddities of this method (as I understand it):

1) If two people, i and j, trust each other totally, and at least one other person trusts them a bit (even to the tiniest amount), then their choices will determine the entire system (mathematically, w<sub>i</sub> = w<sub>j</sub> = 1 with all the other w<sub>k</sub>'s zero is the only eigenvector of W with the (maximal) eigenvalue 1). Generalising to a larger group of closed fanatics yields the same result.

2) If the population splits into two groups that refuse to trust each other at all, then there are two solutions to W*w = w (technically, a space spanned by these two solutions), reflecting the opinions of each of the two groups. If one person in one group starts trusting anyone in the other by the tiniest amount, his group's solution collapses, and the other group's opinions dominate totally.

Adding a small number of "moderates" who trust both groups, and are somewhat trusted by both of them, doesn't change this result much - the solution is vulnerable to dramatic change from small shifts in cross-trust.

3) Away from situations where the +1 eigenspace of W has dimension greater than one, w is continuous in the initial data. So we can construct rather more general situations where problem 1) occurs. Problem 2) happens because we are close to a discontinuity, but the space of discontinuities is small. However, if the W<sub>ij</sub> are changing - if it's a process - then hitting a situation like 2) is very likely.

Using the meta-meta-meta...-respect matrices may alleviate some of these problems, but they may just create their own (the discontinuity and closed-group issues persist in meta-matrices).

The Google model has some way of integrating the amount of trust that everyone puts in you, not just the amount of trust that people you trust put in you and each other. That is lacking here. And even if it weren't, this is still much more a measure of popularity than of truth - a weighted popularity, yes, but whether the weighting gives you anything better than standard majoritarianism, I don't know.
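Oddity 1) is easy to see numerically. As a sketch (my own formalisation, not necessarily the exact model under discussion): take W row-stochastic, with W[i][j] the fraction of his trust that person i places in person j, and let the weights w be the left eigenvector of W with eigenvalue 1, PageRank-style. A closed pair who trust only each other soak up all the weight as soon as anyone else trusts them even slightly:

```python
import numpy as np

def trust_weights(W):
    """Left eigenvector of the row-stochastic trust matrix W with
    eigenvalue 1, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(W.T)
    w = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return w / w.sum()

# Persons 0 and 1 trust only each other; person 2 keeps 90% of his
# trust for himself and gives the pair a mere 5% each.
W = np.array([
    [0.0,  1.0,  0.0],
    [1.0,  0.0,  0.0],
    [0.05, 0.05, 0.9],
])
w = trust_weights(W)  # ≈ [0.5, 0.5, 0] - the closed pair gets all the weight
```

However small person 2's leakage of trust to the pair, his own weight in the fixed point is exactly zero, which is the pathology described above.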


How do you feel about Google search results? That's your chance to see this in action.

I love the Google search results! I think their algorithm is very impressive. However, three quibbles: the first is that the top results in google are based on popularity, not truthfulness or lack of bias; secondly the google algorithm is constantly getting updated, to counter manipulations or correct issues, which means that there are results of the algorithm that can be seen to be bad independently; and lastly, I don't think that this method is exactly the same as the google algorithm, at least as I understand it (see my next post).


Stuart Armstrong:

How do you feel about Google search results? That's your chance to see this in action.


Stuart, under this scheme, if you think the weight someone gives is biased, you can give that person less meta weight about that weight-giving judgment.

Yes, but that either just reflects my own prejudices and biases, or some semi-objective measurement of worth. If we have the second, we don't need this whole setup. This feels like standard majoritarianism with an extra layer of bias on top. Standard majoritarianism at least has the advantage that it sometimes works - meta-majoritarianism may well be better, or may be much worse, at determining the truth. I'd need to see it in action to get a feel for it.


Re: citation studies -- I believe the theory behind that works out identically to the way Google computes page rank: by taking the eigenvector of the connection matrix.


This sounds a lot like the sort of thing that is done in studies of citations of journals when "impact-adjusted" weights are used for citations, which amounts to weighting more heavily citations that appear in more heavily cited journals. This can of course lead to a certain sort of involuted self-prophesying, which may or may not lead one to unbiased truth, if the views of the leading journals are themselves biased. Similar things go on in the whole process of Google site rankings - highly imperfect, if one way to go.


Has this ever actually been tried out?


Stuart, under this scheme, if you think the weight someone gives is biased, you can give that person less meta weight about that weight-giving judgment.


I wonder if this technique would lead to much greater stability in artistic preferences than exists in the world of pop-culture and fads. It would be a good proxy for objectivity and would, if answers were honest, tell us what we should be trying to do if we wish to develop more refined artistic tastes.


I've got a few issues with this meta-majoritarianism - it sounded great at first, but big problems seem to appear upon analysis:

-Would not the amount of respect you give to others be strongly biased by how much they agree with you? This seems as if it might be worse than simple majoritarianism at actually finding out the truth - it may just make you more confident of your own opinions, without shifting them much.

-Moreover, the traditional majoritarians will just assign equal value to everyone, the extremely self-confident will assign weights precisely to the extent that people agree with them, cultists will give their leader a 1 and 0s to all the rest - and everyone will stay happily within their own systems. (I know that the extremely self-confident will come unstuck if the number of issues to look at is much greater than the number of people in the system, but that probably won't be a problem in the real world.)

-If you are allowed to update your estimate of other people's reliability - what procedure could you use? You could use some pretty objective measure of reliability - but objective measures of reliability are precisely what we need to assume do not exist, if we want to embrace meta-majoritarianism. If you started out with a bad respect distribution, it seems you can't correct it by meta-majoritarian methods.


This weight for each person seems very close to how Google calculates page rank for any particular page. However, differing from the consensus procedure you describe, I think the top page returned for a Google search is simply the highest ranked page that fits the search terms. This would be a little like simply asking the top ranked person for a topic (in the consensus creation scheme you describe) to give her or his view, rather than using the weightings to take a vote.
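The contrast between the two uses of the ranking can be made concrete with toy numbers (both the weights and the probability estimates below are made up for illustration):

```python
import numpy as np

# Hypothetical PageRank-style weights for four people, and each
# person's probability estimate for some yes/no question.
w = np.array([0.4, 0.3, 0.2, 0.1])
p = np.array([0.9, 0.2, 0.3, 0.1])

# "Return the top page": just report the highest-weighted person's view.
top_ranked_view = p[np.argmax(w)]  # 0.9

# The consensus scheme: a vote weighted by everyone's rank.
weighted_vote = float(w @ p)       # ≈ 0.49
```

The two procedures can disagree sharply, as here, whenever the top-ranked person is an outlier relative to the weighted crowd.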

