Status App Concept
In my last post I suggested that we prefer institutions of this form:
Masses recognize elites, who oversee experts, who pick details.
However, our existing methods for doing that first step, masses recognizing elites, seem rather limited. One simple method is to inherit a stable social consensus on the relevant weights to give different status markers, such as wealth, birth family, test scores, endorsement of prior elites, or winning between-elite fights. If we agree on such weights, we might quickly agree on who has how much status. Especially when we pick just one of these markers as our main marker of status.
A second method is to use our ancient, more complex, opaque, and instinctual human methods of gossip, displays, fights, and other social tricks to come to a shared consensus on who is higher status. As most human communities do in fact come to rough consensus relatively quickly on relative status judgments, humans clearly do have such mechanisms, even if we don’t understand them very well. But such gossip, displays, and fights are often very expensive.
A third method is elections, wherein masses choose between elite political candidates, mostly based on the advice of other elites. While electoral systems usually only have the capacity to set the status of a tiny number of top officials, those few top officials can sometimes set the status of many others below them.
However, it isn’t clear that any of these methods actually give the masses that much influence over elite policy, or even over the relative status of elites. Nor do they seem that great at preventing coalitions of elites from installing themselves as unaccountable dynasties. Nor are they obviously great at picking the best people to be elites. Can we do better?
In this post I will outline a concept for a more fine-grained and decentralized approach: a status app. Though I haven’t figured out all the details, I’m posting this partial concept to entice you all to help me think about the remaining design issues. And then maybe implement something.
But first, let’s get clear on the relevant standards for evaluating such a proposal. Our other systems for agreeing on status induce great costs, and also suffer from strategic gaming, and a great many personal biases and agendas. Thus a new system doesn’t need to eliminate all such problems to be an improvement. It might be good enough if it just does better re some problems, and not worse re other problems. Furthermore, the first version of such a system needn’t be better than the status quo, if we can use trial and error to improve it, to eventually make a much better system.
Okay, here is my proposed concept. In a new status app, the core action is this: random triples of people X,A,B are selected, and then person X is asked which of the pair A, B they more respect, at least re elite social roles. Their answer is a “status bit”.
In my simplest reference design, the status app just fits a simple statistical model to all of the bits it has seen, a model with parameters that include each person’s current status, and the current info (vs noise) level of each person re their status bits. If these parameter estimates are made public, the world could use them as inputs to many other social processes.
For example, the app might compute a status Elo score for each person based on their “wins” vs. “losses” in each of their status bit “contests”. Each person’s info score could then be a time-weighted measure of how well their status bits predict changes to target Elo scores in the period after those bits were created.
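To make this concrete, here is a minimal sketch of such an Elo update from a single status bit, using the standard logistic Elo model; the scale and K-factor values are arbitrary illustrative choices, not part of the proposal:

```python
def expected_win(elo_a, elo_b, scale=400.0):
    """Probability that A beats B under the standard logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((elo_b - elo_a) / scale))

def update_elo(elo_a, elo_b, a_won, k=16.0):
    """Adjust both scores after one status bit 'contest'; an upset
    (a win by the lower-rated person) moves scores more than an expected win."""
    p = expected_win(elo_a, elo_b)
    delta = k * ((1.0 if a_won else 0.0) - p)
    return elo_a + delta, elo_b - delta
```

An info score could then track, per judge X, how often X’s bits agreed with the direction in which the two target Elo scores subsequently moved.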
Now let’s consider some design issues that might drive us to modify this reference design.
The first issue is which triples X,A,B to use to create status bits. Yes, one could choose them completely randomly. But to get bits that better help the app to estimate parameters, it would make sense to slant the triples toward X whose bits are more predictive, and toward A,B pairs whose status estimates are closer to each other. Also toward triples X,A,B who have closer relations and more similarities to each other. And especially toward situations where X actually sees A and B interact, or sees them act in closely related contexts, i.e., situations where status judgments are usually and naturally made.
When we can categorize the context type C for each status bit, it would make sense to have an info parameter for each such type, so that the expected error (squared) for each bit depends on both the particular person X as well as the context type C.
However, the more control that X,A,B have over which triples are evaluated when, the more they will try to game these choices, and the more inclined X might be to sacrifice their info score in order to reward or punish associates. There is thus an open question re what kinds of status judging contexts C to include in this system, and who to let cause or veto each one. Is it sufficient to just include an error adjustment parameter for each different context type C?
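One way to implement such slanted triple selection is weighted sampling, with weights set by the app rather than the participants, which reduces (though does not eliminate) the gaming concern above. A rough sketch, where `elo` and `info` are assumed to hold the model’s current estimates and `temp` is an assumed closeness scale:

```python
import math
import random

def sample_triple(names, elo, info, temp=100.0, rng=random):
    """Pick a judge X weighted by info score, then a pair A,B whose
    current Elo estimates are close to each other."""
    # Judges with higher info scores are more likely to be asked.
    x = rng.choices(names, weights=[info[n] for n in names])[0]
    rest = [n for n in names if n != x]
    a = rng.choice(rest)
    others = [n for n in rest if n != a]
    # Closer Elo estimates get exponentially higher weight.
    w = [math.exp(-abs(elo[a] - elo[n]) / temp) for n in others]
    b = rng.choices(others, weights=w)[0]
    return x, a, b
```

A fuller version might also weight by relationship closeness and by whether X has recently observed A and B in a shared context, per the slanting criteria above.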
A second issue is that even a decent statistical model of these status bits will likely have known errors, inducing participants to try to game them. This certainly happens re Chess Elo scores. If we believe that eventually sufficient data will be collected to make such errors small, so that earlier large errors are mostly temporary, then it may be sufficient to create prediction markets on future Elo estimates, and use current market prices as our best status and info estimates, instead of stat model estimates.
But if we can’t trust model errors to fade away with time, then we might instead want prediction markets that pay out based on random future status bits, using context types that are especially hard to game (as in this post). This approach forces market traders to suffer higher risks, but is safer re model estimate errors.
A third issue is who is allowed to see which status bits. One extreme is where everyone can see them all, while the opposite extreme is where only bit creators can see them. The closer that X,A,B are to each other, the more risk there is of inducing problematic behaviors by X,A,B toward each other when they can see the details of such bits. But the more people who can see more of the bit details, the better they might be able to correct model estimate errors in prediction market prices.
A fourth issue is whether we can create a decentralized implementation of such an app concept, so that we don’t have to trust some center who might lie about or distort such a system.
A fifth issue is that while asking people who they generally respect more seems a very direct way to elicit general status judgments, what we might really want as data are actions where people reveal who they actually fear (for dominance) or seek to emulate (for prestige). But could we really find a set of actions where (a) we could reliably extract relative status judgments from those acts, without too many other confounding effects, (b) the set of actions covers a wide enough range of status aspects to allow the estimation of general status, and not just one narrow aspect of status, and (c) such actions can be observed often enough to give sufficiently accurate estimates of individual status? This seems hard.
A sixth issue is whether and how to give higher status people more weight in judging relative status. Will their info estimates naturally be higher in the basic stat model, or do we want to favor them more than would be done by this analysis, and if so how?
A seventh issue is that sometimes data may be of the form of (X,Y,…) ranking many people (A,B,…,N). Can this be reduced to many bits of the form of ranking (A,B), or does a stat model need to handle this differently?
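For the pairwise-reduction option, one judge’s full ranking can be expanded mechanically into bits. A sketch:

```python
from itertools import combinations

def ranking_to_bits(judge, ranking):
    """Expand one judge's ranking (best first) into pairwise status bits,
    each asserting that the first person outranks the second."""
    return [(judge, hi, lo) for hi, lo in combinations(ranking, 2)]
```

Note one reason a stat model might need to handle this differently: bits derived from one ranking are not independent (a ranking of N people yields N(N-1)/2 correlated bits), so a model treating them as independent would overweight prolific rankers.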
An eighth issue is how to merge different kinds of status bits. That is, if X picks the higher of A and B several times re several different aspects of status, should the different kinds of status bits be analyzed separately, or would it be useful to estimate their correlations and use those to estimate each kind of status for each person?
I’ll add more issues here as I think of them.