6 Comments

Your hypothesis seems plausible, and it should be straightforward to model in the framework I've given here. Anyone want to give it a try?

A possible extension involves multiple issues, where changes in weightings get transferred across issues. In such a set-up you should be conformist on issues you care less about and non-conformist on those you care more about, all else equal. You gather weighting/reputation through your conformism on the former issues and spend it through your non-conformism on the latter.
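
One way this extension might be formalized, purely as an illustration and not anything from the post (the notation x_ik, p_ik, c_ik, w_i, lambda is all mine): agent i takes position x_ik on issue k, p_ik is their preferred point, c_ik is how much they care, and w_i is an influence weight that falls with total deviation from the consensus.

    \[
      m_k = \frac{\sum_i w_i\, x_{ik}}{\sum_i w_i},
      \qquad
      w_i = \exp\!\Big(-\lambda \sum_k \lvert x_{ik} - m_k \rvert\Big),
      \qquad
      u_i = -\sum_k c_{ik}\,\big(m_k - p_{ik}\big)^2 .
    \]

Under a payoff like u_i, conforming on low-c_ik issues is cheap and preserves the weight w_i that lets deviation on the high-c_ik issues pull m_k further, which is the gather-and-spend pattern described above.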

Cf. "Spend your weirdness points wisely".

https://www.lesswrong.com/p...

Also, didn't you say somewhere that you don't have a view on most issues? That seems to fit with this.

I'm not modeling norm formation or coalition politics. I would of course be interested to see models of that sort applied to this situation.

I only calculated a Nash equilibrium. It may satisfy other properties, but I'm not making any claims about that.
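
For reference, the textbook condition that claim asserts, with u_i and strategies s_i standing in as placeholders for whatever payoffs and choices the post's model actually uses (plain Nash, with no claim about refinements such as coalition-proofness):

    \[
      u_i\big(s_i^{*}, s_{-i}^{*}\big) \;\ge\; u_i\big(s_i, s_{-i}^{*}\big)
      \quad \text{for every agent } i \text{ and every alternative strategy } s_i .
    \]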

If human cognition evolved to follow norms, or more precisely to select which of a competing set of norms to follow, it's unclear that a weighted sum is a good model of how norms evolve. That is, most agents default to m without any reflection on their personal preferences, and the way things evolve is that agents latch on to a new m' proposed by a particular faction.

So your point may be valid: non-conformists have more influence than conformists. But if the mechanism is that the influential non-conformist is the one who wins at coalition politics by getting followers to believe in a new m', it's not clear your math is modelling what's going on.
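
A toy sketch of the contrast being drawn here, entirely my own construction rather than anything from the post (the faction size, adoption threshold, and parameters are assumptions chosen only for illustration):

    # Contrast two pictures of how a norm moves:
    # (a) the norm is an influence-weighted average of everyone's positions;
    # (b) everyone defaults to the current norm m, and the norm only flips to a
    #     proposed m_prime when a faction's influence clears a threshold.
    import random

    random.seed(0)

    N = 100
    positions = [random.gauss(0.0, 1.0) for _ in range(N)]  # private preferences
    weights = [1.0] * N                                      # influence weights

    # (a) Weighted-sum picture: the norm is whatever the weighted average says.
    norm_weighted = sum(w * x for w, x in zip(weights, positions)) / sum(weights)

    # (b) Norm-following picture: a faction of 30 agents backs m_prime; the norm
    #     flips only if their share of influence clears the (assumed) threshold.
    m, m_prime = 0.0, 1.5
    faction = set(random.sample(range(N), 30))
    THRESHOLD = 0.25  # assumed for illustration, not taken from the post

    faction_share = sum(weights[i] for i in faction) / sum(weights)
    norm_followed = m_prime if faction_share >= THRESHOLD else m

    print(f"(a) weighted-average norm: {norm_weighted:+.3f}")
    print(f"(b) faction influence share {faction_share:.2f} -> norm is now {norm_followed:+.1f}")

In (a) a lone deviant shifts the norm a little no matter what; in (b) nothing moves until a coalition clears the threshold, which is the gap this comment points at.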

When you say that this is what happens when everyone simultaneously solves, what exactly is the formal claim? Is it just a Nash equilibrium, a coalition-proof Nash equilibrium, or something else?
