Hm. I assumed voting over the whole set of possible policies. So, they would all have to vote on actions like "do A1 with probability 1/n, ..., do An with probability 1/n, do X with probability 0". Which is intuitively what the Parliament should do, right? Of course, it is a little less efficient when you have to vote on *everything*, but still.
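
For concreteness, here is a minimal Python sketch of what voting over mixed policies (lotteries over the basic actions) might look like. The candidate policies, theories, credences, and utilities are all invented for illustration; this is not a procedure given in the post.

```python
# A sketch (invented numbers) of delegates voting over mixed policies,
# i.e. lotteries over the basic actions A1, A2, A3, X.

# Candidate mixed policies: probability distributions over the actions.
policies = {
    "uniform over the A's": {"A1": 1/3, "A2": 1/3, "A3": 1/3, "X": 0.0},
    "all-in on A1":         {"A1": 1.0, "A2": 0.0, "A3": 0.0, "X": 0.0},
    "all-in on X":          {"A1": 0.0, "A2": 0.0, "A3": 0.0, "X": 1.0},
}

# Credence in each moral theory, and that theory's utility for each action.
theories = {
    "theory_1": (0.45, {"A1": 10, "A2": 9, "A3": 9, "X": -100}),
    "theory_2": (0.45, {"A1": 9, "A2": 10, "A3": 9, "X": -100}),
    "theory_3": (0.10, {"A1": 0, "A2": 0, "A3": 0, "X": 5}),
}

def expected_value(policy, utilities):
    """Expected utility of a lottery over actions under one theory."""
    return sum(prob * utilities[action] for action, prob in policy.items())

# Each bloc of delegates casts its credence-weighted vote for the policy
# that maximizes its own theory's expected utility.
votes = {name: 0.0 for name in policies}
for credence, utilities in theories.values():
    favourite = max(policies, key=lambda name: expected_value(policies[name], utilities))
    votes[favourite] += credence

print(votes)  # credence-weighted vote totals for each candidate policy
```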

I've done some survey research to confirm that human societies are evaluatively diverse (http://grinfree.com/GRINSQ...). That suggests that something like this idea has several thousand years of track record, although I'd call it an "ecosystem" rather than a "parliament." As Nick implied with his reference to Neurath's ship, we can manage existing ecosystems even before we identify all of the species they contain or the "ideal" proportions between them, and the same would be true of an existing evaluative ecosystem. However, as Nick just released "Superintelligence," I've got to ask: "Can one reliably design a brand new internally balanced evaluative ecosystem, or should computer development fit into a managed, already-existing ecosystem?" The Neurath's ship argument might not float in the former case...

It seems that specifying the delegates' informational situation creates a dilemma.

As you write above, we should take the delegates to think that Parliament's decision is a stochastic variable such that the probability of the Parliament taking action A is proportional to the fraction of votes for A, to avoid giving the majority bloc absolute power.

However, your suggestion generates its own problems (as long as we take the parliament to go with the option with the most votes):

Suppose an issue the Parliament votes on involves options A1, A2, …, An and an additional option X. Suppose further that the great majority of theories in which the agent has credence agree that it is very important to perform one of A1, A2, …, An rather than X. Although these theories each have a different favourite option, it makes little difference to them which of A1, A2, …, An is performed.

Now suppose that according to an additional hypothesis in which the agent has relatively little credence, it is best to perform X.

Because the delegates who favour A1, A2, …, An do not know that what matters is getting the most votes, they see no value in coordinating themselves and concentrating their votes on one or a few options to make sure X does not end up getting the most votes. Accordingly, they will all vote for different options. X may then end up being the option with the most votes if the agent has slightly more credence in the hypothesis which favours X than in any other individual theory, despite the fact that the agent is almost sure that this option is grossly suboptimal.

This is clearly the wrong result.
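
To put invented numbers on that worry, here is a small Python sketch contrasting the plurality reading with the stochastic (proportional-chances) reading; the credences are made up for illustration.

```python
# A minimal sketch of the vote-splitting worry, with invented numbers.
# Ten theories each favour a different option A1..A10 (credence 0.09 each);
# one theory with credence 0.10 favours X, which the others consider
# disastrous.  Delegates vote sincerely for their favourite option.

n = 10
votes = {f"A{i}": 0.09 for i in range(1, n + 1)}
votes["X"] = 0.10

# If the Parliament simply enacts the option with the most votes,
# X wins despite 90% of the credence regarding it as grossly suboptimal.
plurality_winner = max(votes, key=votes.get)
print(plurality_winner)  # -> "X"

# Under the stochastic reading (probability of enacting an option is
# proportional to its vote share), X is enacted only 10% of the time.
total = sum(votes.values())
print({option: share / total for option, share in votes.items()})
```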

Oops. Just realized this doesn't matter, because the number of "representatives" is proportional to the probability you assign to the theory being correct.

I think it's a decent idea. How do you deal with distinguishing between theories? Does every version of utilitarianism with a different domain of utility count as a different theory, or as the same one?

I would think this is correct for minds in general, but it might not be true for humans. I think it's still an open question whether you can derive this in principle from neuroscience (firing rates, etc.).

I think that Hans-Hermann Hoppe's Argumentation Ethics provides a solid framework that seems pretty damned irrefutable. Or, that is, the only way to refute his ideas is by engaging in a Performative Contradiction, making all arguments against it logically false.

Yes: what does it mean for a moral system to be "correct"?

Nick, are you suggesting that the object of our vague concept is itself vague, e.g. that the vagueness here is in the territory not just on the map? Historically, it seems to me that vague maps have been believed to correspond to vague territories on many occasions but that they have always been found not to.

Build the model using fuzzy logic.

Nick, in your critique, you combine utilities derived from different utility functions.

For me, that is simply an illegal operation: there is no general way of combining utilities from different utility functions as though utility were some kind of probability. Each utility function may have its own scale and units - you can't necessarily take utilities from different utility functions and combine them.

As far as it not being clear to you how a utilitarian version of Kantianism would work: what exactly is the problem?

Utilitarianism is like a Turing machine of moral systems - if a morality is computable and finitely expressible, you can represent it by some function which describes the action to be taken - a utility function. If a morality is not computable, or requires some infinite system to express it, then in practice other moral agents can't make much use of it either. [repost, due to Typepad mangling]
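
As a rough illustration of the "morality as utility function" reading in that comment, here is a minimal Python sketch; the toy rule and the action labels are invented for the example.

```python
# A sketch of packaging a computable, finitely expressible moral rule as a
# utility function over actions: the recommended act is simply the argmax.
from typing import Callable, Iterable

UtilityFunction = Callable[[str], float]

def recommended_action(actions: Iterable[str], u: UtilityFunction) -> str:
    """Return the action the theory (encoded as a utility function) ranks highest."""
    return max(actions, key=u)

# A toy "rule-like" theory encoded as a utility function: lying is penalized.
def no_lying_theory(action: str) -> float:
    return -1.0 if action == "tell a lie" else 0.0

print(recommended_action(["tell a lie", "stay silent"], no_lying_theory))
# -> "stay silent"
```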

(Bret) "I think we should redefine 'morality' to mean 'practicality,' and then assign a goal to this practicality, before we're talking about anything but contextless universals. For example, morality is practicality with the goal of making us a stable, more intelligent species that colonizes at least 7 planets."

Thanks Bret. :s/morality/practicality/ makes the topic much more comprehensible. The colonization example is a good one. As Nick suggested, it is quite possible to leave the goal in question abstract once 'morality' is given the meaning described here.

I was confused by the initial post since, as I understand it, 'Morality' is a quite different beast to what we've been talking about here. It has rather more to do with a cooperation mechanism we adopt to raise our social status and the frequency of the alleles we carry relative to the rest of our species. 'Moral overconfidence' made no sense.

David Jinkins hit the nail on the head.

Maybe I've missed an extensive discussion of the topic, but I, like David Jinkins, am confused by the concept of a probability distribution over moral frameworks, as the "prior" seems effectively immutable. What would Solomonoff induction applied to morality look like? Presumably still heavy on nihilism. Does the resulting prior+evolution resemble current human morality better than just nihilism+evolution? If not, why even use the term "probability"? Just shorthand for "element of a set of reals that sums to 1"?

Robin: There is a more general question of how to handle uncertainty over decision theory; I suspect that there the situation we find ourselves in is on board Neurath's ship. But we can begin by addressing some more limited problem, like moral uncertainty. We may then be able to apply the same framework somewhat more generally; for example, to some uncertainty over metaethics, or some possible trade-offs between prudence and morality, etc. Yes, it seems you can always move up a level and consider a higher-level uncertainty, such as uncertainty over whether this proposed framework is correct - and we might also need to acknowledge the possibility that some disagreements at this higher level might themselves involve broadly moral issues. At this point we are not clear about whether our framework itself constitutes a kind of metaethical or meta-metaethical theory, or whether it falls entirely under the non-moral parts of rationality.

Tim: It's not so easy. Suppose you represent Kantianism as a consequentialist theory that assigns some negative infinite (perhaps surreal-valued) utility to any act of telling a lie. You then seem saddled with the implication that even if you assign Kantianism a mere 0.01% probability of being correct, it will still trump e.g. all finite utilitarian views even if you are virtually certain that some such view is correct. (Also, it is not clear that you would actually get the structure of Kantianism right in this way.)
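
A toy numerical sketch of that trumping effect, with invented numbers:

```python
# If Kantianism is modelled as assigning negative infinite utility to lying,
# then even a 0.01% credence in it swamps any finite utilitarian payoff in
# the credence-weighted expected utility of telling a lie.

p_kant = 0.0001            # credence in the "infinite penalty" theory
p_util = 1 - p_kant        # credence in a finite utilitarian theory

u_kant_lie = float("-inf")  # Kantian (dis)utility of telling a lie
u_util_lie = 1_000_000.0    # huge finite utilitarian benefit from the lie
ev_truth = 0.0              # both theories: telling the truth is neutral

ev_lie = p_kant * u_kant_lie + p_util * u_util_lie

print(ev_lie)              # -inf: the infinite term dominates
print(ev_lie < ev_truth)   # True, no matter how small p_kant is (while > 0)
```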

The need for surreal numbers in decision theory was established by Conway over three decades ago, in his study of the game of go.

You only get the surreal numbers out of the left-set/right-set construction if your set theory permits transfinite induction. Nothing in the study of finite games requires the surreal numbers.
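
For readers unfamiliar with the left-set/right-set construction, a brief sketch of the standard material (not from the comment above): each surreal number is a pair of sets of earlier-born surreals, $x = \{L \mid R\}$, with no member of $L$ greater than or equal to any member of $R$. For example,

$$0 = \{\,\mid\,\}, \qquad 1 = \{0 \mid\,\}, \qquad -1 = \{\,\mid 0\},$$
$$\omega = \{0, 1, 2, \dots \mid\,\}, \qquad \tfrac{1}{\omega} = \{0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \dots\}.$$

At finite stages only dyadic rationals are born; infinite and infinitesimal values such as $\omega$ and $1/\omega$ appear only at transfinite stages, which is why transfinite induction is needed to get the full surreal numbers.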
