Overcoming Bias

Moral uncertainty – towards a solution?

Nick Bostrom
Jan 1, 2009

It seems people are overconfident about their moral beliefs.  But how should one reason and act if one acknowledges uncertainty about morality – not just about applied ethics but about fundamental moral questions?  What if you don't know which moral theory is correct?

It doesn't seem you can simply plug your uncertainty into expected utility decision theory and crank the wheel, because many moral theories hold that you should not always maximize expected utility.

Even if we limit consideration to consequentialist theories, it is still hard to see how to combine them in the standard decision-theoretic framework.  For example, suppose you give X% probability to total utilitarianism and (100-X)% to average utilitarianism.  Now an action might add 5 utils to total happiness and decrease average happiness by 2 utils.  (This could happen, e.g., if you create a new happy person who is less happy than the people who already existed.)  Now what do you do, for different values of X?
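To make the difficulty concrete, here is a minimal sketch of the naive approach the example gestures at.  The function name and the specific numbers are illustrative assumptions; crucially, the sketch simply pretends the two theories' utils are on a common scale, which is exactly the inter-theoretic comparison the standard framework gives us no license to make.

```python
# Naive mixing of total and average utilitarianism, assuming (dubiously!)
# that their utils are directly comparable. Numbers match the example:
# the action adds 5 utils to total happiness and subtracts 2 from average.

def mixed_verdict(x, delta_total=5.0, delta_average=-2.0):
    """Expected 'moral value' of the action, given credence x in total
    utilitarianism and (1 - x) in average utilitarianism."""
    return x * delta_total + (1 - x) * delta_average

# If the units really were comparable, the break-even credence would be
# where x*5 + (1-x)*(-2) = 0, i.e. x = 2/7 (about 29%).
for x in (0.1, 2 / 7, 0.5, 0.9):
    print(f"X = {x:.2f}: expected value = {mixed_verdict(x):+.2f}")
```

The arithmetic is trivial; the problem is that nothing in either theory tells us what one total-utilitarian util is worth in average-utilitarian utils, so the break-even point is an artifact of an arbitrary choice of scale.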

The problem gets even more complicated if we consider not only consequentialist theories but also deontological theories, contractarian theories, virtue ethics, etc.  We might even throw various meta-ethical theories into the stew: error theory, relativism, etc.

I'm working on a paper on this together with my colleague Toby Ord.  We have some arguments against a few possible "solutions" that we think don't work.  On the positive side, we have some tricks that work for a few special cases.  Beyond that, the best we have managed so far is a kind of metaphor.  We don't think it is literally and exactly correct, and it is somewhat under-determined, but it seems to get things roughly right and it might point in the right direction:

The Parliamentary Model.  Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability.  Now imagine that each of these theories gets to send some number of delegates to The Parliament.  The number of delegates each theory gets to send is proportional to the probability of the theory.  Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting.  What you should do is act according to the decisions of this imaginary Parliament.  (Actually, we use an extra trick here: we imagine that the delegates act as if the Parliament's decision were a stochastic variable such that the probability of the Parliament taking action A is proportional to the fraction of votes for A.  This has the effect of eliminating the artificial 50% threshold that otherwise gives a majority bloc absolute power.  Yet – unbeknownst to the delegates – the Parliament always takes whatever action got the most votes: this way we avoid paying the cost of the randomization!)
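The mechanics of the final vote can be sketched as follows.  This is only a hedged illustration: the post does not specify a concrete bargaining procedure, so the code takes the post-bargaining vote shares as given inputs, and the theories, credences, and numbers are assumptions for the sake of the example.

```python
# Sketch of the Parliamentary Model's voting step. Seats are allocated
# in proportion to each theory's probability; delegates are told the
# outcome is stochastic in the vote shares, but the Parliament in fact
# always takes the plurality winner.

def parliament_decision(credences, votes, n_seats=100):
    """Tally delegate votes on one issue and return the plurality winner.

    credences: {theory: probability} -- determines seat allocation.
    votes: {theory: {option: fraction of that theory's delegates voting
        for the option}} -- encodes the (unmodeled) bargaining outcome.
    """
    tally = {}
    for theory, p in credences.items():
        seats = p * n_seats  # delegates proportional to probability
        for option, share in votes[theory].items():
            tally[option] = tally.get(option, 0.0) + seats * share
    return max(tally, key=tally.get)

# 90% egoism, 10% total utilitarianism. On a donation issue the egoist
# delegates split evenly (having bargained this issue away), so the
# small utilitarian bloc tips the vote: donate 55 seats vs. keep 45.
credences = {"egoism": 0.9, "total_utilitarianism": 0.1}
votes = {"egoism": {"donate": 0.5, "keep": 0.5},
         "total_utilitarianism": {"donate": 1.0}}
print(parliament_decision(credences, votes))
```

Note how the deterministic plurality rule captures the "extra trick": the delegates' beliefs about randomization shape their bargaining, but no randomization cost is actually paid.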

The idea here is that moral theories get more influence the more probable they are; yet even a relatively weak theory can still get its way on some issues that the theory thinks are extremely important, by sacrificing its influence on other issues that other theories deem more important.  For example, suppose you assign 10% probability to total utilitarianism and 90% to moral egoism (just to illustrate the principle).  Then the Parliament would mostly take actions that maximize egoistic satisfaction; however, it would make some concessions to utilitarianism on issues that utilitarianism thinks are especially important.  In this example, the person might donate some portion of their income to existential-risk research and otherwise live completely selfishly.

I think there might be wisdom in this model.  It avoids the dangerous and unstable extremism that would result from letting one’s current favorite moral theory completely dictate action, while still allowing the aggressive pursuit of some non-commonsensical high-leverage strategies so long as they don’t infringe too much on what other major moral theories deem centrally important.

But maybe somebody here has better ideas or suggestions for improving this model?

71 Comments
Overcoming Bias Commenter
May 15

Hm. I assumed voting over the whole set of possible policies. So, they would all have to vote on actions like "do A1 with probability 1/n, ... do An with probability 1/n, do X with probability 0". Which is intuitively what the Parliament should do, right? Of course, it is a little less efficient when you have to vote on *everything*, but.

Overcoming Bias Commenter
May 15

I've done some survey research to confirm that human societies are evaluatively diverse (http://grinfree.com/GRINSQ..... That suggests that something like this idea has several thousand years of track-record, although I'd call it an "ecosystem," rather than a "parliament." As Nick implied with his reference to Neurath's ship, we can manage existing ecosystems even before we identify all of the species they contain or the "ideal" proportions between them, and the same would be true of an existing evaluative ecosystem. However, as Nick just released "Superintelligence" I've got to ask, "Can one reliably design a brand new internally balanced evaluative ecosystem, or should computer-development fit into a managed already-existing ecosystem?" The Neurath's ship argument might not float in the former case...

69 more comments...