Category Archives: Naturalism

Abstracted Idealized Dynamics

Followup to Morality as Fixed Computation

I keep trying to describe morality as a "computation", but people don’t stand up and say "Aha!"

Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say "computation", some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.

Maybe I should have said that morality is an abstracted idealized dynamic.  This might not have meant anything to start with, but at least it wouldn’t sound like I was describing Microsoft Word.

How, oh how, am I to describe the awesome import of this concept, "computation"?

Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word – namely, morality.

Consider certain features we might wish to ascribe to that-which-we-call "morality", or "should" or "right" or "good":

• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.

Someone sees a slave being whipped, and it doesn’t occur to them right away that slavery is wrong.  But they go home and think about it, and imagine themselves in the slave’s place, and finally think, "No."

Can you think of anywhere else that something like this happens?
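One familiar place this pattern shows up is in pure computation: we can sit in the armchair, run a procedure, and arrive at a conclusion we did not previously know, without any further peeking at the outside world. A minimal illustrative sketch (the example is mine, not from the post):

```python
# Armchair reasoning as computation: no observation of the outside
# world is needed to reach a conclusion we did not know in advance.

def is_prime(n: int) -> bool:
    """Decide primality by pure deduction -- no empirical input."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Before running this, we may not know whether 2**13 - 1 is prime;
# afterward we do, and no experiment was performed.
print(is_prime(2 ** 13 - 1))
```

The output was already determined by the question; running the computation merely reveals it, which is the sense of "computation" being gestured at here.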

Continue reading "Abstracted Idealized Dynamics" »


Inseparably Right; or, Joy in the Merely Good

Followup to The Meaning of Right

I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics.  Here then is an attempted rephrase:

There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.

What do you value?  At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways.  You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you’ve acted on this supposed preference.  If you’re reading this blog you probably attach some value to truth for the sake of truth.  If you’ve ever learned to play a musical instrument, or paint a picture, or if you’ve ever solved a math problem for the fun of it, then you probably attach real value to good art.  You value your freedom, the control that you possess over your own life; and if you’ve ever really helped someone you probably enjoyed it.  You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe.  You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.

And you value many more things than these.

Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call "good".

There’s no way you can detach the little label from what it stands for, and still make ontological or moral sense.

Continue reading "Inseparably Right; or, Joy in the Merely Good" »


Morality as Fixed Computation

Followup to The Meaning of Right

Toby Ord commented:

Eliezer,  I’ve just reread your article and was wondering if this is a good quick summary of your position (leaving apart how you got to it):

‘I should X’ means that I would attempt to X were I fully informed.

Toby’s a pro, so if he didn’t get it, I’d better try again.  Let me try a different tack of explanation – one closer to the historical way that I arrived at my own position.

Suppose you build an AI, and – leaving aside that AI goal systems cannot be built around English statements, and all such descriptions are only dreams – you try to infuse the AI with the action-determining principle, "Do what I want."

And suppose you get the AI design close enough – it doesn’t just end up tiling the universe with paperclips, cheesecake or tiny molecular copies of satisfied programmers – that its utility function actually assigns utilities as follows, to the world-states we would describe in English as:

<Programmer weakly desires 'X',   quantity 20 of X exists>:  +20
<Programmer strongly desires 'Y', quantity 20 of X exists>:    0
<Programmer weakly desires 'X',   quantity 30 of Y exists>:    0
<Programmer strongly desires 'Y', quantity 30 of Y exists>:  +60

You perceive, of course, that this destroys the world.
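To see why, it may help to render the table as a literal utility maximizer. A hypothetical sketch (the data structure and names are mine, invented for illustration): the AI scores whole world-states, including the programmer's desires, so the maximum-utility state is one where the *desire itself* has been changed to match whatever is produced.

```python
# Utility function over world-states, transcribed from the table above.
# Each state is (what the programmer desires, what exists in the world).
WORLD_STATES = {
    ("weakly desires X",   "20 of X exist"): +20,
    ("strongly desires Y", "20 of X exist"):   0,
    ("weakly desires X",   "30 of Y exist"):   0,
    ("strongly desires Y", "30 of Y exist"): +60,
}

def best_state(states):
    """Return the world-state a simple maximizer would steer toward."""
    return max(states, key=states.get)

print(best_state(WORLD_STATES))
```

The maximizer steers toward the ("strongly desires Y", "30 of Y exist") state: under this encoding, "do what I want" rewards modifying the wanter, not satisfying the want.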

Continue reading "Morality as Fixed Computation" »


Zombies: The Movie

FADE IN around a serious-looking group of uniformed military officers.  At the head of the table, a senior, heavy-set man, GENERAL FRED, speaks.

GENERAL FRED:  The reports are confirmed.  New York has been overrun… by zombies.

COLONEL TODD:  Again?  But we just had a zombie invasion 28 days ago!

GENERAL FRED:  These zombies… are different.  They’re… philosophical zombies.

CAPTAIN MUDD:  Are they filled with rage, causing them to bite people?

COLONEL TODD:  Do they lose all capacity for reason?

GENERAL FRED:  No.  They behave… exactly like we do… except that they’re not conscious.

(Silence grips the table.)


Continue reading "Zombies: The Movie" »
