
I would like to know the evolutionary explanation of why we feel envy. In economics, concepts such as fairness, reciprocity, and inequity aversion are sometimes called social preferences and have been shown to lead to seemingly non-rational behavior. Fairness and reciprocity can be understood in terms of their evolutionary value: societies in which individuals share rewards are more likely to prosper, since the risk of dying from a single bad outcome is lowered. Jealousy can also be explained as a way of keeping our mating partner from leaving us and having a child with someone else.

But envy? What is the advantage of being envious? Yet people are envious (to different degrees). If you get $100, your happiness depends on whether your neighbor gets $50 or $1,000. In extreme cases, this feeling can lead to trying to destroy your neighbor's stack of dollars. And please notice that this example is not zero-sum, so competition for resources is not a complete explanation.
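To make the inequity-aversion idea concrete, here is a minimal sketch using the standard Fehr-Schmidt formulation; the function name and the alpha/beta values are mine, chosen only for illustration:

```python
def fehr_schmidt_utility(own, other, alpha=0.8, beta=0.3):
    """Fehr-Schmidt two-player inequity-aversion utility.

    own, other: monetary payoffs. alpha penalizes disadvantageous
    inequality (envy); beta penalizes advantageous inequality (guilt).
    The parameter values here are purely illustrative.
    """
    envy = max(other - own, 0.0)
    guilt = max(own - other, 0.0)
    return own - alpha * envy - beta * guilt

# You get $100 either way, but your utility depends on the neighbor's payoff:
print(fehr_schmidt_utility(100, 50))    # 85.0   -> neighbor has less
print(fehr_schmidt_utility(100, 1000))  # -620.0 -> neighbor has more; envy dominates
```

On these (made-up) numbers the same $100 can feel like a gain or a loss depending on the neighbor's payoff, which is the puzzle being asked about.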

The same goes for Schadenfreude, which is a form of reverse-envy, but equally strange.

I would like the contributors of OB to enlighten me on this topic. Thanks!


Have you commented on this yet? Fat and long life — The “obesity” crisis is crumbling


It's not too difficult to try to demonstrate a point, fail, and fail to notice any failure.

Oh, you'll notice, if you're doing it in front of an audience of smart and critical people other than your sycophants. And if nobody at all notices, that's a whole different problem. That's not really your personal failure any more. It might even be a problem of the times.

And if someone is very intelligent, this is sometimes even easier, because it's easier to dismiss the opposition as stupid.

But one does see their replies. However one may personally assess those replies, one does receive them and file them away. One therefore develops a map of the logical territory of the claim. That map is there whatever one may feel about it.

There also may be some blog readers who don't post their ideas. If so, no one is forcing them to demonstrate anything.

Like I said, "Some of us."


It's not too difficult to try to demonstrate a point, fail, and fail to notice any failure. And if someone is very intelligent, this is sometimes even easier, because it's easier to dismiss the opposition as stupid.

There also may be some blog readers who don't post their ideas. If so, no one is forcing them to demonstrate anything.


we have a great ability to persuade ourselves that we are accepting something on the strength of the arguments, when in reality we are accepting it on authority.

Some of us. It is possible to learn the difference. For example, if you are in a class or career in which you must demonstrate your conclusions or fail, then you quickly learn to distinguish what you can demonstrate from what you can't. The latter will include things that you are taking on trust, on someone's authority. Or, if reliance on authority is part of the demonstration, then you're likely aware of this as well. You will furthermore get a sense of the weakest points of your demonstration, assuming it is not mathematically rigorous.

the argument from authority does have somewhat more than zero force for a Bayesian

Depends on what you're trying to glean. If your purpose is to understand a mathematical proof (say), then even if the person presenting is a known infallible and truthful being, his assertion of the conclusion does not teach you the proof of it. This blog isn't all that different.


Robin and I both know we are destined to do this, at some point; but there are more blog posts I wish to write first.


Constant, we have a great ability to persuade ourselves that we are accepting something on the strength of the arguments, when in reality we are accepting it on authority.

For this reason, and also because the argument from authority does have somewhat more than zero force for a Bayesian, it still would seem useful for the readers of the blog to know who is more biased and in what way.
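To illustrate the "more than zero force" claim with a toy Bayes calculation (the function name and the reliability numbers are invented for the example, not taken from anyone's actual estimates):

```python
def posterior_given_assertion(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' rule: how much an authority's assertion should shift belief in H.

    prior: P(H) before hearing the authority.
    p_assert_if_true / p_assert_if_false: how likely the authority is to
    assert H when it is true vs. when it is false (illustrative values).
    """
    numerator = p_assert_if_true * prior
    denominator = numerator + p_assert_if_false * (1 - prior)
    return numerator / denominator

# With a 50% prior and a merely somewhat-reliable authority,
# the bare assertion moves the probability from 0.50 to about 0.75:
print(posterior_given_assertion(0.5, 0.9, 0.3))  # ~0.75
```

As long as the authority is more likely to assert true claims than false ones, the assertion carries nonzero evidential weight, even though (as noted above) it teaches you nothing about the proof itself.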


It would be good for the readers of the blog to know which of these is the case, so that they could put more confidence in the one who turns out to be more trustworthy.

It is useful only if something is being accepted on their authority, as opposed to being accepted on the strength of their arguments. That they present their arguments for examination suggests that they would themselves prefer that their arguments be accepted on the merits, rather than on the authority of the speaker, so even if a reader starts out depending on their authority, he is directed by their authorial wish back to their arguments.


It would be nice to see a disagreement case study on the differences between Robin and Eliezer. This could involve their differences regarding agreeing to disagree, or their differing probability assignments for various possibilities, such as the success of cryonics, the event of a world-changing singularity within the next 30 or 40 years, or even the existence of God. Eliezer seems to believe the first two are fairly probable, while Robin seems to think them possible but quite improbable. Both think the last improbable, but Eliezer seems much more extreme in this regard, seemingly assigning it a probability more or less equivalent to the probability of the Teapot hypothesis or the Flying Spaghetti Monster hypothesis.

The differing probability assignments in fact seem to be a result of their differences regarding agreeing to disagree; Robin takes into account expert opinion on cryonics and the singularity, while Eliezer does not consider this necessary. Likewise Robin takes into account the common opinion about the existence of God, by which the hypothesis differs greatly from the Flying Spaghetti Monster hypothesis, while Eliezer considers the common opinion irrelevant.

According to this, either Robin is biased towards the opinions of others, or Eliezer is highly overconfident in many respects. It would be good for the readers of the blog to know which of these is the case, so that they could put more confidence in the one who turns out to be more trustworthy.
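One way to picture the difference described above is as a weighted pooling of one's own estimate with the outside (expert or common) opinion; this is only an illustrative aggregation rule with made-up numbers, not a description of either author's actual procedure:

```python
import math

def pool_estimates(own_p, outside_p, outside_weight):
    """Logarithmic opinion pooling: a weighted average in log-odds space.

    own_p: one's own probability; outside_p: the expert/common opinion;
    outside_weight in [0, 1]: how much deference the outside view gets.
    Purely an illustrative aggregation rule.
    """
    def logit(p):
        return math.log(p / (1 - p))
    pooled = (1 - outside_weight) * logit(own_p) + outside_weight * logit(outside_p)
    return 1 / (1 + math.exp(-pooled))

# Made-up numbers: own estimate 5%, surveyed outside opinion 30%.
print(pool_estimates(0.05, 0.30, 0.0))  # ~0.05 -> give the outside view no weight
print(pool_estimates(0.05, 0.30, 0.5))  # ~0.13 -> split the difference in log-odds
```

On this toy picture, Robin's approach corresponds to a nonzero weight on the outside view, and Eliezer's to a weight near zero.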


Tiiba,

It wasn't precise enough. And when I tried to write things to replace it, I bogged down in the slow-writer problem. "AI as a positive and negative factor in global risk" is still current, as is "KnowabilityOfFAI".

If I were secretly building a robot, and I told you, it wouldn't be a secret, now would it? So I think I'll answer "No" for this one occasion, then refuse to answer this question on all future occasions on a general policy of maintaining plausible deniability with respect to questions on which I would have a legitimate (ethical) reason for secrecy given at least one of the possible answers.


I have a question. Your homepage says:

"Most of my old writing is horrifically obsolete. Essentially you should assume that anything from 2001 or earlier was written by a different person who also happens to be named "Eliezer Yudkowsky". 2002-2003 is an iffy call."

Well, as far as I can tell, most of your important writing on AI is "old". So what does this mean? What ideas have been invalidated? What replaced them? Are you secretly building a robot?


Tom, thanks for the comment.

Richard, was there any key insight that started your process 15 years ago?


Tom Breton's post describes a process that sounds an awful lot like Rawls' notion of reflective equilibrium.


Tom slipped in (with a very nice comment).


Stuart, fifteen years ago I came to believe that the moral environment was vastly "simpler" than I had previously believed -- "simple" in the way that the laws of physics are simple. A formal system took shape and since then I have been refining it in my spare time. When I say "formal system," I do not mean I have reduced it to mathematics, but rather that it resembles mathematics more closely than most moral systems I know of. I rely on the formal system almost exclusively when making my most morally important decisions, which for me so far consist mainly of deciding what new knowledge to acquire and where I should try to contribute to technical or scientific developments. My system is far from mainstream, though.


Interesting observations and questions, Stuart. I'll make a stab at answering some of them.

1) If a formal moral system will contain situations incompatible with our moral intuitions (which is nearly certain), should we bother to build such a system?

That's not unique to morality. Formal math systems give some results incompatible with our intuitions (the Banach-Tarski paradox, for instance). ISTM most people (of those familiar with them in the first place) would say it's worth building them.

One could also argue that the alternative to building them necessarily means refusing to be consistent or logical. That could be considered embracing the very absurdity or repugnance we are supposedly avoiding.

3) If we have embraced a formal moral system, and it leads us to a morally repugnant conclusion, what should we do?

Of course, for both moral and other formal systems, we'd start by examining the logic that derived that conclusion from the system. If we find that the logic is flawed, there may not be a problem at all.

We should be a little careful not to use that as what Eliezer calls "motivated continuation", though. As we've seen, when they want to avoid a conclusion, even many bright people have a way of wading into unfamiliar logic and "finding" problems that aren't really there.

If, after we examine the logic, the flaw is still there, what then?

Drawing again on the parallel with non-moral systems, I'd say we should give weight both to the intuitions that led us to accept its axioms in the first place and to the intuitions that make us question the conclusion. How much relative weight depends on how satisfactory the system is in general. If the system has produced useful results and withstood criticism well, it should be given more weight.

We should then re-examine it from both ends.

- Should we in fact accept the conclusion we don't like? A seemingly repugnant or absurd conclusion may be the only alternative to even worse axioms. Also, if it followed rigorously from axioms we like, maybe it's not as bad as our intuition first thought it was.

- Is there a reformulation that avoids the conclusion without causing a worse problem? Careful not to short-change that condition. Comparing a new-born reformulation to a mature formal system is hazardous. All sorts of problems could be lurking and just not known yet.

Finally, in practice, how do people deal with these dilemmas?

Mostly by pretending they are being more consistent than they are.
