
Rewatching the video (which is not available through my earlier link, but can be found here), I see it concludes with him saying, "If you really cared about truth, you would make it one of your causes, maybe your top cause". So I think you're right.


I don't interpret him that way. (Weak clue: Robin upvoted a conflicting account.) Robin provides a conditional recommendation: if you're concerned about truth, you should recognize the existence of bias and seek to overcome it.

This says nothing about how much truth we should seek. It's palpably silly to have a cause and be indifferent to its truth, but an equally warranted conclusion is to avoid having causes!

In general, Robin seems much more concerned about truth in near-mode than far-mode matters. In the latter, I would go so far as to say he is cavalier about truth. Thus, approvingly:

"How can you be so sure of your intellectual standards and your preferred interpretations of our words, so as to put at risk all this useful religious practice?"

[Yudkowsky is the opposite: serious about far-mode truth and cavalier about the near-mode variety. In my classification scheme, this makes Robin Monomaniacalist and Yudkowsky Demagogist. See "Utopianism, Demagogism, and Managerialism are left, right, and center: Patterns of opportunism and rigidity" http://tinyurl.com/7xrb9u2 ]


Robin is explicit that truth should be your first goal here.


No, that isn't Robin's thesis. That's the Philosophy of LessWrong or Michael Vassar, or something.

Robin offers advice to readers who are concerned only with the truth. His posts are usually of the form, "if you care only about the truth, if you're optimizing for the accuracy of your world model and nothing else, make sure you're mindful of such and such common errors [and here's some evidence for their existence and/or a theory for why they exist]". He occasionally implies that one has a moral imperative to subordinate one's emotions to Truth, but he never explicitly says so, and I don't think he believes it.


"Emotions are too important to survival to be left to chance.

"If emotion seems random to you, it's probably because you've never been psychoanalyzed."

Psychoanalysis allows too much room for fraud (there are too many tricks that make it easy to fit a BS explanation onto anything). When I talk about random processes in the mind, I don't mean the results are completely unpredictable. I'm talking about processes that use some form of randomness, equivalent to something like Metropolis sampling on computers. The results are only a tiny bit random (they fluctuate only slightly), but the process did use random elements to dramatically speed up calculation time versus an exhaustive, exact process.
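
To make the analogy concrete, here is a minimal Metropolis-style sketch in Python. It is purely illustrative; the function names and parameters are invented for the example, not taken from anything above.

    # Minimal Metropolis sampler: randomness decides which candidate states to
    # visit, so individual runs fluctuate slightly, yet the estimate converges
    # without exhaustively enumerating every possible state.
    import math
    import random

    def metropolis(log_p, x0, steps=10_000, step_size=1.0):
        """Draw samples from the unnormalized density exp(log_p) by random walk."""
        x, samples = x0, []
        for _ in range(steps):
            candidate = x + random.gauss(0.0, step_size)  # random proposal
            log_ratio = log_p(candidate) - log_p(x)
            # Accept with probability min(1, p(candidate) / p(x)).
            if log_ratio >= 0 or random.random() < math.exp(log_ratio):
                x = candidate
            samples.append(x)
        return samples

    # Example: estimate the mean of a standard normal without any exhaustive grid.
    draws = metropolis(lambda x: -0.5 * x * x, x0=0.0)
    print(sum(draws) / len(draws))  # close to 0, varying a little from run to run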


Oh, undoubtedly: but those emotion-like impulses would have to be designed in by someone.

Unless . . . it occurs to me that if AI develops accidentally, the AI which would stand the best chance of surviving would be the one with a "survival instinct" or whatever you wish to call it. So there would be a selective pressure, just like in biological evolution, for AIs to have a sense of self-preservation. One hopes that would include cooperation with other intelligences like us.


Gerald Edelman showed that there is random growth in the development of synapses, a process of growth and pruning, particularly early in development but persisting throughout the lifespan. But there's no evidence that randomness is involved in the experience of a particular emotion at a particular time or that randomness (even in development) has anything specific to do with emotion (as opposed to other mental processes).

Emotions are too important to survival to be left to chance. They are the result of rapid processing, it is true, and for that reason a merely statistical relation to adaptive demands is already built in.

We randomize behavior by choice in competitive situations to make our behavior unpredictable to opponents.

Emotions are actually more obviously subject to psychological determinism than most other forms of cognition.

(If emotion seems random to you, it's probably because you've never been psychoanalyzed.)


Emotions are probably created using some (pseudo)random processes, as is creativity. There really is no other way of doing them even remotely efficiently. Of course, the random component can be "hardware" based (or wetware, if you will), with no actual electrical signals carrying random numbers: for example, random growth of synapse strength between brain cells, or random signals produced by fluctuations in K and Na flows and concentrations.


>The brain CAN randomize, just not consciously. Where else would emotions and creativity come from?

I've seen empirical support for the first sentence. But randomization isn't the basic source of emotion, which is (to put it simplistically) the product of complex information processing by the prefrontal cortex (right hemisphere for negative emotions, left for positive emotions).


Added to what? Are not all preferences (completely) emotional? (Per Hume.)

But, no, I don't agree with your formula. Emotions don't merely drive behavior (arationally, you might say). They are also carriers of information "not amenable to objective calculus."

For example, you finish a seemingly congenial conversation with someone, but find yourself experiencing a humiliated rage. Although you're not consciously aware of the injury to status you suffered by some very subtle dig, the emotion informs you of its existence. You can then apply objective analysis (sometimes) to figure out what it was (and whether vengeance is warranted). But you would never have been aware of it had you not experienced the (apparently) unjustified emotion.

This process sometimes does misfire because our reactions are often partly neurotic. Still, it's real information--and an intrinsic input to rational functioning.


"Anyway, the point here is that an indifferent powerful entity is almost as bad as a malicious powerful one, since the former could wipe us out without noticing or caring, in pursuit of goals orthogonal to what we care about."

Right, I agree that that's an important point for people to keep in mind: AIs don't have to have anything against humans per se to wipe them out, just as humans have nothing against corals per se, yet we are wiping them out almost without realizing it.


Yes, "powerful enough" (e.g. to create nanobots) is the big weasel phrase there. Anyway, the point here is that an indifferent powerful entity is almost as bad as a malicious powerful one, since the former could wipe us out without noticing or caring, in pursuit of goals orthogonal to what we care about.

On a more mundane level, a good example is a privileged person who is not consciously racist or classist, but who does lots of harm to oppressed groups just by participating in an oppressive society in the default, expected way, and therefore is, albeit unconsciously, an agent of, etc., etc.


That's not the example "everyone uses". A hyper-specialized AI like that can easily be outsmarted and destroyed. Nanobots are a different matter, but those are not intelligent.


@efalken

The brain CAN randomize, just not consciously. Where else would emotions and creativity come from?

@Cambias

Well, a "superintelligent" AI might very well have emotions, or functions behaving like emotions, because those help it make decisions. Any AI definitely needs to have "passions" to be active; the more intelligent the AI becomes, the more it will be able to apply abstract "passions" to situations it wasn't designed to apply them to. The results are unpredictable.


Worries about UFAI do not depend at all on the AI not liking us. The example everyone uses is a super-intelligent AI made to build paperclips; it is so good at this that it rapidly turns all matter in the solar system, including us, into paperclips. It wasn't annoyed, didn't even know we existed, yet we are dead. What the example implies is that any powerful enough optimization process, whether it has recognizable emotions or not, is horribly dangerous to us no matter what it is optimized for, unless it is specifically optimized to be nice to us. How to implement this in the form of computer code is the Friendly AI problem, and it is currently considered open.


Interestingly, from a logical standpoint many indecisions can be rectified via a randomizing strategy: put a penalty on delay, and if the cost of the difference between two choices is less than that cost, make a statistical decision. For example, a tennis player who needs to serve down the middle or to the outside with 60-40 probabilities can key off whether the second hand is between 0 and 36 (60% of the minute) to serve one way, etc. Given two equal choices, one can always flip a coin (or pseudocoin). But randomizing isn't a human default; it's hard to do (really, impossible for a human without some sort of technology).
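
A toy version of that rule in Python (the function name, the numbers, and the 60-40 split are illustrative assumptions, not anything prescribed above):

    # If the value gap between two options is smaller than the cost of further
    # deliberation, stop deliberating and make a statistical (randomized) choice.
    import random

    def decide(value_a, value_b, delay_cost, p_a=0.6):
        """Pick option A or B; randomize when the stakes don't justify more thought."""
        if abs(value_a - value_b) < delay_cost:
            # The clock trick above: second hand in 0-36 covers 60% of the minute.
            return "A" if random.random() < p_a else "B"
        return "A" if value_a > value_b else "B"

    # A server whose two options are nearly equal in expected value:
    print(decide(value_a=0.51, value_b=0.49, delay_cost=0.05))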

I think we learn by putting emotion into our uncertain preferences and noting the results, whereas if we acted like programs and randomized in these situations, we would spend too much time evaluating trivia, or too little time evaluating our broader preferences. Emotion is proportional to an inarticulable sense of importance. Now, we could weight that too and do the math, but our neocortex isn't aware of all that information, just the emotion with some vague sense of its origin.
