Category Archives: Self-Deception

Cynicism in Ev-Psych (and Econ?)

Though I know more about the former than the latter, I begin to suspect that different styles of cynicism prevail in evolutionary psychology and in microeconomics.

Evolutionary psychologists are absolutely and uniformly cynical about the real reason why humans are universally wired with a chunk of complex purposeful functional circuitry X (e.g. an emotion) – we have X because it increased inclusive genetic fitness in the ancestral environment, full stop.

Evolutionary psychologists are mildly cynical about the environmental circumstances that activate and maintain an emotion.  For example, if you fall in love with the body, mind, and soul of some beautiful mate, an evolutionary psychologist would like to check up on you in ten years to see whether the degree to which you think your mate's mind is still beautiful correlates with independent judges' ratings of how physically attractive that mate still is.

But it wouldn't be conventional ev-psych cynicism to suppose that you don't really love your mate – that you were actually just attracted to their body all along, and told yourself a self-deceiving story about virtuously loving them for their mind, in order to falsely signal commitment.

Robin, on the other hand, often seems to think that this general type of cynicism is the default explanation and that anything else bears a burden of proof – why suppose an explanation that invokes a genuine virtue, when a selfish desire will do?

Of course my experience with having deep discussions with economists mostly consists of talking to Robin, but I suspect that this is at least partially reflective of a difference between the ev-psych and economic notions of parsimony.

Ev-psychers are trying to be parsimonious with how complex of an adaptation they postulate, and how cleverly complicated they are supposing natural selection to have been.

Economists… well, it's not my field, but maybe they're trying to be parsimonious by having just a few simple motives that play out in complex ways via consequentialist calculations?



BHTV: Yudkowsky / Wilkinson

Eliezer Yudkowsky and Will Wilkinson.  Due to a technical mistake – I won't say which of us made it, except that it wasn't me – the video cuts out at 47:37, but the MP3 of the full dialogue is available here.  I recall there was some good stuff at the end, too.

We talked about Obama up to 23 minutes, then it's on to rationality.  Wilkinson introduces (invents?) the phrase "good cognitive citizenship" which is a great phrase that I am totally going to steal.


Getting Nearer

Reply to A Tale Of Two Tradeoffs

I'm not comfortable with compliments of the direct, personal sort, the "Oh, you're such a nice person!" type stuff that nice people are able to say with a straight face.  Even if it would make people like me more – even if it's socially expected – I have trouble bringing myself to do it.  So, when I say that I read Robin Hanson's "Tale of Two Tradeoffs", and then realized I would spend the rest of my mortal existence typing thought processes as "Near" or "Far", I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.

Among other things, this clears up a major puzzle that's been lingering in the back of my mind for a while now.  Growing up as a rationalist, I was always telling myself to "Visualize!" or "Reason by simulation, not by analogy!" or "Use causal models, not similarity groups!"  And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B which is also good, and the like.

But later, I learned about the Outside View versus the Inside View, and that people asking "What rough class does this project fit into, and when did projects like this finish last time?" were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects.  And this didn't seem to fit very well with my injunction to "Visualize!"

So now I think I understand what this principle was actually doing – it was keeping me in Near-side mode and away from Far-side thinking.  And it's not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more pushed-on by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).



The Meta-Human Condition

Consider these points:

  1. Our entire life stories are fixed by our genetics and our childhood environment (nature and nurture, more broadly), both of which we did not choose;
  2. Our bodies are slowly growing more frail and debilitated until we die of something such as heart disease, cancer or stroke (or accident before then);
  3. Even if someone develops a cure for aging, most of the experts who have studied the issue estimate about a 50/50 chance that our species will survive this century;
  4. We live on a giant rotating planet, in an unimaginably large universe that is almost all empty space, and appears to be lifeless;
  5. The fact that we were designed by evolution to value or desire certain things doesn’t seem to justify actually valuing or desiring them;
  6. While most people believe in some sort of religion that provides cosmic context, the thousands of religions contradict each other, and all appear to be fictions created by men;
  7. While most people believe in an “afterlife,” people don’t believe that parts of a crazy person’s mind go to Heaven when he loses them; by extrapolation, all of a person’s mind doesn’t go to Heaven when you lose all of it.

My point is not to push these beliefs onto anyone who resists them.  I suspect, though, that most OB readers already think they are facts.  And I suspect that many otherwise religious people, in their heart of hearts, already believe the above too.

My point, instead, is to make an observation about the above set of facts, which I’ll call “the human condition,” in the pessimistic sense.  My observation is this: while all of the above facts can be considered an insult or injury, there is one more that goes largely unnoticed.  The final insult is that we are not supposed to talk about the human condition.  Indeed, we are not even supposed to acknowledge its existence.  I call this last insult the “Meta-Human Condition”—the salt in the wound.



Is That Your True Rejection?

It happens every now and then, that the one encounters some of my transhumanist-side beliefs – as opposed to my ideas having to do with human rationality – strange, exotic-sounding ideas like superintelligence and Friendly AI.  And the one rejects them.

If the one is called upon to explain the rejection, not uncommonly the one says,

"Why should I believe anything Yudkowsky says?  He doesn’t have a PhD!"

And occasionally someone else, hearing, says, "Oh, you should get a PhD, so that people will listen to you."  Or this advice may even be offered by the same one who disbelieved, saying, "Come back when you have a PhD."

Now there are good and bad reasons to get a PhD, but this is one of the bad ones.

There are many reasons why someone might actually have an adverse reaction to transhumanist theses.  Most are matters of pattern recognition, rather than verbal thought: the thesis matches against "strange weird idea" or "science fiction" or "end-of-the-world cult" or "overenthusiastic youth".

So immediately, at the speed of perception, the idea is rejected.  If, afterward, someone says "Why not?", this launches a search for justification.  But this search will not necessarily hit on the true reason – by "true reason" I mean not the best reason that could be offered, but rather, whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.

Instead, the search for justification hits on the justifying-sounding fact, "This speaker does not have a PhD."

But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?

And more to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say.  Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.

They would say, "Why should I believe you?  You’re just some guy with a PhD! There are lots of those.  Come back when you’re well-known in your field and tenured at a major university."



Are you dreaming?

Often when I’m dreaming I “feel” that I’m awake.  When I’m awake, however, I always “feel” that I’m awake and have no conscious doubt (except in the philosophical sense) that I’m not dreaming.

But logically when I “feel” awake I should believe there is a non-trivial chance that I’m dreaming.  This has implications for how I should behave.

For example, imagine I’m considering eating spinach or chocolate.  I like the taste of chocolate more than spinach, but recognize that spinach is healthier for me.  Let’s say that if the probability of my being awake were greater than 99%, then to maximize the expected overall quality of my life I should eat the spinach; otherwise I should pick the chocolate.

Rationally, I should probably figure that the chance of my being awake is less than 99% so I should go with the chocolate.  Yet like most other humans I don’t take into account that I might be dreaming when I “feel” awake.
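The threshold reasoning above can be sketched as a tiny expected-utility calculation.  The utility numbers here are illustrative assumptions chosen so that the crossover sits at the post's 99% figure; they are not taken from the post itself:

```python
# A toy expected-utility sketch of the spinach-vs-chocolate choice.
# Assumption: spinach's health bonus only pays off if you are actually
# awake, while chocolate's taste bonus applies whether or not you are
# dreaming.  The bonus values are invented for illustration.

def best_choice(p_awake, health_bonus=1.0, taste_bonus=0.99):
    """Return the option with the higher expected utility."""
    ev_spinach = p_awake * health_bonus       # health accrues only when awake
    ev_chocolate = taste_bonus                # taste is enjoyed either way
    return "spinach" if ev_spinach > ev_chocolate else "chocolate"
```

With these assumed numbers, `best_choice(0.995)` returns `"spinach"` but `best_choice(0.98)` returns `"chocolate"` – the decision flips exactly when confidence in being awake drops below 99%.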

Over the long run you would likely reduce your inclusive genetic fitness if, when you “feel” awake, you act as if there is a less than 100% chance of your actually being awake.  For this reason I suspect we are “genetically programmed” to never doubt that we are awake when we “feel” awake, even though it would be rational to hold such a doubt.


Ask OB: Leaving the Fold

Followup to Crisis of Faith

I thought this comment from "Jo" deserved a bump to the front page:

"So here I am having been raised in the Christian faith and trying not to freak out over the past few weeks because I’ve finally begun to wonder whether I believe things just because I was raised with them. Our family is surrounded by genuinely wonderful people who have poured their talents into us since we were teenagers, and our social structure and business rests on the tenets of what we believe. I’ve been trying to work out how I can ‘clear the decks’ and then rebuild with whatever is worth keeping, yet it’s so foundational that it will affect my marriage (to a pretty special man) and my daughters who, of course, have also been raised to walk the Christian path.

Is there anyone who’s been in this position – really, really invested in a faith and then walked away?"

Handling this kind of situation has to count as part of the art.  But I haven’t gone through anything like this.  Can anyone with experience advise Jo on what to expect, what to do, and what not to do?

Back Up and Ask Whether, Not Why

Followup to The Bottom Line

A recent conversation reminded me of this simple, important, and difficult method:

When someone asks you "Why are you doing X?",
And you don’t remember an answer previously in mind,
Do not ask yourself "Why am I doing X?".

For example, if someone asks you
"Why are you using a QWERTY keyboard?" or "Why haven’t you invested in stocks?"
and you don’t remember already considering this exact question and deciding it,
do not ask yourself "Why am I using a QWERTY keyboard?" or "Why aren’t I invested in stocks?"

Instead, try to blank your mind – maybe not a full-fledged crisis of faith, but at least try to prevent your mind from knowing the answer immediately – and ask yourself:

"Should I do X, or not?"

Should I use a QWERTY keyboard, or not?  Should I invest in stocks, or not?

When you finish considering this question, print out a traceback of the arguments that you yourself considered in order to arrive at your decision, whether that decision is to X, or not X.  Those are your only real reasons, nor is it possible to arrive at a real reason in any other way.

And this is also writing advice: because I have sometimes been approached by people who say "How do I convince people to wear green shoes?  I don’t know how to argue it," and I reply, "Ask yourself honestly whether you should wear green shoes; then make a list of which thoughts actually move you to decide one way or another; then figure out how to explain or argue them, recursing as necessary."


Trust But Don’t Verify

Aimone and Houser find we are willing to pay to avoid knowing that we have been betrayed: 

Here we report data from one-shot two-person binary investment games in which investors can choose not to know the decision of their particular trustee, and instead receive payment according to a random draw from a separate pool of decisions identical to the pool of trustees’ decisions. Note that the probability of receiving the "cooperative" outcome is identical in the two cases, and participants understand this is the case. … Our main finding is that investors systematically prefer to remain ignorant of their specific trustee’s decision. Moreover, when avoiding this information is not possible investors are substantially less likely to make trusting decisions. These results are convergent evidence that outcome-based models cannot fully explain economic decision making in strategic environments.
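The quoted claim – that the chance of the "cooperative" outcome is identical whether you learn your own trustee's decision or are paid by a blind draw from an identical pool – can be checked with a quick Monte Carlo sketch.  This simulation and its parameters are my own illustration, not from the paper:

```python
import random

def cooperative_rates(n=100_000, p_cooperate=0.6, seed=0):
    """Compare cooperative-outcome rates under the two payment rules.

    Assumed setup (invented for illustration): each trustee independently
    cooperates with probability p_cooperate.
    """
    rng = random.Random(seed)
    decisions = [rng.random() < p_cooperate for _ in range(n)]
    # Condition 1: each investor observes their own trustee's decision.
    own_rate = sum(decisions) / n
    # Condition 2: each investor is paid by a random draw from the pool.
    drawn = [rng.choice(decisions) for _ in range(n)]
    drawn_rate = sum(drawn) / n
    return own_rate, drawn_rate
```

Both rates converge on `p_cooperate`; the two payment rules yield the same outcome distribution, which is what makes the preference for ignorance puzzling for outcome-based models.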

Added 25 Oct: Can this help explain why we rely so little on incentive contracts for docs, real estate agents, etc.? 


Dark Side Epistemology

Followup to Entangled Truths, Contagious Lies

If you once tell a lie, the truth is ever after your enemy.

I have previously spoken of the notion that, the truth being entangled, lies are contagious.  If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach – well, do you know what a geologist knows about rocks?  I don’t.  But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption.  Do you know where the pebble in your driveway really came from?  Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.

What sounds like an arbitrary truth to one mind – one that could easily be replaced by a plausible lie – might be nailed down by a dozen linkages to the eyes of greater knowledge.  To a creationist, the idea that life was shaped by "intelligent design" instead of "natural selection" might sound like a sports team to cheer for.  To a biologist, plausibly arguing that an organism was intelligently designed would require lying about almost every facet of the organism.  To plausibly argue that "humans" were intelligently designed, you’d have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds…

Or you could just lie about evolutionary theory, which is the path taken by most creationists.  Instead of lying about the connected nodes in the network, they lie about the general laws governing the links.

And then to cover that up, they lie about the rules of science – like what it means to call something a "theory", or what it means for a scientist to say that they are not absolutely certain.

