Distrusting Drama

Imagine someone made an unlikely claim to you, i.e., a claim to which you would have assigned a low probability.  For many kinds of unlikely claims, such as that your mother was just in a car accident, you usually just believe them.  But if the claim was suspiciously dramatic, you may well suspect the claimant had fallen prey to common human biases toward dramatic claims.  Here are four reasons for such distrust:

Incentives If a stockbroker said you should buy a certain stock because it will double in a month, you might suspect he was just lying because he gets paid a commission on each trade.  You have similar incentive reasons to suspect emails from Nigerian diplomats, and I’ll-love-you-forever promises from would-be seducers.  This doubt might be overcome for those who show clear enough evidence, or who show they actually had little to gain.

Craziness If someone told you they were abducted by aliens, talked with God, or saw a chair levitate, you might suspect a serious mental malfunction, whereby he just could not reliably see or remember what he saw.  This doubt might be overcome if he showed you reliable sight and memory, and that he was not then in some altered more susceptible state of mind (e.g., trance).  Adding more similarly qualified observers would help too.

Sloppiness If someone told you the available evidence strongly suggests a 9/11 conspiracy, or that aliens regularly visit Earth, you might suspect him of sloppy analysis.  Analyzing such evidence requires detailed attention to how consistent each theory is with each observation, and so one needs either very thorough attention to all possibilities or, more realistically, good calibration on how often one makes analysis mistakes.  You may suspect he did not correct sufficiently for unconscious human attractions to dramatic claims.  This doubt might be overcome if he showed a track record of detailed analysis of similar dramatic claims; he might be a professional accident investigator, for example.  A group of such professionals would be even more believable, if there were not similar larger groups on the other side.

Fuzziness If someone told you they invented an architecture allowing a human-level AI to be built soon, or a money system immune to financial crises, or found a grand pattern of history predicting a world war soon, and if these claims were not based on careful analysis using standard theories, but instead on new informal abstractions they are pioneering, you might suspect them of being too fuzzy, i.e., of too eagerly embracing their own new abstractions.

There are lots of ways to slice up reality, and only a few really “carve reality at the joints.”  But if you give a loose abstraction the “benefit of the doubt” for long enough, you can find yourself thinking in its terms, using it to draw what seem reasonable if tentative conclusions.  These conclusions might even be reasonable as weak tendencies, all else equal, but you may forget how much else is not equal, and how many other abstractions were available.  Here we can be biased not only toward dramatic claims, but also toward our own we-hope-someday-celebrated creations.

Some critics suspect whole professions, such as literary critics or sociologists of norms, of fooling themselves into over-reliance on certain abstractions, even after thousands of experts have used those abstractions full-time for decades.  Such critics want a clearer track record of such a profession dealing well with concrete problems, and even then critics may suspect the abstractions contributed little.  For example, Freudian therapy skeptics suspect patients just feel better after someone listens to their troubles.  How much more then should we suspect new personal abstractions that give dramatic implications, if their authors have not yet convinced relevant experts they offer new insight into less dramatic cases?

I don’t fully accept Eliezer’s AI foom estimates; I’ve explained my reasoning most recently here, here, and here.  But since we both post on disagreement meta-issues, I should discuss some of my meta-reasoning.

I try to be reluctant to disagree, but this applies most directly to an average opinion, weighted by smarts and expertise; if I agreed with Eliezer more I’d have to agree with other experts less.  His social proximity to me shouldn’t matter, except as it lets me better estimate auxiliary parameters.

But even so, if I disagree with Eliezer, I must distrust something about Eliezer’s rationality; disagreement is disrespect, after all.  So what do I distrust?  I guess I suspect that Eliezer has succumbed to the all-too-human tendency to lower one’s standards for fuzzy abstractions that lead to dramatic claims.  Yes, he has successfully resisted this temptation at other times, but then so have most who succumb to it.

Your not believing in God or Nigerian diplomats or UFOs, or your having felt the draw of dramatic beliefs but resisted them, doesn’t mean you don’t believe something else equally unsupported where you never even noticed the draw of drama.  Our intuitions in such cases are simply not trustworthy.

How are his claims “dramatic”? I could list many of their biasing features, but this seems impolite unless requested.  After all, I do overall greatly respect and like Eliezer, and am honored to co-blog with him.

  • Unknown

    Robin, did you see my wager with Eliezer? You might want to profit from his overconfidence yourself, if he is willing to make more than one such bet.

  • http://billmill.org Bill Mill

    I did recently call him dramatic, and if offense was taken, I apologize. I greatly respect Eliezer.

    I *do* think that his explanation for why people will want to revive him in the future was less than perfectly rational, and I think he was a very small bit dramatic in giving it. Instead of reasoning about why people would do so, he seemed to me to be saying “I’m going to try and make it a world where people do so, because people should want to”.

    Which is fine, but I think it’s a bit dramatic for a person who prides themselves on rationality to argue that it’s OK to rely on themselves to have an impact on such amazingly complex issues.

    So, again, if I misread him, or am just wrong, I apologize. But that part of his “why you should get cryonics” article struck me as a tiny bit dramatic.

  • http://billmill.org Bill Mill

    I also think the comment in which I stated that opinion was poorly written, and was received as much more inflammatory than it was intended, which I regret.

  • http://gov.state.ak.us burger flipper

    “this seems impolite unless requested.”
    I assume the request must come from EY himself, else you have it: I’d love to hear it.
    (but this does come from a semi-troll whose favorite post here may be the “gotta catch a plane” dialog.)

  • James Andrix

    If Robin’s abstractions are good, then Eliezer should be able to describe the foom event in economic/evolutionary terms without resorting to his abstractions, and (I think) that should convince Robin.

    If Robin’s abstractions break down in the case of a self modifying AI, Eliezer should find other examples of them breaking down that Robin already acknowledges, and that are similar in some relevant way to self modifying AI.

    Perhaps each party should outline situations in which their own abstractions don’t apply or aren’t accurate.

  • http://profile.typekey.com/halfinney/ Hal Finney

    Doesn’t Eliezer’s world view make him (Eliezer) the most important person in the world? The problem of Friendly AI is the most important problem we face, since it determines whether the future is like Heaven or Hell. And Eliezer is clearly the person who has put the most work into this specific problem, and has made the most progress.

    Having beliefs that make you yourself the most important person in the world, possibly the most important person in history, has got to be a powerful source of bias. I’m not saying such a bias can’t be overcome, but the circumstance does increase the burden of proof on him.

    • Efferan

      Sure, but don’t confuse the causality. If EY realized the FAI problem was the most pressing of all problems, and only then inserted himself into the midst of it, then this is an example of rationality, not rationalization.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    @burgerflipper: I think that Robin and I both know what my temptations to bias are; hence there’s little enough need to list them.

    @James, I think I already went into that in around as much detail as I can do. My fundamental objection to Robin’s worldview is that it doesn’t deal with what happens when agents get smarter, larger, better-designed, or even faster. So Robin’s methodology would break down if e.g. he tried to describe the effect of human intelligence on the planet, not using abstractions that he’s already fine-tuned on humans, but using only the sort of abstractions that he would have approved of using before humans came along. If you’re allowed to “predict” humans using experimental data from after the fact, that is, of course, hindsight.

    @Hal: From my perspective, I’m working on the most important problem I can find, in order to maximize my expected utility. It’s not my fault if others don’t do the same.

    Also, I keep saying this, but we’re talking about Heaven versus Null. Hell takes additional bad luck beyond that.

    We’re running a rationalist culture here, so to make something look bad, you overdramatize it, so that people will suspect its adherents of bias; we all know that (outside rationalist culture) things are overdramatized to sell them. So here we have the opponents casting the scenario in a dramatic light, and the proponents trying to make it sound less dramatic. This is something to keep in mind.

  • http://profile.typekey.com/aroneus/ Aron

    I don’t see many attempts to underdramatize the foom scenario.

  • http://gov.state.ak.us burger flipper

    I suspect I have a pretty good idea of the gist as well, but I’d love to read how he’d choose to say it.

    Changing gears, since a complete and true copy of a person is that person, and preserving brains so they might live in the future is a worthy goal, why not concentrate on creating the first unfriendly AI? (Since being first is vital.)

    Then just give it one Asimov-like rule: when you disassemble a person you must also simulate him.

    That leaves tricky questions about setting the rules of the sand box we get plunked down in. But would the sandbox rules need to be as perfect as the AI’s utility function?

  • James Andrix

    Eliezer:
    But the question is: Where does ROBIN think Robin’s abstractions break down? He thinks he’s accounted for your scenario, but he probably doesn’t think his abstractions are perfect. It should be a strong argument if you can show that your foom is in one of the regions where his abstractions break, but first he should concede those regions.

  • Z. M. Davis

    Burger flipper: “That leaves tricky questions about setting the rules of the sand box we get plunked down in. But would the sandbox rules need to be as perfect as the AI’s utility function?”

    Yes.

  • http://gov.state.ak.us burger flipper

    If fooming AI is an eventuality and the cutting edge of friendly technology consists of inspiring the next generation, it might be time for a contingency plan.

    If the AI is going to be so powerful it cannot be contained in any box, if it could use human atoms for building blocks, and if simulated me = me, maybe we should trade the atoms for a box, one we would ourselves be placed in: simulations given a simulated earth. We don’t even have to swallow the blue pill. Leave the memories so we either don’t build another AI within the AI simulation, or, if we do build one, leave the memory of the solution intact so we just live in a recursive loop.

  • http://www.physics.ucsb.edu/People/person.php3?userid=mike Mike Blume

    burger flipper: trade?

    If I choose to keep a set of ants in a glass sandbox on my desk, I do it because it amuses me. There is absolutely nothing those ants could hope to offer me which would make one iota of difference in the matter, and indeed, which I could not take from them myself.

    What on earth could we possibly trade with an unfriendly AI?

  • http://shagbark.livejournal.com Phil Goetz

    Eliezer is pretty dramatic a lot of the time. I mean, sometimes he uses King James Version grammar when speaking of himself. That’s a little disturbing.

    However, I have often observed that, in subjective fields, people who absurdly overstate their claims, like Freud and Kuhn and Lovelock and Skinner and Derrida and most famous philosophers; or make claims that are absurd to begin with, like James Tipler and (sometimes) Jerry Fodor and most of whatever famous philosophers are left; get more attention than people like, I don’t know, Spinoza, who make reasonable claims.

  • http://profile.typekey.com/sentience/ Eliezer Yudkowsky

    I mean, sometimes he uses King James Version grammar when speaking of himself.

    Not that I think that’s a bad thing, but still, I’m not entirely sure what you’re talking about here. Thou shalt give me examples.

  • http://gov.state.ak.us burger flipper

    Mike Blume, I’m simply proposing an ad hoc stipulation be built in from the get-go: when you pull me apart for the carbon atoms, plunk down a sim of me in the ant farm. Instead of us building an escape-proof box for it, let it build one for us.

    Not ideal perhaps. But if the first AI is likely to take over, the world’s leading proponent of friendly AI is in the providing-inspiration-for-the-young’ins stage, and copies/uploads/simulations of a person are that person, it seems like the best bet.

    And then EY can redirect his efforts toward creating the first AI–friendly or not.

  • http://www.physics.ucsb.edu/People/person.php3?userid=mike Mike Blume

    Ok, that makes more sense; it just seems to me like you run into massive wishing problems when you try to formulate the phrase “when you pull me apart for carbon atoms”.

  • scholar

    @Phil

    Berashith Bera Eliezer Ath Ha Amudi Va Ath Ha AI.
    In the beginning Eliezer Created the Friendly and the AI.

    Viamr Eliezer Ihi AI Vihi AI.
    And said Eliezer Let there be AI, and there was AI.

    — Overcoming Genesis, 1:1, 1:3, The AI Testament (^.^)

  • http://shagbark.livejournal.com Phil Goetz

    Eliezer: “Thou shalt give me examples.”

    Put this in a google search box:
    site:overcomingbias.com “and lo”
    site:overcomingbias.com “unto”

    I’ll retract the qualifier “when speaking of himself”, which I used because the examples I remembered were from the recent string of autobiographical posts. It seems to be a general inclination to use occasional archaic words.